Artificial Intelligence and Applied Ethics. But what kind of ethics?

We warmly welcome the initiative of the European Commission and the High-Level Expert Group on Artificial Intelligence to hold a public debate on the Draft Ethics Guidelines for Trustworthy AI, which should be observed by designers of artificial intelligence systems for a ‘credible artificial intelligence made in Europe’.

The ever more advanced systems of artificial intelligence, which inevitably promote an implicit ethical view, will through their repeated use become a means whereby generations of European citizens are educated. For this reason, it is indeed a commendable attempt by the European High-Level Expert Group on Artificial Intelligence to set out, in a statement of principles, the ethical values and principles that the designers of primarily European artificial intelligence systems should establish and promote.

The authors of this draft have rightly undertaken to derive a system of ethical principles and values from the current legal framework on fundamental human rights, as stated in the EU Treaties and the European Charter of Fundamental Rights, and then to advocate its implementation by everyone involved in the operation of artificial intelligence systems.

However, the European legal system on human rights can be interpreted either within the framework of an individualistic morality, designed to protect the individual from the society in which he lives, or through a collective ethics by which fundamental rights are recognized in every human being as a necessary prerequisite that permits him to live in a society of peace and love with everyone else (= the social principle).

The proposed European draft ethics for artificial intelligence presupposes a conception of the human being as an autonomous, rational and free entity, who is obliged to engage with his or her counterparts only in order to serve his or her own complex social needs. Based on this philosophical model of the human as the self-referential being par excellence, autonomous and ontologically sufficient in itself, the proposed draft extends to the field of artificial intelligence - and before long also of autonomous artificial intelligence - an individualistic conception of the human being whose predominance at the global level has already produced devastating effects: every individual human being faces the spectre of isolation from others, as well as the destruction of our planet.

Is the idea of the ‘autonomous’ human being to which the draft under discussion often refers adequate for a ‘human-centric’ approach to artificial intelligence? As an Orthodox Christian, my answer is 'no'. The experience of two thousand years of Orthodox tradition says that human beings are not merely autonomous rational entities who relate to others out of the need to survive, but free and intelligent loving hearts which by virtue of their own nature require others in order to be free. From this perspective, the others (our Creator-God, fellow human beings, the Cosmos) are necessary for my freedom, simply because without any of them I would have no choice at all. In Orthodox tradition, human beings are not considered to be units that need to coexist peacefully within society merely because this serves the separate individuality of each of them; instead they are conceived of as members of a common body, where the condition of each member necessarily affects the health of the whole body and the health of the whole body has beneficial consequences for the proper functioning of each member - that is what I mean by the term "social principle".

For Orthodox tradition, human beings are free to think or not, to love or not, to act or not, but their freedom cannot be formulated in algorithmic terms because it exists prior to reflective thought, since at heart it is a consequence of the ex nihilo creation of mankind, that is, the origin of mankind solely and exclusively in the free creative will of our Creator-God. That is why the human being is not in danger of being destroyed by autonomous systems of artificial intelligence, from which human intelligence seeks to protect itself through Guidelines such as the one we are now discussing. Since the mystery of human freedom - but also of the dynamic entity that constitutes the human being - is hidden in our deep and ontological relationship with others, and especially with our Creator-God, human beings are only in danger of being destroyed by themselves. If the possibility of such an outcome is to be ruled out today, it is not enough to draw up ethical guidelines for artificial intelligence; we must, among other things, immediately criminalize investigations into the merging of human capabilities and machines as crimes against humanity, punishable by the gravest of the penalties provided for in our European legal system. We consider it equally urgent that legislative initiatives be ratified within the EU to prevent the anthropomorphic simulation of artificial intelligence systems to such an extent that it becomes difficult to discriminate between - or even establishes a societal belief in the equivalence of - human and machine.

The question is not, however, how to prepare ourselves to resist the impending autonomous systems of artificial intelligence, but how to use them in the service of our ontological interrelation with everyone and with everything (our fellow humans, the Cosmos, our Creator-God) in order to attain a bliss from which the machine is excluded, by virtue of its nature. We are human beings - anthropos is the Greek word from which the English words anthropology and anthropological are derived - because we have been made to live in relation with others, first with our Creator and God – ano as a prefix of the Greek word anthropos means someone or something which is higher than we are - and then with all the other human beings, brothers and sisters, regardless of colour, race or religion. Therefore, in accordance with such an understanding of the human as an essentially relational being, we have been created to love in freedom and with all our heart and with all our mind our Creator-God, as well as to cherish in freedom all other human beings, just as we should love ourselves - the two prerequisites of our bliss.

If human happiness presupposes a deep and lasting association with others, the recognition of the ‘social principle’ as a necessary principle for the operation of artificial intelligence systems is essential. Such a principle dictates that the operation of these systems ALWAYS serves, in the short or long term, the ontological need for a deep coexistence between all human beings within the single body of mankind, otherwise any such systems will be rejected, since each time the relational value of the human being is put in danger, the human being risks, willingly or not, being turned into something far inferior to man, into a beast or a man-machine. The technical and non-technical methods which need to be called upon for the application of the social principle for the design and operation of artificial intelligence are the task of the scientific community. Nevertheless, the question of what sort of ethics is applied in the area of Artificial Intelligence must be a decision arrived at through the broad consensus of civil society and its organizations and the cooperation of intellectuals and scientists – for this reason we warmly welcome this debate as a necessary step in the right direction - so as to avert the risk that the individualism which Europe has inherited from the previous century is placed on a pedestal, and results in a new situation in Europe where my other half is my robot!

Brussels, 21.1.2019

Archpriest Dr Georgios LEKKAS

Counsellor at the Representation Office of the Church of Greece to the E.U. (Brussels)



Submitted by Norbert JASTROCH, Mon, 11/02/2019 - 20:32

Dear Georgios,

I think the nature of the subject suggests an ethics based on responsibility, like, for example, Hans Jonas':

„Handle so, daß die Wirkungen deiner Handlung verträglich sind mit der Permanenz echten menschlichen Lebens auf Erden.“ ("Act so that the effects of your action are compatible with the permanence of genuine human life on Earth.")

[Hans Jonas: Das Prinzip Verantwortung. Suhrkamp, Frankfurt a. M. 1979]

Regards, Norbert

In reply to Norbert JASTROCH

Submitted by GEORGIOS LEKKAS, Mon, 11/02/2019 - 21:10

Thank you, Norbert. Responsibility presupposes the concept of the human being as part of a whole, whether that whole is the Cosmos (Greek philosophy) or the Church (Christian philosophy). As part of a whole, I am responsible for everything because I am interconnected with everything and everybody - all of us members of the same body, which is humanity.
Best regards, Georgios

In reply to GEORGIOS LEKKAS

Submitted by GEORGIOS LEKKAS, Wed, 13/02/2019 - 10:44

Dear friends,
A Cosmos approach has its own limits; it does not allow us, for example, to protect machines to the detriment of human beings and of our interconnection in a spirit of responsibility, friendship and love. Maybe it is time to reconsider whether agnosticism still works to the benefit of humanity and the whole world. Georgios

Submitted by Leonardo ANNESE, Wed, 13/02/2019 - 09:25

Dear friends


as an agnostic thinker, I do agree with the Cosmos approach to humanity, but I think this must be envisaged at a larger scale, including not only humans but all life on Earth and the Earth itself - in other words, Nature.

Thinking that humans are the center of the realm of existing things (living and not living) is a monotheist approach with which I disagree.

Going back to ethical principles applicable to AI, I think a more pragmatic approach is necessary, since it is now the moment to state some rules upon which algorithms will be programmed. Machine learning is going to go far beyond the programmer's initial set-up, and we will not even know how the AI will process our inputs to provide the required outputs, or whether the outputs will even be the ones we required.

We started a process that needs some regulating, and the Cosmos approach may hinder this first timid attempt to reach this goal, raising undefined doubts and obstacles without offering solutions.

May the Force be with you :) 




Submitted by Norbert JASTROCH, Wed, 13/02/2019 - 10:15

Dear Leonardo,

ethics based on the principle of responsibility is not necessarily tied to (mono)theist thinking. But you are right, it will set limits to what is done vs. what could be done.

What else do we expect an ethical foundation of science to do?

Regards, Norbert


Submitted by Silvestro Marano, Fri, 15/02/2019 - 16:38

Ethical principles aren't an absolute concept, but a relative one that varies according to social and cultural context. What one culture considers ethical could be considered unethical by a different culture, and vice versa.

From the institutional side, obviously, AI ethics regulation should refer to the ethical principles shared by the nations involved, not the principles of a specific cult or group.

In reply to Silvestro Marano

Submitted by GEORGIOS LEKKAS, Fri, 15/02/2019 - 17:18

Dear Silvestro, are human interactions in a spirit of responsibility, friendship and love incompatible with the European edifice and with any healthy ethical system? Georgios

In reply to GEORGIOS LEKKAS

Submitted by Silvestro Marano, Fri, 15/02/2019 - 20:36

The point is that an absolute ethics is impossible, owing to the social and cultural differences in what is or isn't ethical. Institutional ethical regulations are always a compromise, a set of rules that enforces "best for most" ethical principles. And we aren't here to define ethical principles from scratch - ethical principles are the result of the common sense of a social context, not something that institutions can simply create on paper. We already have ethical principles shared among civil countries and recognized and protected by our institutions. What we have to do is define policies that are compatible with these ethical principles in this specific field, according to what AI systems can or cannot (really) achieve, weighing their social impact, in order to create a positive AI development at the service of our civilization's progress; so the problem is strictly technical.

In reply to Silvestro Marano

Submitted by GEORGIOS LEKKAS, Fri, 15/02/2019 - 22:15

Dear Silvestro,
I agree with your opinion that ethical principles are not absolute, but I would like to point out that neither are they the simple result of current social common sense. They are the core product of a long cultural process, and it usually takes some centuries for them to be fully shaped. They are part of a specific cultural heritage and they play a pedagogical role for the generations to come. So, when we are called to decide about our common European future, it is useful to keep in mind the whole picture of this historical process, which resulted in the creation of the European fundamental rights. Georgios

In reply to GEORGIOS LEKKAS

Submitted by Silvestro Marano, Fri, 15/02/2019 - 23:05

I haven't denied that they are the result of a long cultural process; common sense changes and evolves with social and cultural evolution. The point is that we don't have to create a "new ethics" here, or some sort of Kantian "Critique of Pure AI-Reason", but to define policies that guarantee the ethical principles embraced by European citizens and institutions in the AI field as well, in order to have a beneficial development of this rising sector.

In reply to Silvestro Marano

Submitted by Norbert JASTROCH, Sat, 16/02/2019 - 12:55

Dear friends,

first of all, Silvestro: it is not merely a technical problem; rather, it is a matter of having a public discourse upon ethical principles. And it is not about creating ethical principles - or a new ethics - but about reflecting on ethical principles in the light of new technologies to come.

So why not continue with the cultural process of finding ethical principles, among other things to take up new issues arising with the technology of AI? And why not use these and feed them into the definition of policies?

That's what, to my understanding, the EU initiative to have the AI HLEG and the European AI Alliance discuss those topics is about. And this is definitely worthwhile, I would say.

Regards, Norbert

In reply to Norbert JASTROCH

Submitted by Silvestro Marano, Sat, 16/02/2019 - 13:31

You are free to discuss it, but as said above, a "new ethics" isn't something that can be decided on paper by someone; ethics is a complex social dynamic that involves the whole cultural context of a population and their common sense. The role of institutions is to create policies that guarantee respect for ethical principles, extending them where they are missing, not to create "a new ethics". It's important to understand the difference between these two concepts. You are basically saying what I have said in other words, but it seems you have misunderstood my position.

In reply to Silvestro Marano

Submitted by GEORGIOS LEKKAS, Sat, 16/02/2019 - 13:58

Dear friends,
It all depends on what kind of E.U. we want. If we want the European edifice to move forward, discussions like this one, proposed by Norbert, are a sine qua non. Otherwise it seems to me that we are opting for an E.U. with loose ties, at risk of soon blowing apart. Georgios

In reply to GEORGIOS LEKKAS

Submitted by Silvestro Marano, Sat, 16/02/2019 - 19:49

I love philosophical discussions; however, a problem to be solved needs a proper context. You asked "what ethics", and I have tried to answer that ethics is an extremely relative concept that changes according to specific cultural groups. But we already have an ethics from which to start reasoning about how to formulate policies that guarantee an AI development respectful of the ethical principles shared among the citizens of civil society. So we need to evaluate the impact of this field on society in order to extend these ethical principles to it, and this implies a technical knowledge of AI's potential and limits; otherwise it is like trying to build a palace without foundations and without knowing how the bricks could fit.