What we talk about when we talk about AI

By Virginia Dignum, Associate Professor of Social Artificial Intelligence, Delft University of Technology


Nowadays, Artificial Intelligence (AI) is ubiquitous. We can hardly open a newspaper or tune in to a news show without coming across some story about AI. AI is probably the most talked-about technology of our time. But AI means different things to different people.

I’ve been working in the field of AI, both in industry and in academia, since the late 1980s. I developed my first AI system in 1986: an expert system to determine eligibility for social housing. Since then I’ve witnessed the downs and the ups, the winters and the hypes, in the field. Never before has there been this level of excitement, and fear, by so many, in so many areas, as we have seen in the last couple of years. AI is breaking through in many different application domains, with results that impress even the most knowledgeable experts. Three main factors are driving this development: the increasing availability of large amounts of data, improved algorithms and substantial computational power. However, of these three, only the algorithms can rightfully be seen as a contribution from the AI field.

More recently, awareness that AI has the potential to impact our lives and our world as no other technology has done before is rightfully raising many questions concerning its ethical, legal, societal and economic effects. Governments, enterprises and social organisations alike are coming forward with proposals and declarations of their commitment to an accountable, responsible, transparent approach to AI, in which human values and ethical principles are leading. This is a much-needed development, and one to which I’ve dedicated my research in the last few years. Responsibility in AI begins with a proper AI narrative, one that demystifies the possibilities and the processes of AI technologies and enables everyone to participate in the discussion on the role of AI in society. In this short piece, I will try to explain what AI is, starting by describing what it is not.

AI is not the Algorithm

The “algorithm” is taking on magical proportions, invoked right and left to signify many things, de facto embodying, or treated as a synonym for, the whole of AI. AI has been around for some 80 years, give or take, but algorithms are far older than that[1]. AI uses algorithms, but then so does any other computer program or engineering process. Algorithms are far from magic. In fact, the simplest definition of an algorithm is that of a recipe: a set of precise rules to achieve a certain result. Every time you add two numbers, you are using an algorithm, just as you are when baking an apple pie. And a recipe by itself has never turned into an apple pie. The end result of your pie has more to do with your baking skills and your choice of ingredients. The same applies to AI algorithms: to a large extent, the result depends on the input data and on the ability of those who trained the system. And just as we can choose organic apples for our pie, in AI we can choose data that respects and ensures fairness, privacy, transparency and all the other values we hold dear. This is what Responsible AI is about, and it includes demanding the same from those who develop the systems that affect us.
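
For readers who like to see the point made concrete, here is a sketch of one of the oldest recorded algorithms, Euclid's method for the greatest common divisor, a precise recipe that predates computers by over two millennia. (The code and the example numbers are my own illustration, not from the article.)

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one precise step until done."""
    while b != 0:
        # Replace the pair (a, b) with (b, a mod b); the remainder shrinks
        # every round, so the recipe is guaranteed to terminate.
        a, b = b, a % b
    return a

print(gcd(48, 36))  # -> 12
```

Nothing magical happens here: the quality of the result depends entirely on what you feed in, which is exactly the point about data and ingredients above.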

AI is not Machine Learning

Machine Learning, and in particular Neural Networks, or Deep Learning, is a subset of AI that uses statistical techniques to enable computers to perceive some characteristics of their environment. Current techniques are particularly good at perceiving images and written or spoken text. By analysing many thousands of examples (typically a few million), the system is able to identify commonalities in those examples, which then enable it to interpret data it has never seen before, a process often referred to as prediction. Even though the results of current machine learning algorithms are impressive and go well beyond expectations, this process too is far from magic; it is the result of applying well-known mathematical and statistical methods. Moreover, current algorithms are brittle (changing a single pixel in a picture can produce a completely different classification) and do not generalise easily (training has to start from scratch every time the machine should learn a new task). And perception is just one component of intelligence; more is needed to achieve intelligent machines.
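
A toy example can make both points, learning from examples and brittleness, tangible. The sketch below is my own miniature stand-in for machine learning (a 1-nearest-neighbour classifier, not a neural network, and the data and labels are invented), but it shows the same pattern: the system only compares new inputs against stored examples, and a tiny nudge near a decision boundary flips the answer.

```python
def distance(p, q):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def predict(examples, point):
    """Label an unseen point with the label of its closest training example."""
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# "Training data": (features, label) pairs, invented for illustration.
examples = [((0.0, 0.0), "cat"), ((1.0, 1.0), "dog")]

print(predict(examples, (0.2, 0.1)))    # near the "cat" example -> cat
print(predict(examples, (0.9, 0.8)))    # near the "dog" example -> dog

# Brittleness in miniature: a nudge of 0.02 near the boundary flips the label.
print(predict(examples, (0.49, 0.49)))  # -> cat
print(predict(examples, (0.51, 0.51)))  # -> dog
```

Real deep networks are vastly more capable, but the underlying recipe is the same kind of well-known mathematics, and the same sensitivity near decision boundaries is what the one-pixel observation above refers to.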

A short definition of AI

AI includes Machine Learning and is based on algorithms. However, the ultimate goal of AI is to develop computer systems that are able to simulate human-like intelligence. The term Artificial Intelligence was coined in the 1950s by John McCarthy, who defined it as the endeavour to develop a machine that could reason like a human and was capable of abstract thought, problem-solving and self-improvement. The challenge proved much harder than those original scientists expected, and even the current successes of AI, in the area of Machine Learning, are very far from realising those objectives. The aim of this piece is not to speculate on the feasibility of this endeavour, but to provide a concise, practical definition. More than perception, AI is about reasoning. Besides machine learning, AI includes knowledge representation, planning, dealing with uncertainty, theorem proving, cognitive robotics and human-agent/robot interaction, to mention just a few of its subfields.

Borrowing from the definition given in the seminal textbook on AI[2], I would say that AI is the discipline of developing computer systems that are capable of perceiving their environment and of deliberating how best to act on it in order to achieve their own goals, assuming that the environment contains other agents similar to themselves. As such, AI is about autonomy, to decide how to act; adaptability, to learn from the changes effected in the environment; and interactivity, to be sensitive to the actions and aims of other agents in that environment and to decide when to cooperate or to compete.
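
That perceive-deliberate-act cycle can be sketched in a few lines of code. The class below is my own minimal illustration (a toy thermostat agent in a one-variable "environment", not anything from the textbook): it senses the world, decides how to act towards its own goal, and its actions change the environment that it will perceive next.

```python
class Agent:
    """A minimal perceive-deliberate-act agent (illustrative sketch only)."""

    def __init__(self, goal):
        self.goal = goal      # the agent's own objective
        self.belief = None    # its internal picture of the world

    def perceive(self, environment):
        # Sensing: update internal beliefs from the environment.
        self.belief = environment["temperature"]

    def deliberate(self):
        # Autonomy: decide how best to act to achieve the agent's goal.
        return "heat" if self.belief < self.goal else "idle"

    def act(self, environment, action):
        # Acting changes the environment; future perception picks this up
        # (adaptability). Other agents could share the same environment,
        # which is where cooperation and competition would enter.
        if action == "heat":
            environment["temperature"] += 1

env = {"temperature": 18}
thermostat = Agent(goal=20)
for _ in range(5):
    thermostat.perceive(env)
    thermostat.act(env, thermostat.deliberate())

print(env["temperature"])  # -> 20: the agent stops acting once its goal is met
```

Everything interesting in real AI lies in making each of these three steps rich: richer perception, richer deliberation, and richer interaction with other agents.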

A responsible, ethical approach to AI will ensure transparency about how adaptation is done, responsibility regarding the level of automation at which the system is able to reason, and accountability about the principles that guide its interactions with others, most importantly with people.


[1] The word algorithm derives from al-Ḵwārizmī, ‘the man of Ḵwārizm’ (now Khiva), the name given to the 9th-century mathematician Abū Ja‘far Muhammad ibn Mūsa, author of widely translated works on algebra and arithmetic. (Source: Wikipedia)

[2] Russell and Norvig (2009): Artificial Intelligence: A Modern Approach, 3rd edition. Pearson Education.


Submitted by Vladimiros Pei… on Mon, 07/23/2018 - 10:22

Really interesting, Ms Dignum.

It is true that we are not sure about what AI really is, as it is not here yet. We can talk about AI based on the things we have worked on in order to give birth to it, such as machine learning, algorithms and so on, but in fact a self-sustaining, egoistic type of AI would be nothing we can yet imagine.

People think that AI will be some sort of robot with hands and laser guns, but that would mean it is not so smart: it would be predictable, and we could terminate it at any moment.

Therefore, if we really talk about a higher level of self-sustaining intelligence, we must be very cautious about trying to predict its behavior, as that is practically impossible.

An egoistic type of intelligence would appear and present itself as it sees fit; it is not necessarily a physical unit, nor does it have to express itself through physical reality.

In that sense, real AI is equal to the concept of "God": something we can talk about, but never be sure about.

Submitted by Pawel RZESZUCINSKI on Mon, 07/23/2018 - 11:47

Thank you. Having read your piece, I'm happy to note that the great majority of the topics discussed by the AI Alliance to date seem to treat AI along the lines of your clarification. Good direction!

Submitted by Richard Krajčoviech on Wed, 07/25/2018 - 16:15

Thank you for this view. The whole AI community would benefit if we reduced misunderstandings about what AI is and prevented undue (and dangerous) reactions or overreactions by the public or by regulators/legislators. We should educate them to overcome the marketing hype around Artificial Intelligence created by marketers and (unfortunately) by some scientists. It would be helpful if, for example, scientists balanced their press news and, besides explaining what AI has achieved, also announced how the AI is under human control.

In reply to Bogdan MICU

Submitted by Christian RUSS on Sun, 09/09/2018 - 17:31

Hi Bogdan,

thanks for sharing that AI Knowledge Map. It helps to calibrate ourselves and to have a kind of reference point for new conversations around specific approaches and problems in the AI field. Hence I am still wondering whether the HLEG is planning to "develop" or document a kind of best-of AI Knowledge Map or taxonomy of AI concepts, so that we all have a common base?

So the question goes more to the HLEG, as I read in one of the minutes that a definition of "what is AI" is not the primary goal? IMHO this will not help to reduce the confusion within this community, and even less in the public...

See https://ec.europa.eu/futurium/en/system/files/ged/minutes_of_the_ai_hlg…, section 1.4.8 Q&A: "The issue of defining the scope of AI covered by the AI HLG was raised. The Commission responded that clarity was important but that the AI HLG should not spend all its time on the question “What is AI?”. That AI is the bringing of autonomy into artefacts and that a broad definition needs to be taken. However, if the AI HLG felt the need to work on a definition in parallel to the other tasks then it should do so provided it did not impede progress on the main tasks."



In reply to Christian RUSS

Submitted by Virginia Dignum on Sun, 09/09/2018 - 17:46

Thanks for your replies. While there is always some benefit in attempting to create maps of concepts, the risk is that the classification criteria used force a particular view on the issue that does not cover all the complexities. In the case of the article you mention, this is visible in the attempt to separate symbolic, statistical and sub-symbolic approaches. Most researchers, including myself, working in the areas the author classifies as sub-symbolic will strongly disagree with this classification, and are in fact using symbolic approaches. Another issue concerns the classification of DNNs at the reasoning level; again, many would place DNNs at the perception level. So, in summary, such maps are useful to get some grasp of the field, but they introduce the inevitable bias that comes with attempting to classify everything into two-dimensional spaces.

As to your question of whether the HLEG will come up with a definition of AI: indeed, this is not our task. But it has been decided to provide a working definition, which will enable all members to use a shared understanding of the concepts being discussed. (Again with the cautionary note I indicate above.)