Does strong AI actually work?

The advent of strong AI, or artificial general intelligence (AGI) - a machine able to understand or learn any intellectual task that a human being can - has been predicted several times in the history of computing.

In 1955, McCarthy, Minsky, Rochester and Shannon coined the term Artificial Intelligence in their proposal for the 1956 Dartmouth Conference:
"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

Since then we have seen several updates of this prediction, including the concept of an 'intelligence explosion' by I. J. Good in 1965, and its elaboration into a 'singularity' by Kurzweil in 2005.
A new wave of enthusiasm appeared in the 1980s, after neural networks became better understood.

The current incarnation of AI enthusiasm is accompanied by an allocation of private and public budgets - including in the EU - as well as media attention on a scale not seen before.
Many publications address the ethical implications of strong AI and its effects on human society - including here at the European AI Alliance.

I first lectured on the field in 1988 and have conducted and published new research in it since 2018. From this vantage point, I am - still - surprised by the current attention given to AI.

Does strong AI actually work?
Is there any evidence or sign of it actually occurring any time soon?
Do we have evidence from 'less intelligent' systems that demonstrates progress toward strong AI?

Comments

Submitted by Matthieu Vergne on Thu, 03/12/2020 - 23:08

As far as I know, AGI and Strong AI are different: the former is about getting rid of the limitation of performing only a single, highly specific task at a time, while the latter is about the consciousness problem.

AGI is a technical problem, where we have to find a way to generalize the learning. Strong AI is more a matter of philosophy for now, and will become a technical goal once we agree on how to translate consciousness into technical terms.

For AGI, there is existing work, including proposed measures for evaluating an entity's degree of intelligence. For instance, you could define intelligence as the capacity to maximize your future freedom of action:

https://www.ted.com/talks/alex_wissner_gross_a_new_equation_for_intelli…
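For the record, the idea behind that talk (Wissner-Gross & Freer, 2013) can be condensed into one equation - this is my paraphrase of their formula, not part of the talk itself:

```latex
% Causal entropic force: the system is pushed in the direction that
% maximizes the entropy S_c of its accessible future paths up to horizon tau.
F(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \Big|_{\mathbf{X}_0}
```

Here $T_c$ plays the role of a temperature setting the strength of the force; intuitively, "intelligent" behaviour is modelled as keeping as many future options open as possible.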

So there are ways to see where we are going, but since almost nobody really works on AGI, there is no incentive to strengthen this work and establish reliable frameworks. I have thought for a while that this is a pity, and that the hype around narrow intelligence is almost depressing.

So regarding strong AI, I don't know where we are and I don't think that will change soon. For AGI, there is potential, but nobody cares about it for now. And please don't conflate the Blue Brain project with that: achieving artificial intelligence and simulating human thinking, which can be extremely stupid too, are two different things.

In reply to Matthieu Vergne

Submitted by Bernd Brincken on Fri, 03/13/2020 - 15:40

Matthieu, IMHO the definition of intelligence is a discussion of its own, and not one that I intended to open here.

My question is about machines or computational systems with effects like:
"AI is transforming societies and economies. It promises to generate productivity gains, improve well-being and help address global challenges, such as climate change, resource scarcity and health crises. Yet, as AI applications are adopted around the world, their use can raise questions and challenges related to human values, fairness, human determination, privacy, safety and accountability, among others."
( https://editorialia.com/2020/03/07/artificial-intelligence-in-society/ )

Or, let's hear German ministers:
"Artificial intelligence is becoming a key technology for the whole economy. ... We want to become a global leader on the development and use of AI technologies. For this purpose, we will make available €3 billion in the coming years. ... We encourage everyone to participate in the ethical, legal and cultural shaping of the use of AI. ... Account was taken of a total of 109 comments and the results of in-depth discussions with experts during six specialist forums."
( https://www.bmwi.de/Redaktion/EN/Pressemitteilungen/2018/20181116-federal-government-adopts-artificial-intelligence-strategy.html )

So call it AI, or call it AGI, or call it strong AI - something with great impact to the human society seems to be coming up.
But is there anything actually in existence that could come even close to producing these kinds of effects?

 

In reply to Bernd Brincken

Submitted by Matthieu Vergne on Sat, 03/14/2020 - 11:45

If you put aside all the (quite dreamy) comments about Strong AI, like the singularity and related ideas, and focus on the concrete ones, like impacts on society and the economy, then from what I can see you could always replace the term "AI" with "tools" and it would work exactly the same. Then the answer is yes, we do have ways to see where things are going, because it is about evaluating the impact of tools on society, be it economic, financial, emotional, etc. We do it the same way we evaluate the impact of any tool: surveys, comparisons with and without the tool, measures of benefit, and so on. AI techniques, like any other tool, have their own measures for the specific properties that concern them. If we speak about machine learning techniques, we can speak about classical measures, like precision and recall to quantify true/false positives/negatives, often combined into more complex ones, like F-measures, ROC curves, and so on.
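To make the last point concrete, here is a minimal sketch (with hypothetical labels, not real data) of the classical measures mentioned above - precision, recall, and their combination into the F1 score:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical ground truth vs. classifier output:
p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

ROC curves generalize this picture by sweeping the classifier's decision threshold and plotting true-positive against false-positive rates.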

The whole point of the comments you cite is that they do not speak about specific impacts, so you can interpret them the way you want. That is what we call marketing. These comments are generic ones that, in essence, say that AI techniques are expected to be disruptive technologies, as the internet or the steam engine were in their time. A simple way to assert that a technology is disruptive is to look at (i) how well received it is, which should lead to a great spread of its use, and (ii) how much it changes the practice of those who use it compared to those who don't. Both points have already been observed for AI: (i) everyone has heard about AI, many surveys have shown a global interest (including global fears, which usually come together), just look at the hype within high-tech companies and how countries push for these technologies; and (ii) we have already confirmed how AI techniques can change practice, just look at how much effort companies are putting into integrating them and how they are impacting various domains, like gaming (chess, go, etc.), medicine (medical evaluations, surgery tools, etc.), security (face recognition, behaviour categorization, etc.), movies (multi-agent simulation, face & voice alteration, etc.), law (GDPR, ethical AI, etc.), and so on.

When you can assert both a broad interest and a significant impact on practices, then you can consider with high confidence that it is a disruptive technology. The point is that this is not specific to AI, and the comments you cited could have been made about anything else. Just replace "AI" with "internet" or "steam engine", imagine them being stated by people in the past, and you should get the point. These are marketing comments which aim to generate global interest and involvement of the markets in these technologies, because these people are betting on AI.

In reply to Matthieu Vergne

Submitted by Bernd Brincken on Sat, 03/14/2020 - 12:46

Ok, then let's talk about "Tools that can be classified as AI" first.

Yes, neural network pattern recognition or classification does actually work, and it is a viable addition for any programmer, researcher or technician.
Previously, one had to analyze the functional relations between the properties of one's subject and cast them into program code. I worked in the field of optical pattern recognition in holographic applications in the 1990s, so I know a bit about what I'm talking about.
Now one can simply feed sample data into a (more or less standard) NN and adjust parameters, perhaps just by trial and error, until the NN has learned what one is interested in - provided there is any signal in the data and not just noise.
This produces results impressively faster than classical programming, and may even recognize patterns that could not previously be functionally analyzed (with reasonable effort).
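The workflow described above - feed labelled samples into a network and adjust parameters until it fits - can be sketched with a toy single-neuron "network" in plain Python. This is a deliberately minimal stand-in for a real NN library, learning the AND pattern as a stand-in for real sample data:

```python
import math
import random

random.seed(0)

# Labelled samples: the logical AND pattern as toy training data.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # weights
b = 0.0                                             # bias
lr = 0.5                                            # learning rate

def predict(x):
    """A single sigmoid neuron - the smallest possible 'network'."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# "Adjust parameters until the NN has learned": plain gradient descent.
for _ in range(2000):
    for x, target in samples:
        out = predict(x)
        grad = (out - target) * out * (1.0 - out)  # sigmoid derivative
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

learned = [round(predict(x)) for x, _ in samples]
```

A real application replaces the four hand-written samples with measured data and the single neuron with a multi-layer network, but the loop - predict, compare with the label, nudge the parameters - is the same.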

Regarding expert systems, I am actually saddened that the technology did not flourish more during the internet revolution - something that was realistic to predict in the 1980s.
Many applications, especially technical ones, could benefit from online interaction combined with knowledge bases holding rules too complex for (most) human minds to consider.
But in practice, if you look at the huge number and technical differentiation of online communities nowadays, there seem to be too many human experts around who are willing to share their knowledge (often for free), preventing an expert-system provider from building a viable business model in the field.
Still, in areas like medicine or genetics, expert systems are being used effectively.
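For readers unfamiliar with expert systems: their core is typically a rule engine that chains if-then rules over a fact base. A minimal forward-chaining sketch, with hypothetical medical rules invented for illustration, looks like this:

```python
# Hypothetical rules: (set of required facts, derived conclusion).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_breath"}, "refer_to_doctor"),
]

def forward_chain(initial_facts):
    """Apply rules repeatedly until no new fact can be derived."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "short_breath"})
```

Real systems like those used in medicine add uncertainty handling and explanation facilities, but the chaining principle is the same.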

In reply to Bernd Brincken

Submitted by Matthieu Vergne on Sat, 03/14/2020 - 17:47

Expert systems are not completely dead, though. They are complementary to machine learning, which is efficient at retrieving relations but lacks the information required to map them to human concepts. I remember there is some work that tries to fill the gap, something about hybrid systems, but I don't remember it well.

I don't believe that one will surpass the other; they seem too complementary to me. In particular, I think the explainability of results remains a mandatory requirement for critical cases, but we cannot afford to simply reduce the complexity, and thus the performance, of the system in order to "keep it simple enough". So at some point we will have to dig into the mud of that complexity and make it explainable anyway. At that point, I think machine learning will need to reuse expert-system techniques to fill the gap properly.

But considering your first post, it seems the discussion has diverged from your initial question. Maybe you could rephrase it. It seems to me that, if we consider current AI techniques rather than Strong AI or AGI, we already know that they work well. There are plenty of measures already used to evaluate the performance of these systems, since such evaluation is generally mandatory to justify the publication of new techniques in top conferences.

In reply to Bernd Brincken

Submitted by Bernd Brincken on Sat, 03/14/2020 - 22:57

Matthieu, the discussion was not meant to diverge; I just wanted to state the indisputable achievements of AI techniques - in order to differentiate them from the question of this thread.

I propose to clarify the difference between
a) AI tools and techniques and
b) what is claimed for strong AI to occur some time soon.
A tool needs someone - an intelligent, responsible, in most cases human, being - with an intention to use it for a certain purpose.
Strong AI, by contrast, points to machines that have risen to the role of the tool-user.

IMHO only this expectation would justify statements like
"AI is transforming societies and economies", or ask for
"everyone to participate in the ethical, legal and cultural shaping of the use of AI".

Or would you say that the successes mentioned - from a programmer's perspective - already justify statements like these? In that case, how would neural networks "transform societies"? How would their use raise questions of ethics or culture?

In reply to Bernd Brincken

Submitted by Matthieu Vergne on Sun, 03/15/2020 - 18:56

Well, tools can already be considered users: when a smartphone automatically retrieves SMS and app updates from the network, the smartphone uses the network. Yet this is the purpose of automation; no intelligence is required here besides that of the engineers who automated it correctly.

Maybe you want to speak about the initiative of the AI agent. In the smartphone example, someone (developer, manufacturer, reseller, owner) configured the smartphone to use the network to retrieve the information for this specific purpose. You may consider, then, that a Strong AI is an agent that does something without being explicitly asked to.

But even this interpretation is hard to hold, since we often see this kind of behaviour. In technical terms, we call it a bug: an action that is not expected and, thus, should not occur if we implement the agent correctly. In other words, to consider an agent to have some initiative, we should introduce a difference between an unexpected behaviour that is just a mistake (the bug) and an unexpected behaviour done on purpose. But this implies that the agent itself establishes its own purpose. And to do so, it should have some kind of consciousness, which is what leads to the notion of Strong AI: the agent should have some kind of self-determined decision-making that goes beyond what we put into it. Currently, I don't know of any Strong (conscious) AI.

When people state that "AI is transforming societies and economies", this is an indirect consequence. It is not that AI is willing to do so, but that AI is such a disruptive technology that it comes with these disruptive effects. One could also say that "weapons kill many people in the USA each year"; everyone would understand that weapons don't kill by themselves, but that they give their users the capacity to kill easily, thus facilitating this effect. It is the same for AI: AI does not shape anything by itself, but it gives its users the capacity to significantly change their practices, thus shaping society and the economy in an indirect way. Once again, these are marketing statements: they use metaphors and wordplay to make powerful messages. One should read them with the interpretation they require, not literally.

In reply to Matthieu Vergne

Submitted by Bernd Brincken on Mon, 03/16/2020 - 11:52

Simple question - do you think this is true?:
"AI is transforming societies and economies"
If yes, can you give an example for this effect?

Secondly, "AI is such a disruptive technology" - can you give an example?
Software or online services can have disruptive effects in a specific branch - but do you have one in mind where AI (or AI tools) played a crucial role?

 

In reply to Bernd Brincken

Submitted by Matthieu Vergne on Mon, 03/16/2020 - 20:31

I already cited a bunch of examples in my previous post, although I did not give concrete cases. So let's fix that:

  • gaming (chess, go, etc.): winning against the best players, like Kasparov and Lee Sedol, has proven that humans can easily be beaten by a machine with just raw computing power or a few days of automated training, thus pushing humans to rethink what it means to be the best player, and especially why they should strive for first place if they already know they can always be beaten easily by a machine.
  • medicine (medical evaluations, surgery tools, etc.): a domain I am not familiar with, but we are accumulating examples of analyses performed by machines that match, and sometimes surpass, expert analyses, thus providing a basis for massive, personalized medicine, since the shortage of doctors can increasingly be compensated by machines.
  • security (face recognition, behaviour categorization, etc.): look for example at mass surveillance, which is now possible with the help of advanced face recognition, the Chinese government being a particularly representative example. We have never been closer to Orwell's "1984".
  • movies (multi-agent simulation, face & voice alteration, etc.): although not much heard of outside the domain, movie production is moving further and further forward with 3D techniques, one example being the use of multi-agent systems to simulate massive crowds, like war scenes. A more recent and disturbing example is the production of deep fakes: videos altered to change the speech of a person, with tremendous realism. Examples can be found on the net where we see deep fakes of Obama or Trump, making them say things they (probably) never said. This raises the question of using videos as evidence in court: can we still use them? Or should we consider the risk of judging based on a deep fake too great?
  • laws (GDPR, ethical AI, etc.): with AI techniques, especially machine learning, we are now able to process huge amounts of data. The issue is that we have no details about how the results are produced, yet they perform so well that we want to use them everywhere... including in domains that can impact people's lives, such as the courts (e.g. China) or employment (e.g. the USA). When you don't know on what basis a decision is taken, how can you trust it? If someone rejects you because of the result of an algorithm nobody really understands, should we side with the decision maker, who should be free to decide based on their own criteria, or with the victim, on the grounds that a decision affecting someone should be motivated?

Now I hope you won't ask me also to provide the sources: take the keywords and search the Web, and you should find them easily. They made the news at some point.

The whole point is that we are only at the beginning, because we are still improving these techniques and finding new use cases. So what we can do now is just a glimpse of what we will be able to do in the end, which leads to further dreams and fears... which can also impact society and the economy: markets evolve depending on expectations and trust, too.

In reply to Matthieu Vergne

Submitted by Bernd Brincken on Tue, 03/17/2020 - 13:18

Agreed that techniques counted as part of the AI field have contributed to improvements in these areas.

Do you consider these kinds of effects as "disruptive" in the respective industry branches?
Do you consider them to be "transforming societies and economies"?

In reply to Bernd Brincken

Submitted by Matthieu Vergne on Tue, 03/17/2020 - 21:47

My personal opinion is that they have broken common practices and ways of thinking by bringing new ones. That is what I consider disruptive: it is not a mere iterative improvement over what exists, it breaks what exists to do something new.

Now, my own opinion has no value here. The people you cited are the ones who made those statements. If you want an opinion, you should ask for theirs, not mine. And if you want to know whether it is a shared opinion, then ask for others' opinions too.

In reply to Matthieu Vergne

Submitted by Bernd Brincken on Tue, 03/17/2020 - 22:32

Of course, I am asking for everyone's opinion, including that of the 52 honourable high-level experts of the AI HLEG steering group.

By the way, the budget for their work in the AI Alliance was approved by politicians like the ones cited above.