“Artificial Intelligence: a Rupture Technology for Innovation” by James L. Crowley in the AI4EU Café

On Nov 13th, 2019 at 3 p.m., James L. Crowley (Université Grenoble Alpes and Inria Grenoble Rhône-Alpes Research Center) will give his presentation, titled “Artificial Intelligence: a Rupture Technology for Innovation”.

“The Turing test defines intelligence as human-level performance at interaction. After more than 50 years of research, Machine Learning has provided an enabling technology for constructing intelligent systems with abilities at or beyond human level for interaction with people, with systems, and with the world. This technology creates a fundamental rupture in the way we build systems, and in the kind of systems that can be built.


In this talk I will provide a review of recent progress in Machine Learning, and examine how these technologies change the kind of systems that we can build. Starting with a summary of the multi-layer perceptron and back propagation, I will describe how massive computing power combined with planetary scale data and advances in optimization theory have created the rupture technology known as deep learning. I will discuss common architectures and popular programming tools for building convolutional and recurrent neural networks, and review recent advances such as Generative Adversarial Networks and Deep Reinforcement Learning. I will examine how these technologies can be used to build realistic systems for vision, robotics, natural language understanding and conversation. I conclude with a discussion of open problems concerning explainable, verifiable, and trustworthy artificial systems.”
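For readers unfamiliar with the terms, the multi-layer perceptron and back propagation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not material from the talk: the network size (2-4-1), the sigmoid activations, the learning rate, and the XOR task are all arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: the classic task a single-layer perceptron cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-4-1 multi-layer perceptron
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # network output
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: apply the chain rule through the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates (this is "back propagation" in practice)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The deep learning systems discussed in the talk follow the same recipe, scaled up: more layers, different layer types (convolutional, recurrent), far more data, and far more compute.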


This is the link for registration: https://attendee.gotowebinar.com/register/6538776261591328268


Submitted by Keith Tayler on Wed, 13/11/2019 - 18:22

The Turing test is an extremely easy test to pass, and the fact that it has not been passed shows how far we are from having systems that can interact at anything approaching a human level.

Turing's paper 'Computing Machinery and Intelligence' was probably deliberately confused and humorous, but he did make some attempt to explore the problem of 'machine thinking'. In parts of his paper he wants to replace 'can the machine think' with 'it has passed the imitation game'. The notion that we can produce a machine in the conceivable future that can 'understand' natural language is nonsense. The performance of machines will no doubt improve, but an operational test like Turing's, as he realised, does not mean the machine understands language, thinks, sees, knows, believes, etc.