Video: All you need to know on Trustworthy AI and the role of law and regulation

The mass rollout of Artificial Intelligence is going to influence the way we work and live. It is going to affect all aspects of our lives, economies and societies.

According to Prof. Christiane Wendehorst: "Very soon, all ecosystems surrounding our societies will to a large extent become AI ecosystems. This means that in our everyday lives we will be more and more confronted with decisions that have been made, also with the help of artificial intelligence.

It is important for all the players in these ecosystems to make sure that they can trust AI, that Europe is the best place for innovation, and that they have legal certainty, knowing that the products they use or put on the market, as well as the products of their competitors, are trustworthy.

That holds true even more for the users, the businesses and the public authorities that are using AI for their operations: they need to rely on the fact that they can use AI safely, without endangering their operations and without harming anyone. Of course, it is in the vital interest of all those affected by AI decisions that AI is not going to harm them and is not going to harm our society at large."

Follow a 20-minute master class on why we need regulation on AI, as presented in the keynote speech by Prof. Christiane Wendehorst during the High-Level Conference on AI: From Ambition to Action.


Tags
trustworthyai airegulation videoresource

Comments

Submitted by Christina Vary… on Sun, 17/10/2021 - 19:25

The basic problem in achieving technological neutrality on AI is that the legislation itself has to be flexible and set different requirements based on the technology and its capabilities. You cannot possibly expect transparency, a key requirement for trustworthy AI, to meet a minimum standard for an ML system in the same way as for a DL system. This unfortunately was not adequately addressed in the current AIA, which took a horizontal, product-safety approach. Not all AI products are the same, not only due to the different risks each use raises, but also due to the different technology they embed.

Submitted by Jola DERVISHAJ on Mon, 08/11/2021 - 09:51

The horizontal product-safety approach adopted by the AIA is aligned with the so-called New Legislative Framework regulatory approach. This is premised upon the distinction between essential requirements and harmonised standards. Essential requirements are set by the legislator (the AIA) and define the results to be attained, but do not specify the technical solutions for achieving them. The precise technical solution may be provided by a standard or by other technical specifications (including solutions developed in accordance with general state-of-the-art engineering or scientific knowledge). This regulatory approach therefore allows manufacturers to choose how to meet the requirements, including in light of the specific risks posed by the AI system at hand.