What can be done so consumers trust AI? All conformity checks must cover the evolving nature of AI systems. Access to the algorithms, code, and data sets must be ensured so that risks can be understood and assessed.
What can be done so consumers trust AI? Consider how uncertainties and assumptions affect risk assessment. Use standards to support risk assessment, and adopt mitigating measures.
What can be done so consumers trust AI? A label must take into account the information asymmetry associated with AI. This differs from labels for non-AI products (e.g. the Ecolabel), where the content of the product is static.
Read our position on the EC White Paper: https://bit.ly/2Na6Lyc
Comments
This reflects, and can inform, the WEF's article:
https://www.weforum.org/agenda/2020/08/consumer-trust-ai-potential/
TRUST is the fundamental issue for any advance in the use of long-lived, large-scale, real-world technologies. I presented some arguments on this in 20 slides at the O'Reilly Conference in Berlin on 6 November 2019:
https://www.researchgate.net/publication/338980342_Can_AI_Systems_be_Tr…