8 Parameters to Qualify AI Solutions

This thread proposes a framework to qualify and rate AI solutions across relevant parameters. While discussing in another thread, it occurred to me that, beyond debating what AI can or cannot do, we should also think about standards that govern our currently available day-to-day AI solutions and how to make them easily understandable to the public. This would allow effective, responsible and careful adoption of current AI solutions by users, and would require an official certification or rating, much as movies today are rated R, PG, etc., or a food product is rated on its level of spice.

One approach would be to identify some of the most critical parameters to look for in any AI solution, and to rate/label them on a standard scale. Eight such parameters are discussed below:

1. Depth of AI: Many tools and techniques belong to the AI domain: decision trees, random forests, gradient boosting and Monte Carlo methods, to name a few. Using any one of these (say, a simple regression) in a solution can technically qualify it as AI-enabled, but the result may be neither very accurate nor useful for a user. This has led to disillusionment among early AI adopters, while also giving rise to a plethora of solutions and companies calling themselves AI. Since most users or policymakers would find it difficult to judge the rigor of AI simply by looking at the methods used, perhaps we can classify these techniques and assign a rating based on the type/number of tools used, reflecting how rigorous/deep the underlying AI in a given solution is. A rough sketch of such a rating follows.
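To make this concrete, here is a minimal sketch of how a depth rating could work. The tier assignments and technique names are illustrative assumptions, not an established taxonomy:

```python
# Hypothetical depth tiers: the grouping below is an illustrative
# assumption, not an agreed standard.
TECHNIQUE_TIERS = {
    "linear_regression": 1,
    "decision_tree": 1,
    "random_forest": 2,
    "gradient_boosting": 2,
    "monte_carlo": 2,
    "deep_neural_network": 3,
    "reinforcement_learning": 3,
}

def depth_rating(techniques):
    """Rate a solution by the deepest technique it uses (1=shallow, 3=deep)."""
    tiers = [TECHNIQUE_TIERS.get(t, 0) for t in techniques]
    return max(tiers) if tiers else 0

print(depth_rating(["linear_regression"]))                     # 1
print(depth_rating(["random_forest", "deep_neural_network"]))  # 3
```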

2. Explainability of AI: A very important factor in limiting black-box AI solutions. Consider the case of Mount Sinai Hospital, which employed the AI solution Deep Patient to predict cases of schizophrenia, something that is otherwise notoriously difficult for doctors to do. While Deep Patient could indeed do this more accurately, the problem was that doctors had no clue why or how, and had to trust the AI blindly. The idea here is that if an AI solution makes a prediction or decision, it should also be able to explain the rationale behind it. Sooner or later, we will have ethical laws in place to ensure this. For now, we can at least rate the degree of transparency of an AI solution, based on how well users can see/understand the reasoning behind its predictions. A sketch of such a scale follows.
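A minimal sketch of what a transparency scale might look like; the level definitions are assumptions for illustration only:

```python
# Hypothetical 4-level transparency scale; the level wording is an
# assumption, not a recognised standard.
TRANSPARENCY_LEVELS = {
    0: "Black box: no rationale available for predictions",
    1: "Post-hoc: approximate explanations (e.g. feature attributions)",
    2: "Partially interpretable: key decision factors exposed",
    3: "Fully interpretable: reasoning traceable end to end",
}

def label_transparency(level: int) -> str:
    if level not in TRANSPARENCY_LEVELS:
        raise ValueError("level must be 0-3")
    return f"Transparency {level}/3: {TRANSPARENCY_LEVELS[level]}"

# Deep Patient, as described above, would sit at level 0 or 1.
print(label_transparency(1))
```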

3. Type of AI: An AI solution performs one or more of three broad tasks: sense, think, respond. In more detail, different AI solutions are designed for predictive analytics, chatbots, virtual agents, data visualization, speech/facial/social analytics, and so on. A proper dictionary of all these function types and their explanations would help label each AI solution by the kind of functions it performs; a sketch of such a dictionary entry follows.
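Here is a minimal sketch of what an entry in such a dictionary could look like; the categories and descriptions are illustrative assumptions:

```python
# Hypothetical entry format for the proposed dictionary of AI function
# types, mapping each function onto the sense/think/respond tasks above.
FUNCTION_TYPES = {
    "predictive_analytics": {"task": "think",   "description": "Forecasts outcomes from historical data"},
    "chatbot":              {"task": "respond", "description": "Conducts text conversations with users"},
    "speech_analytics":     {"task": "sense",   "description": "Extracts signals from spoken audio"},
    "data_visualization":   {"task": "respond", "description": "Presents patterns in data visually"},
}

def label_solution(functions):
    """Return the sense/think/respond tasks a solution covers."""
    return sorted({FUNCTION_TYPES[f]["task"] for f in functions if f in FUNCTION_TYPES})

print(label_solution(["chatbot", "predictive_analytics"]))  # ['respond', 'think']
```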

4. Support needed for AI: Most organizations adopting AI solutions today are learning this the hard way: deriving value from an AI solution requires data cleanup, appropriate connectors, employee training, habitual change and a culture shift among employees, standard operating and measurement procedures, and a clear use case (or cases). If we want AI to be adopted successfully, clear messaging is needed on the support a solution requires in order to be implemented by organizations and used to maximum utility. One way to make this machine-readable is sketched below.
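A minimal sketch of a "support needed" label as a readiness checklist; the items mirror the paragraph above, and the scoring is an illustrative assumption:

```python
# Hypothetical checklist of prerequisites, taken from the paragraph above.
SUPPORT_CHECKLIST = [
    "data_cleanup",
    "system_connectors",
    "employee_training",
    "culture_shift",
    "operating_procedures",
    "clear_use_case",
]

def readiness(completed):
    """Fraction of prerequisites an adopting organization has in place."""
    done = [item for item in SUPPORT_CHECKLIST if item in completed]
    return len(done) / len(SUPPORT_CHECKLIST)

print(readiness({"data_cleanup", "clear_use_case"}))  # ~0.33
```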

5. Usage conditions for AI: Any AI solution requires a certain minimum amount of data, can be used only for specific use cases, and is usually effective only in certain situations even within those use cases. Such conditions/criteria should accompany any AI solution so that potential users can clearly understand and qualify whether it will be useful to them; a sketch follows.
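A minimal sketch of machine-checkable usage conditions; the field names and thresholds here are hypothetical:

```python
# Hypothetical usage-conditions check; field names and numbers are
# illustrative assumptions.
def qualifies(conditions, deployment):
    """Check a prospective deployment against a solution's stated conditions."""
    checks = {
        "enough_data": deployment["rows"] >= conditions["min_rows"],
        "supported_use_case": deployment["use_case"] in conditions["use_cases"],
    }
    return all(checks.values()), checks

conditions = {"min_rows": 10_000, "use_cases": {"churn_prediction", "demand_forecast"}}
deployment = {"rows": 2_500, "use_case": "churn_prediction"}
ok, detail = qualifies(conditions, deployment)
print(ok, detail)  # False {'enough_data': False, 'supported_use_case': True}
```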

6. Biases with AI: AI algorithms are trained on datasets that may carry inherent biases. All such biases that the developer can think of should be listed with the solution. Much as in the legal and medical fields, these should also be added to a universal, iteratively built list of known and suspected biases, so that future developers can refer to it to test their own solutions for biases seen historically. One simple bias test that could feed such a list is sketched below.
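A minimal sketch of one bias test that could feed the proposed registry: the demographic parity gap, i.e. the difference in positive-outcome rates between two groups. The group names, data and flagging threshold are illustrative assumptions:

```python
# Demographic parity gap between two groups; data below is invented
# purely for illustration.
def demographic_parity_gap(outcomes, groups, group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    def rate(g):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return abs(rate(group_a) - rate(group_b))

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]          # e.g. loan approved = 1
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(outcomes, groups, "a", "b")
print(f"parity gap = {gap:.2f}")              # 0.50; flag if above, say, 0.10
```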

7. Job-loss risk of AI: Since the biggest worry today is the potential of some AI solutions to replace jobs, I wonder whether an estimate could be laid out (i.e. the number of employees who would lose their jobs without being reassigned, and the number of years over which this would happen), represented by a risk rating (say, in green, yellow and red colour codes) to loosely indicate the potential cost to current workers. A sketch of such a rating follows.
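A minimal sketch of the colour-coded rating suggested above; the thresholds are illustrative assumptions, not a standard:

```python
# Hypothetical mapping from an estimated displacement rate to a
# green/yellow/red label; thresholds are assumptions.
def job_loss_rating(jobs_displaced, workforce_size, years):
    """Map estimated annual job displacement to a colour-coded risk label."""
    annual_rate = jobs_displaced / workforce_size / max(years, 1)
    if annual_rate < 0.01:
        return "green"
    if annual_rate < 0.05:
        return "yellow"
    return "red"

print(job_loss_rating(jobs_displaced=200, workforce_size=1000, years=5))  # yellow (4%/year)
```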

8. Kindness of AI: So far mostly overlooked, how kind an AI solution is becomes important when we consider the tricky cases of autonomous cars making decisions involving human lives on the road. A challenge is currently underway calling for datasets that can teach AI to be kind, just the way we teach children. If you are interested in exploring and participating in this challenge, here is the link: https://www.herox.com/EthicsNet

Regards,

Malay


Tags
ranking, European strategy, European ICT regulation, Artificial Intelligence, Jobs, Robotisation, blackbox, xAI, AI alliance

Comments

Submitted by Darko IVANCAN on Mon, 13/08/2018 - 18:46

Only the 1st, and maybe also the 2nd, fit the title; the rest is general AI philosophy.

Submitted by Christian RUSS on Wed, 15/08/2018 - 09:19

Hello all,

I somewhat agree with Darko. The idea and attempt are very good; however, I think more generic and additional (technical) parameters are needed, or perhaps the wording is misleading. What about parameters like algorithms & methods, data sources & types, and application domains? Of course, philosophy and ethics are also important as "soft facts", like parameters 6-8 in the list.


Furthermore, we could reuse existing attempts to classify the domain, e.g. https://www.researchgate.net/figure/Schematic-diagram-of-classification…

Best