Rules for dealing with artificial intelligence in companies

Before artificial intelligence is used in a company, rules for its internal and external use should be established. Companies often lack AI experts who can develop an AI strategy.

Here are my best-practice tips from my experience in large and small companies, along with examples of rules for dealing with AI that I have established together with boards of directors and managing directors. Employee co-determination representatives were also involved at an early stage.

- AI must not harm any group or individual.

- The use of an AI system must not lead to discrimination against any person.

- The AI system must be correctable by experts.

- The underlying data should be adaptable and revisable in every situation.

- An AI system requires clean data quality and continuous monitoring by AI experts.

- The data should be checked and validated by the AI experts before each import into the AI system (see the sketch below this list).

- To ensure high data quality, a clean and transparent approach to training the AI system (supervised learning) should be chosen.

- The AI system is not used to discriminate against or exclude individual customers or groups of people.

- The AI system is used to support decisions and is not an autonomous system.

- The AI experts should put their own stamp (branding) on the data analysed with the AI.

- The AI system is used for data-based decision-making.

 

This creates transparency and acceptance of AI in companies.
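As a minimal sketch of the data-validation rule above, here is a hypothetical Python example (all field names and checks are assumptions for illustration, not part of any specific company policy): it checks incoming records for completeness and flags anything questionable for review by an AI expert before it is imported into the AI system.

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple

# Hypothetical set of fields every record must contain before import.
REQUIRED_FIELDS = ("id", "feature_a", "feature_b", "label")


@dataclass
class ValidationReport:
    accepted: List[dict]                 # records cleared for import
    rejected: List[Tuple[dict, str]]     # (record, reason) pairs for expert review


def validate_before_import(records: Iterable[dict]) -> ValidationReport:
    """Check records for completeness and obvious quality problems.

    Anything that fails a check is not imported automatically; it is
    returned to an AI expert for manual review, in line with the rule
    that data is checked and validated before each import.
    """
    accepted, rejected = [], []
    for record in records:
        missing = [f for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
        if missing:
            rejected.append((record, f"missing fields: {missing}"))
        elif not isinstance(record["label"], (int, float)):
            rejected.append((record, "label is not numeric"))
        else:
            accepted.append(record)
    return ValidationReport(accepted=accepted, rejected=rejected)


if __name__ == "__main__":
    sample = [
        {"id": 1, "feature_a": 0.4, "feature_b": 1.2, "label": 0},
        {"id": 2, "feature_a": None, "feature_b": 0.9, "label": 1},
    ]
    report = validate_before_import(sample)
    print(f"{len(report.accepted)} record(s) ready for import, "
          f"{len(report.rejected)} flagged for expert review")
```

The design point is that nothing failing a check is imported automatically: rejected records are always shown to a human expert, which also reflects the rule that the AI system supports decisions rather than acting autonomously.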

Tags
AI ML AIINHR Artificial Intelligence Künstlicheintelligenz Isbert KIIMHR machinelearning

Comments

Posted by Joerg Schecker, Thu, 14/05/2020 - 10:17

@Marcel Isbert: excellent policy for companies to follow up on - I would even add: regulatory requirements must be considered for AI systems and services.

Posted by Dietmar Koering, Tue, 22/09/2020 - 11:48

Isaac Asimov proposed a well-known moral regulation in his “Three Laws of Robotics” (Asimov, 1942), which are as follows:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law;
  • A robot must protect its existence as long as such protection does not conflict with the First or Second Laws.

Although these moral laws can be applied to AI, they are abstract. They would also not work for certain military weapons, such as drones and rockets, whose purposes are often to spy on, injure, or kill human beings; this constitutes another serious ethical debate. In a broader sense, even under the First Law, robots in production would only be allowed to function in an environmentally sustainable manner: although the specific work a robot does could be beneficial to human beings, the robot would ultimately be harming human beings if it produced pollution or caused environmental degradation. It must be admitted that algorithms, and thus AI systems, are neutral with respect to such issues; it is entirely up to the programmer or funding institution to determine which ethical standards are included in an AI system's algorithms. Asimov seems quite cognizant of this issue, as delineated in his three laws.

Another problem then arises concerning moralisation, and which is also in line with von Foerster’s view. Monika Broecker writes about a conversation that she had with von Foerster, in which she stated that ethics needs to be implicit. This is again only possible with a consciousness and a capacity for self-criticism, which are outside of the scope of AI. Ethical standards must currently be programmed by humans until such a time that sufficiently strong AI develops its own consciousness, and thus an ability of self-criticism. This underlines the call to begin now to develop AI safety and associated legislation. Indeed, only legally-binding standards for AI will lead to trust in new systems.