What are the main requirements for AI systems in healthcare?

1. Main barriers to adoption of AI in healthcare

The absence of a specific law or clear legal framework, from the perspective of both professional users (A) and patients (B).

 

When constructing such a framework, it is important to make a distinction between the various sub-areas of healthcare, such as research and development (products such as eHealth apps, wearables, MRI scanners, smart medicine), professional care providers (primary care, drug distribution, complex surgery) and recipients of care (patients), because each sub-area has different needs.

Moreover, each sector/sub-area already has carefully built-up European and worldwide legal and ethical frameworks. Existing good practices should be built upon where possible and applicable.

 

A. Barriers for professional users: unfamiliarity with AI systems, their advantages and their legal requirements/boundaries, and fear of the unknown.

 

- If a system (any system) has long been working according to well-known principles and established methods, it is difficult to change this (path dependency).

- This is even more so in healthcare, since traceability is key within any healthcare system: changing one thing within the system means that everything else has to be reassessed and/or changed as well.

- It is simply unclear to companies and to private and academic research institutes in the medical sector what is and is not allowed in the fields of AI, blockchain, big data, deep-learning algorithms, cognitive computing, virtual reality and robotics, both at European and at national level. This knowledge is important for the commercialisation of their inventions/creations. Two practical examples are permission from Farmatec and obtaining a CE marking.

- These stakeholders already experience considerable uncertainty about legal matters such as liability (professional indemnity, insurance, product liability, statutory and strict liability, punitive damages) and intellectual property (copyrights, patents, trade secrets, database rights, sui generis rights on computer-generated works).

- The same goes for the new Regulation (EU) 2017/745 of 5 April 2017 on medical devices (which replaces Council Directive 93/42/EEC of 14 June 1993 concerning medical devices as of mid-2020) and the Machinery Directive (2006/42/EC).

It is important that a new law (AI Regulation or Directive) does not add to the confusion.

 

B. Barriers for patients: unfamiliarity with, or inability to work with, AI-driven technology, and the consequences for privacy.

 

2. Requirements for sustained use of AI in healthcare

- It should fit into existing QAQC systems (quality assurance and quality control); a minimal sketch of what this could look like follows after this list.

- It should be able to implement and/or adhere to the principles of EudraLex (the body of European Union legislation in the pharmaceutical sector), in particular Good Manufacturing Practice (GMP) and Good Distribution Practice (GDP).

- It should be easy to adjust/correct ‘bugs’ in the system.

- The privacy of both patients/consumers and users/businesses is a top priority (GDPR compliance).

- It should be practical and easy to use.

- Overly stringent and complex legal requirements should be avoided, since they hinder innovation (incentive & reward).

- Enforcement should be carried out by a government agency/public body such as Farmatec, with a multidisciplinary approach: healthcare experts, IT experts, ethicists and privacy experts, coordinated by this central body. This is preferable to enforcement by notified bodies, which have a commercial interest when they issue a CE marking. Compare the way the FDA (Food and Drug Administration) operates in the United States.
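As a rough illustration of the QAQC and traceability points above, here is a minimal sketch (in Python) of wrapping each AI prediction in an audit record; the model, field names and log format are hypothetical assumptions, not a prescribed standard.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("qaqc_audit")

class ToyModel:
    """Stand-in for a real, validated AI model (illustrative only)."""
    def predict(self, features):
        return "flag for review" if features["blood_pressure"] > 140 else "ok"

def predict_with_audit(model, model_version, features):
    """Run one AI prediction and emit a traceable audit record."""
    prediction = model.predict(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which exact model produced the output
        "inputs": features,               # what it saw (pseudonymised upstream)
        "output": prediction,             # what it recommended
    }
    log.info(json.dumps(record))          # append-only log for later QA review
    return prediction

predict_with_audit(ToyModel(), "v1.2.0", {"blood_pressure": 150})
```

An append-only record like this makes it possible to reassess past outputs when one component of the system changes, which is exactly the traceability concern raised in section 1.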

 

3. Steps to overcome barriers

- Inform and teach.

- Start actual pilots with AI-driven technology.

- Make sure that AI is a help within the sector. Let the AI process run in parallel with the old process, where relevant (a minimal 'shadow mode' sketch follows after this list).

- Include AI, robotics and DLT/blockchain in the curricula of medical schools, law schools, business schools, and primary and secondary education.

- Perform an Artificial Intelligence Impact Assessment.

- Make sure privacy and other fundamental rights are respected.
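A minimal sketch of the parallel run suggested above ('shadow mode'), assuming hypothetical `legacy_triage` and `ai_triage` functions: the established process keeps making every decision, while the AI output is only recorded for offline comparison.

```python
def legacy_triage(case):
    # Placeholder for the established process that actually decides.
    return "urgent" if case["heart_rate"] > 120 else "routine"

def ai_triage(case):
    # Placeholder for the AI model running in parallel.
    return "urgent" if case["heart_rate"] > 110 else "routine"

disagreements = []

def handle_case(case):
    legacy_result = legacy_triage(case)      # this decision is actually used
    try:
        ai_result = ai_triage(case)          # shadow run, no clinical effect
        if ai_result != legacy_result:
            disagreements.append((case["id"], legacy_result, ai_result))
    except Exception as exc:                 # an AI failure must never block care
        print(f"AI shadow run failed for case {case['id']}: {exc}")
    return legacy_result

print(handle_case({"id": 1, "heart_rate": 115}))  # 'routine'; disagreement logged
```

Only after clinicians have reviewed the disagreement log would the AI path be promoted from shadow mode to actual decision support.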

 

Mauritz Kop & Suzan Slijpen

Tags: eHealth, Artificial Intelligence, Legal, Healthcare, robotics, blockchain, Ethics

Comments

By Tomasz Smolarczyk on Sat, 15/12/2018 - 12:48

The main barriers to adoption of AI in healthcare are the difficulty of building a clear business case and the lack of success stories that would encourage companies to invest in such products.

Professional users

Corporations try to test new ideas and AI products, but with low risk and small investments. They develop them in a rush, with teams that are too small, and sometimes in the wrong place in the patient/customer journey. It is therefore hard to prove business value after pilots with startups.

Professional users very often consider AI products a threat to their work (and startups sometimes advertise them in exactly that way!), so they reject those products.

Educating users and explaining how AI could improve their work, speed it up and increase its quality is one way we can help them.

If the AI product is supposed to provide guidance or steer the patient journey (e.g. specifying the triage level of a patient), the clients/professional users don't want to take the clinical risk on their side. AI products need to be tested and to prove that they can work even more precisely than human experts do.

What is more, AI products are constantly evolving, so it is important to establish a process for quality checking. Clients can test the product before the pilot to validate its clinical quality, but since products constantly change, AI providers need to figure out how to maintain that quality and communicate it to clients; one possible shape for such a process is sketched below.
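One possible shape for that quality process is a regression gate: each new model version must match or beat the performance validated with the client before it ships. A minimal sketch, where the model, test set and threshold are illustrative assumptions:

```python
VALIDATED_ACCURACY = 0.95  # performance the client accepted during the pilot

def passes_quality_gate(model, clinical_test_set):
    """Release a candidate only if it reaches the validated accuracy."""
    correct = sum(
        1 for features, expected in clinical_test_set
        if model.predict(features) == expected
    )
    return correct / len(clinical_test_set) >= VALIDATED_ACCURACY

class CandidateModel:
    """Stand-in for a release candidate (illustrative only)."""
    def predict(self, features):
        return features >= 0.5  # toy decision rule

test_set = [(0.9, True), (0.1, False), (0.7, True), (0.2, False)]
print(passes_quality_gate(CandidateModel(), test_set))  # True: 4/4 correct
```

A failing gate blocks the update, and the gate result itself gives the provider something concrete to communicate to clients about maintained quality.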

Patients

Patients need to feel safe while using AI products, which is why design and user experience are extremely important. A question asked in a different way can change how a patient answers, which could have huge clinical consequences.

 

Requirements for sustained use of AI in healthcare

  • QAQC systems (quality assurance and quality control) and acceptance tests for AI product results are a must-have for each product.
  • Depending on the product, careful selection of the amount of information needed: not every product will need all the sensitive patient data. Use the smallest amount of data that is sufficient for the product/use case (a minimal sketch follows below).
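A minimal sketch of that data-minimisation point, assuming a hypothetical patient record and per-use-case field whitelist:

```python
# Strip a patient record down to the fields a specific use case actually
# needs before it reaches the AI product. Field names and the whitelist
# below are illustrative assumptions.

FIELDS_PER_USE_CASE = {
    "triage": {"age", "symptoms", "vital_signs"},          # no name, no address
    "medication_check": {"age", "weight", "medications"},
}

def minimise(record, use_case):
    allowed = FIELDS_PER_USE_CASE[use_case]
    return {key: value for key, value in record.items() if key in allowed}

full_record = {"name": "J. Doe", "age": 54, "symptoms": ["fever"],
               "vital_signs": {"bp": 150}, "address": "..."}
print(minimise(full_record, "triage"))  # name and address never leave the source
```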

Steps to overcome barriers

  • AI pilot projects should have higher priority at corporations, with more focus on business impact. They should be placed in the right way into the business and customer/patient journeys; otherwise the business case will be poor and pilots will not result in full-scale roll-outs.

In reply to the comment by Tomasz Smolarczyk

By Christian RUSS on Sun, 16/12/2018 - 10:29

Hello all,

I agree with the initial post and with Tomasz's comment. One key element addressed several times is the privacy and protection of patient data. This is really crucial, and maybe EU projects like http://www.myhealthmydata.eu/, with blockchain-based data privacy and security, can build the foundation?

Further, I think we should use already successful applications and pilots in AI much more as positive examples and drivers for the acceptance of this technology in health. Besides all the risks and challenges I see, there is a huge gain and benefit when it is used carefully and thoughtfully for the patient and in science.

Best