AI for public health: Applications and ethics

AI will change public health systems, above all medical diagnosis. We will have systems supporting doctors, or even replacing them in locations that are hard to reach. Data will be provided by the patient, or by a smartwatch or any other intelligent device they are wearing.

There are several ethical questions here to address:

  • Is access to the same quality of diagnosis made by an AI system a right, or will it be under private-sector control, with different levels of service?
  • As for all the data collected, it is clear that anonymous analysis could provide more evidence about how to detect and prevent some health issues, but how do we keep that data private?
  • In case of a mistake in the diagnosis, who bears the responsibility?

I would like to start a discussion and hear your opinions about these questions and about other matters of AI applications and ethics related to public health.

Tags
eHealth, health, public health

Comments

Submitted by Norbert JASTROCH on Fri, 29/06/2018 - 14:00

First of all, the question of diagnostic quality in the case of AI support appears to be the same as in the case of traditional medical treatment. That is, there are different "philosophies" in place in different countries.

Secondly, as a person's health data are among the most private of data, their privacy is an essential issue. A concept for establishing data privacy here could start from dedicated data spaces, utilize appropriate individual data-permit apps, and be built upon distributed ledger technology for the management of individual data-use permits.
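As a rough illustration of this idea, a data-use permit registry can be sketched as a hash-chained, append-only log, the core property a distributed ledger provides. This is a minimal single-node sketch, not a real distributed system; all class and field names are hypothetical:

```python
import hashlib
import json
import time


class PermitLedger:
    """Append-only, hash-chained log of individual data-use permits.

    A toy stand-in for the distributed-ledger concept: each entry records
    who granted (or revoked) which use of which data space, and is chained
    to the previous entry so that tampering is detectable.
    """

    def __init__(self):
        self.entries = []

    def _hash(self, record):
        # Deterministic hash of a record (sorted keys for stable JSON).
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def grant(self, subject_id, data_space, purpose, granted=True):
        """Append a permit entry; granted=False records a revocation."""
        record = {
            "subject": subject_id,
            "data_space": data_space,
            "purpose": purpose,
            "granted": granted,
            "timestamp": time.time(),
            "prev": self.entries[-1]["hash"] if self.entries else None,
        }
        record["hash"] = self._hash({k: v for k, v in record.items() if k != "hash"})
        self.entries.append(record)

    def is_permitted(self, subject_id, data_space, purpose):
        """The most recent matching entry decides, so grants can be revoked."""
        for record in reversed(self.entries):
            if (record["subject"], record["data_space"], record["purpose"]) == (
                subject_id, data_space, purpose
            ):
                return record["granted"]
        return False

    def verify_chain(self):
        """Recompute every hash and back-link to detect tampering."""
        prev = None
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev"] != prev or record["hash"] != self._hash(body):
                return False
            prev = record["hash"]
        return True
```

In a real deployment the log would be replicated across independent parties, so no single operator could silently rewrite a patient's consent history; the hash chain is what makes any rewrite visible.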

Thirdly, as to responsibility, I think there is no way other than holding the producer of an AI-based health application responsible, while an obligation for algorithmic accountability should be introduced as a general principle.


In reply to Norbert JASTROCH

Submitted by Andres Abad Ro… on Mon, 02/07/2018 - 21:37

Great post. Thanks! There is also another thing affecting not only health applications, although this type of application is especially sensitive: how different cultures and age groups trust and approach AI systems. How will, for instance, an elderly person not used to technology interact with these systems?


In reply to Andres Abad Ro…

Submitted by Richard Krajčoviech on Wed, 11/07/2018 - 08:08

The critical part in usage is probably understanding what inputs are required and whether there is any misunderstanding regarding the provided data and the interpretation of results. This might remain a human role for a long time (a medical doctor, nurse, or other educated and responsible staff), at least until AI applications prove to be dummy-proof (or until insurance companies are willing to insure the damages :-).

In reply to Norbert JASTROCH

Submitted by Richard Krajčoviech on Wed, 11/07/2018 - 08:17

Fully agree with the responsibility being with the producer (potentially shared with the trained operator).

I am not sure how a distributed ledger can help with the management of individual data-use permits. I would prefer anonymization, which I think is even today's practice: "Organizations that process personal data for research purposes may avoid restrictions on secondary processing and on processing sensitive categories of data (Article 6(4); Recital 50). As long as they implement appropriate safeguards, these organizations also may override a data subject’s right to object to processing and to seek the erasure of personal data (Article 89)." (Quoted from https://iapp.org/news/a/how-gdpr-changes-the-rules-for-research/)
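To make the anonymization point concrete, one common safeguard is pseudonymization: stripping direct identifiers from a record and replacing them with a keyed hash before data reach researchers. This is a minimal sketch under assumed field names (`name`, `national_id`, `phone` are hypothetical identifiers, and the key is an illustrative placeholder):

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller, never shared with researchers.
PSEUDONYM_KEY = b"controller-secret-key"

# Fields treated as direct identifiers in this sketch.
DIRECT_IDENTIFIERS = {"name", "national_id", "phone"}


def pseudonymize(record, key=PSEUDONYM_KEY):
    """Drop direct identifiers, keeping one keyed-hash pseudonym.

    A keyed (HMAC) hash, rather than a plain hash, resists dictionary
    attacks on small identifier spaces; the same person still maps to
    the same pseudonym, so longitudinal research remains possible.
    """
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            if field == "national_id":  # keep exactly one linkable pseudonym
                out["pseudonym"] = hmac.new(
                    key, str(value).encode(), hashlib.sha256
                ).hexdigest()[:16]
            # other direct identifiers are simply dropped
        else:
            out[field] = value  # clinical fields pass through for research
    return out
```

Note that under the GDPR, pseudonymized data still count as personal data; true anonymization requires further measures (aggregation, generalization, k-anonymity-style guarantees) so that individuals cannot be re-identified even from the remaining quasi-identifiers.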

Submitted by Bjoern Juretzki on Tue, 03/07/2018 - 21:10

I think one interesting ethical question to discuss in this context is whether it will even become imperative, and possibly mandatory by law, to use AI in healthcare, since the outcomes will be so much better, even if this means accepting certain undesirable "side effects", e.g. concerning transparency of the decision-making process, data privacy, etc.

In reply to Bjoern Juretzki

Submitted by Richard Krajčoviech on Sat, 14/07/2018 - 11:07

We should distinguish AI as such from specific models. If a specific model, properly trained and tested, proves to be beneficial, its usage might become included in recommended diagnosis procedures. We should also distinguish models in medicine from those in, for example, entertainment. AI in entertainment might be able to learn your voice, habits, etc. at home and adjust its behavior accordingly, but for medical use, AI needs much more thorough training on well-built training sets and much more thorough testing. So the development of AI systems for medicine will remain much more expensive than for general use, as it is today.

We do not have enough data to support general imperative AI usage.

Submitted by Mariana POPOVA on Fri, 20/07/2018 - 15:42

Hi all,

I would like to add a new point to your interesting discussion: the specific case of mental health and AI. What about the use of mental health chatbots? On the one hand, you have financially and physically accessible mental health care; on the other hand, the issue of privacy is quite sensitive, since your most private data go to social media messengers. Also, how reliable is the treatment? Most of the people who need it have already experienced a trauma connected to somebody's missing presence; to what extent could this be replaced by a technical, AI-based solution? And again, who is responsible in case of a wrong diagnosis and a worsening of the situation?

In reply to Mariana POPOVA

Submitted by Andres Abad Ro… on Tue, 24/07/2018 - 17:20

Hi Mariana,

Definitely, mental health is an important use case for AI. I can see a clear application in helping with depression cases.

In reply to Andres Abad Ro…

Submitted by Mariana POPOVA on Fri, 27/07/2018 - 11:27

Hi Andres,

Thanks for your reply and for initiating this interesting discussion. Do you think AI can capture the specificity of individual depression cases? Using AI for cancer diagnosis should be different in some ways from depression diagnosis, since the former case has objective physical determinants, while the latter is more connected with individual specificities, which in general are difficult to treat with universal model solutions.

Submitted by Andres Abad Ro… on Tue, 24/07/2018 - 17:25

Summarizing the discussion so far, we have discussed these elements:

  • Medical data privacy
  • Usage on mental health cases
  • Responsibility

I would now like us to extend this discussion to find use cases in the health environment. Mental health was an excellent starting point, and I suggest that we try to enumerate more, so we can create a proper list of possible applications. What do you propose?


In reply to Andres Abad Ro…

Submitted by Bjoern Juretzki on Mon, 13/08/2018 - 11:46

I would like to add that we shouldn't restrict ourselves to use cases related to diagnostics (be it for cancer, heart diseases or mental health), as important as this use case is.

Hospital logistics, for example, is another area where AI-powered robots can make a big difference. The same is true for surgery and rehabilitation. In fact, the whole healthcare value chain can benefit from AI-powered solutions.

Nurses, for example, typically spend less than two hours of a 12-hour shift on direct patient care (http://www.fortherecordmag.com/archives/0915p10.shtml). The remaining time is used for paperwork, changing bed linen, searching for medications, and delivering patient meals. Using robots for some of these tasks can generate considerable time efficiencies and free medical staff from tedious, repetitive tasks. Instead, they can focus more on the patient.