AI for public health: Applications and ethics

AI will change public health systems, above all medical diagnosis. We will have systems supporting doctors, or even replacing them in locations that are hard to reach. Data will be provided by the patient, or by the smartwatch or any other intelligent device they are wearing.

There are several ethical questions here to address:

  • Is access to the same quality of diagnosis from an AI system a right, or will it be under private-sector control, with different levels of service?
  • Regarding all the data collected, it is clear that anonymous analysis could provide more evidence about how to detect and prevent some health issues, but how do we keep that data private?
  • In case of a mistake in the diagnosis, who bears the responsibility?

I would like to start a discussion and hear your opinions on these questions and on other matters of AI applications and ethics related to public health.

Comments

Posted by Norbert JASTROCH on Fri, 29/06/2018 - 14:00

First of all, the question of diagnostic quality in the case of AI support appears to be the same as in the case of traditional medical treatment. That is, there are different "philosophies" in place in different countries. 

Secondly, as a person's health data are among the most private data, their privacy is an essential issue. A concept for establishing data privacy here could start from dedicated data spaces, utilize appropriate individual data-permit apps, and be built upon distributed ledger technology for the management of individual data-use permits.
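To make the idea concrete, here is a minimal, purely illustrative sketch of such a permit record store: an append-only, hash-chained log of individual data-use grants and revocations, so that the permit history is tamper-evident. All class and field names are hypothetical; a real distributed ledger would add replication and consensus on top of this.

```python
import hashlib
import json
import time

class PermitLedger:
    """Append-only, hash-chained log of individual data-use permits (sketch)."""

    def __init__(self):
        self.entries = []  # each entry links to the hash of the previous one

    def _hash(self, record):
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append(self, subject, data_space, purpose, granted):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "subject": subject,        # pseudonymous ID of the data subject
            "data_space": data_space,  # which dedicated data space the permit covers
            "purpose": purpose,        # e.g. "diagnostic-research"
            "granted": granted,        # True = grant, False = revocation
            "time": time.time(),
            "prev": prev,
        }
        record["hash"] = self._hash({k: v for k, v in record.items() if k != "hash"})
        self.entries.append(record)

    def current_permit(self, subject, data_space, purpose):
        # the most recent matching entry decides whether use is permitted
        for record in reversed(self.entries):
            if (record["subject"], record["data_space"], record["purpose"]) == (
                subject, data_space, purpose,
            ):
                return record["granted"]
        return False

    def verify(self):
        # recompute the hash chain to detect tampering with past entries
        prev = "0" * 64
        for record in self.entries:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev"] != prev or self._hash(body) != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

A data subject could grant a permit, later revoke it, and any processor could check the current state and verify that the history has not been altered.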

Thirdly, as to responsibility, I think there is no way other than holding the producer of an AI-based health application responsible, while the obligation of algorithmic accountability should be introduced as a general principle.

 

In reply to Norbert JASTROCH

Posted by Andres Abad Ro… on Mon, 02/07/2018 - 21:37

Great post, thanks! There is also another thing that affects not only health applications, although this type of application is especially sensitive: how different cultures and age groups trust and approach AI systems. How will, for instance, an elderly person not used to technology interact with these systems?

 

In reply to Andres Abad Ro…

Posted by Richard Krajčoviech on Wed, 11/07/2018 - 08:08

The critical part in usage is probably understanding what inputs are required and whether there is any misunderstanding regarding the provided data and the interpretation of results. This might remain a human role for a long time (a medical doctor, nurse or other educated and responsible staff), at least until AI applications prove to be foolproof (or until insurance companies are willing to insure the damages :-).

In reply to Norbert JASTROCH

Posted by Richard Krajčoviech on Wed, 11/07/2018 - 08:17

I fully agree with the responsibility being with the producer (potentially shared with the trained operator).

I am not sure how a distributed ledger can help with the management of individual data-use permits. I would prefer anonymization, which I think is even today's practice. "Organizations that process personal data for research purposes may avoid restrictions on secondary processing and on processing sensitive categories of data (Article 6(4); Recital 50). As long as they implement appropriate safeguards, these organizations also may override a data subject's right to object to processing and to seek the erasure of personal data (Article 89)." (Quoted from https://iapp.org/news/a/how-gdpr-changes-the-rules-for-research/)
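One common building block for the anonymization practice mentioned here is pseudonymization: direct identifiers are replaced with a keyed hash (HMAC), so records can still be linked for research without exposing who the patient is. This is a hypothetical sketch, not any real system's code; the field names and key handling are assumptions, and under GDPR pseudonymized data still counts as personal data as long as the key exists.

```python
import hmac
import hashlib

# Assumption: this key is held only by the data controller, stored
# separately from the research data set itself.
SECRET_KEY = b"held-by-the-data-controller-only"

def pseudonymize(record, id_fields=("name", "national_id")):
    """Replace direct identifiers with stable keyed-hash pseudonyms."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            token = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
            out[field] = token.hexdigest()[:16]  # same input -> same pseudonym
    return out
```

Because the same identifier always maps to the same token, a researcher can follow one patient across records, while someone without the key cannot recover the original identity from the token alone.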

Posted by Anonymous (not verified) on Tue, 03/07/2018 - 21:10

User account was deleted

In reply to Anonymous (not verified)

Posted by Richard Krajčoviech on Sat, 14/07/2018 - 11:07

We should distinguish between AI as such and specific models. If a specific model, properly trained and tested, proves to be beneficial, its usage might become included in recommended diagnosis procedures. We should also distinguish between models in medicine and in, for example, entertainment. AI in entertainment might be able to learn your voice, habits etc. at your home and adjust its behavior accordingly, but for medical use, AI needs much more thorough training on well-built training sets and much more thorough testing. So the development of an AI system for medicine will remain much more expensive than for general use, as it is today.

We do not have enough data to support making AI usage generally mandatory.

Posted by Mariana POPOVA on Fri, 20/07/2018 - 15:42

Hi all,

I would like to add a new point to your interesting discussion: the specific case of mental health and AI. What about the use of mental health chatbots? On the one hand, you have financially and physically accessible mental health care; on the other hand, the issue of privacy is quite sensitive, since your most private data go to social media messengers. Also, how reliable is the treatment? Most of the people who need it have already experienced a trauma connected to somebody's missing presence; to what extent could this be replaced by a technical AI-based solution? And again, who is responsible in case of a wrong diagnosis and a worsening of the situation?

In reply to Mariana POPOVA

Posted by Andres Abad Ro… on Tue, 24/07/2018 - 17:20

Hi Mariana,

Definitely, mental health is an important use case for AI. I can see a clear use case for helping with depression cases.

In reply to Andres Abad Ro…

Posted by Mariana POPOVA on Fri, 27/07/2018 - 11:27

Hi Andres,

Thanks for your reply and for initiating this interesting discussion. Do you think AI can capture the specificity of individual depression cases? Using AI for cancer diagnosis should be somewhat different from depression diagnosis, since the former has objective physical determinants while the latter is more connected with individual specificities, which in general are difficult to treat with universal model solutions.

Posted by Andres Abad Ro… on Tue, 24/07/2018 - 17:25

Summarizing the discussion so far, we have covered these elements:

  • Medical data privacy
  • Usage on mental health cases
  • Responsibility

I would now like us to extend this discussion to find use cases in the health environment. Mental health was an excellent starting point, and I suggest that we try to enumerate more, so we can create a proper list of possible applications. What do you propose?