Socially just AI beyond ethics

Dear all,

I have written a concise roundup of AI ethics issues, including global policy developments, for the LSE British Politics and Policy blog: http://blogs.lse.ac.uk/politicsandpolicy/artificial-intelligence-and-so….

Here, I suggest that we need a clear picture of ‘AI’, ‘ethics’ and ‘bias’; that AI inequality must be placed more centrally; that the social sciences need to play a more active part (and that funding opportunities need to reflect this); that there is a need for AI intelligibility, education and regulation; and that AI must serve as a gateway to tackling urgent social problems.

I hope that the piece can inform the work of the High Level Group and look forward to responses from EU AI Alliance discussants on this platform - especially as some of these points overlap with issues that have been raised here.

Tags: innovation, Artificial Intelligence, social sciences, policy, machine learning, Ethics, Inequality

Comments

Submitted by Mariana POPOVA on Thu, 19/07/2018 - 15:15

Hi Mona,

Thanks for sharing your comprehensive article with us. You might be interested to check out the activities of our REELER project, which deals with the social implications of AI. At the project website http://www.reeler.eu/ you can find useful information related to the topic, as well as a number of prominent scientists specialising in the social aspects of AI.

Submitted by Benjamin Paaßen on Sat, 21/07/2018 - 22:22

Dear Mona,

thank you very much for this very insightful article. I completely agree with the major points you made, especially that ethics and bias are currently ill-defined in the machine learning community, that the expertise on inequality accumulated in the social sciences is not sufficiently heard, and that inequality generated or perpetuated by AI applications is the major concern. Whether or not the AI hype continues, we will likely see continued and increasing application of machine learning technologies for algorithmic decision making in settings that bear on inequality, such as criminal justice, banking, hiring and medicine. Regulating these decision-making systems is, I believe, one of the key challenges for AI ethics in the coming years.

Best regards

Benjamin

In reply to Benjamin Paaßen

Submitted by Mariana POPOVA on Mon, 23/07/2018 - 15:45

Hi Benjamin,

I cannot but agree with all your points. They are fully in line with our policy, namely the provisions of the EU AI strategy that aim at leaving nobody behind with regard to the new opportunities offered by AI. Since we face a sea of concepts and notions of AI, I would be interested to hear what exactly you mean by ill-defined ethics and bias in the machine learning community; perhaps you could give some specific examples. Thanks.

In reply to Benjamin Paaßen

Submitted by Mona Sloane on Mon, 23/07/2018 - 17:43

Dear Benjamin,

Thank you very much for your comments. I am glad to hear they resonate with your interpretation of the field. I agree that regulation will be key, as will the enforcement of such regulation. This will require highly skilled and agile regulators, and an ongoing dialogue with higher education, research and the AI industry. I also think it is vital to specify the notion of 'ethics' and relate it to specific algorithmic products and systems. Grounding this specified notion of 'ethics' in empirical evidence sourced from social research will be essential, and I believe it will be more productive (and inclusive) than privileging abstract concepts derived from moral philosophy. The 'Algorithmic Impact Assessment' report by AI Now (focused on the use of algorithmic systems by public agencies) is a great leap in the right direction: https://ainowinstitute.org/aiareport2018.pdf.

Best wishes,

Mona

In reply to Mona Sloane

Submitted by Mariana POPOVA on Wed, 25/07/2018 - 12:15

Hi Mona,

From the standpoint of evidence-based EU policy making, I fully agree that definitions of AI-related ethics should be sourced from social research rather than from abstract concepts. Here it would be good for the social research findings to be based also on objective statistical indicators. Yet at the moment there is a lack of official statistical indicators on AI. So my question to you as a social researcher is: what kind of statistical indicators would you need to perform representative AI-related social research?

In reply to Mariana POPOVA

Submitted by Mona Sloane on Fri, 27/07/2018 - 11:47

Dear Mariana,

Thank you for your reply. I am a qualitative social researcher, not a quantitative one, and therefore am not in a position to answer your question about ‘objective statistical indicators’. But I will say this: I am a little concerned that ‘social research’ tends to be equated with quantitative analysis and statistics. I argue in the piece above that we need a more holistic approach to AI innovation and social justice. This is a call for grounding AI policy and innovation in social evidence sourced from beyond the realms of statistics. Here, we need more and better mixed-method approaches, as well as collaborations between technologists and (the full range of) social researchers.

The backdrop for this is that many of the problems we are seeing today in the context of AI, bias and discrimination are (not exclusively, but certainly to a substantial degree) grounded in the categorisation of data to make it fit for statistical analysis and algorithmic prediction. This pre-classification necessitates a reduction of social complexity, often at the expense of more reflexive data collection practices. In the context of automated systems, this often works to the advantage of the already-privileged and leads to discrimination and the perpetuation of socio-cultural biases – much of the cutting-edge research in this area shows this (see e.g. the work of Cathy O’Neil, Safiya Umoja Noble, Virginia Eubanks, Timnit Gebru, Joy Buolamwini, Kate Crawford, Hanna Wallach, Sasha Costanza-Chock). We currently run the risk of opting for (what we believe is a) quick technocratic fix for a problem that is deeply ingrained in society, not just in AI innovation. I highly recommend Kate Crawford’s recent talk at the Royal Society on this topic: https://www.youtube.com/watch?v=HPopJb5aDyA.
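To make the mechanism concrete, here is a deliberately toy sketch in plain Python. The groups, sizes and thresholds are invented purely for illustration (they come from no real study or dataset): once a socially relevant category has been collapsed away during pre-classification, a single decision rule fitted to the pooled data tracks the majority group and misclassifies the minority group at a far higher rate.

import random

random.seed(0)

def make_group(n, cutoff):
    # Each record is (score, true_outcome); the 'true' cut-off differs by group.
    records = []
    for _ in range(n):
        score = random.uniform(0, 1)
        records.append((score, score > cutoff))
    return records

# Hypothetical groups whose outcomes turn on different score thresholds,
# a difference the pooled dataset no longer records.
majority = make_group(900, cutoff=0.5)
minority = make_group(100, cutoff=0.7)
pooled = majority + minority  # group membership discarded in pre-classification

# Fit the single best cut-off on the pooled data (dominated by the majority).
best_cut = max((c / 100 for c in range(101)),
               key=lambda c: sum((score > c) == outcome
                                 for score, outcome in pooled))

def error_rate(group):
    return sum((score > best_cut) != outcome
               for score, outcome in group) / len(group)

print(f"learned cut-off:     {best_cut:.2f}")
print(f"majority error rate: {error_rate(majority):.2%}")
print(f"minority error rate: {error_rate(minority):.2%}")

Nothing in the learned rule is 'biased' by intent; the disparity enters entirely through what the categorisation step threw away, which is exactly why the data epistemology mentioned below matters.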

To get back to your question: I would argue that, overall, we urgently need research into (1) AI innovation and inequality, including research into the socio-cultural make-up of the AI industry (with a view to intersectional inequality, not just class, racial or gender inequality), studying technologists as a new elite (comparable to, e.g., bankers) as well as the organisational and commercial power structures (including policy making) that are at play; (2) data epistemology, including research into how training data (visual and linguistic) for automated systems is sourced and deployed, and into how social groups are framed within this data; (3) notions of ‘ethics’, including research into what constitutes ‘ethical’ considerations and behaviours amongst AI technologists, regulators and users.

I hope this helps.

Best,

Mona

In reply to Mona Sloane

Submitted by Mariana POPOVA on Tue, 31/07/2018 - 16:45

Dear Mona,

Many thanks for this insight. If I may use it to modify my initial question a bit: assuming that there is no objective data in any of the three AI-related research domains you outline (which is quite a fair assumption), where, in your opinion, do we start? (This is actually an open question to all readers.) Is it possible to prioritise among the domains? If so, what would be your first priority?

Best,

Mariana

In reply to Mariana POPOVA

Submitted by Mona Sloane on Wed, 01/08/2018 - 15:32

Dear Mariana,

Thank you for your response. I think your question is an important one, and I agree that it concerns everyone within and beyond the Alliance and the HLG. Perhaps you can take it to the next meeting?

My own answer is twofold: (1) I think the EC's role is to create an ecosystem which helps to ensure that all three domains are covered (they are by no means exhaustive; other members will surely contribute more) and that research is conducted on them. This means providing funding opportunities, particularly for the social sciences and humanities, not just for technological research on AI – and these opportunities need to be made available on a quick-turnaround basis, not just via the large-scale funding calls the EU usually puts out;

(2) it is key to transfer this research into action on an ongoing basis (perhaps you can incentivise researchers to do this from the get-go) – in concrete terms, this can mean relating research on ‘ethics’ reflexively to existing legal frameworks, making sure ethical guidelines are continuously reviewed, and putting the theme of data epistemology (i.e. where does training data come from, how was it classified, what kinds of information reduction have taken place, what kinds of assumptions are deployed in categorising it, etc.) in context with regulatory enforcement and business practice – a simple sketch of what recording such answers could look like follows below.
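For the technologists among the readers, one can imagine operationalising this as a structured, reviewable record attached to every training dataset. The Python sketch below is purely illustrative: the DatasetProvenance type and its field names are my own invention rather than an existing standard (though they echo dataset-documentation proposals circulating in the research community), and they simply turn the data-epistemology questions above into fields.

from dataclasses import dataclass
from typing import List

@dataclass
class DatasetProvenance:
    # Hypothetical record type: each field answers one question from above.
    source: str                      # where the training data comes from
    classification_scheme: str       # how it was classified
    information_reduced: List[str]   # what information reduction took place
    category_assumptions: List[str]  # assumptions deployed in categorising it
    last_reviewed: str               # supports continuous review of guidelines

record = DatasetProvenance(
    source="public CVs scraped in 2017 (invented example)",
    classification_scheme="occupations coded to 12 broad sectors",
    information_reduced=["part-time vs. full-time status", "career breaks"],
    category_assumptions=["continuous employment is the norm"],
    last_reviewed="2018-07",
)
print(record)

A regulator or auditor could then request this record alongside the deployed system, rather than having to reconstruct the answers after the fact.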

I look forward to input from other members on this.

Best wishes,

Mona

In reply to Mona Sloane

Submitted by Mariana POPOVA on Thu, 02/08/2018 - 10:41

Dear Mona,

Thank you for the prompt and detailed reply. We take the non-technical aspects of AI quite seriously: two pillars of our three-pronged EU AI strategy address them directly. We also need to make sure we have an adequate framework at project level, so we take note of your proposal for quicker, short-term projects.

I again invite everybody concerned to complement Mona's proposal and to share their thoughts on what kind of social research we need to make AI development and deployment a success in Europe.

Kind regards,

Mariana