Hi All,
I would be very much interested in any exchange of best practices/ideas/initiatives on addressing bias in AI systems (in particular gender bias).
P.S. Great initiative by the Commission.
Thank You!
Comments
I believe that, much like a human organization, an AI organization could be divided into two or more basic operational models. For example, a significant difference in the capabilities two AIs have could create the illusion of separate species, exactly the way we differentiate male and female beings by significant external and operational differences even while acknowledging that both are human.
User account was deleted
Reply to comment "I will be based in Ghent next" by Anonymous (unverified)
Yes, that would be great - my schedule is more flexible next month if you like?
Hi Emmanouil,
Take a look at the recent work of the EU Fundamental Rights Agency on this topic, namely "#BigData: Discrimination in data-supported decision making (May 2018)". It suggests four potential ways of minimising the risk of discrimination when using big data for automated decision making.
Reply to comment "Hi Emmanouil," by Eleftherios Ch…
Great report. Thanks.
Hi Emmanouil,
I'm slightly puzzled by the context of your question, and judging by the different responses provided, I'm not the only one.
Nevertheless, from a strictly data-scientific point of view, you want your data to be as bias-free as possible (not to be confused with model bias, as in the bias-variance trade-off). If the data to be modelled are biased and you attempt some sort of multi-category classification, the imbalance may result in the algorithm learning to detect only the dominant category rather than truly determining the correct one. Not all estimators suffer from this sensitivity (decision-tree-based techniques do pretty well on imbalanced data), but in principle you should start by considering three options: downsampling the dominant category, i.e. removing some samples of the most common class from the dataset; upsampling the under-represented classes, making sure you don't alter the true nature of the real data; or mixing the two approaches, typically upsampling followed by downsampling.
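To make the resampling options concrete, here is a minimal sketch in plain Python (function names and the fixed per-class target count are my own illustrative choices, not from any particular library):

```python
import random


def group_by_class(samples, labels):
    """Bucket samples by their class label."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    return by_class


def downsample(samples, labels, target_count, seed=0):
    """Randomly drop samples of over-represented classes down to target_count."""
    rng = random.Random(seed)
    out_x, out_y = [], []
    for y, xs in group_by_class(samples, labels).items():
        if len(xs) > target_count:
            xs = rng.sample(xs, target_count)  # keep a random subset
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
    return out_x, out_y


def upsample(samples, labels, target_count, seed=0):
    """Duplicate random samples of under-represented classes up to target_count."""
    rng = random.Random(seed)
    out_x, out_y = [], []
    for y, xs in group_by_class(samples, labels).items():
        extra = [rng.choice(xs) for _ in range(max(0, target_count - len(xs)))]
        xs = xs + extra
        out_x.extend(xs)
        out_y.extend([y] * len(xs))
    return out_x, out_y
```

Mixing the two, as Paweł suggests, would mean upsampling the rare classes part-way and then downsampling the dominant one to the same target. Note that naive duplication can encourage overfitting on the rare class, which is why he cautions against altering the true nature of the data; synthetic approaches such as SMOTE exist for that reason.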
Not sure this is the angle you were aiming at, but if it is, I hope this helps guide you in the right direction.
Regards,
Paweł
Hi Emmanouil,
I'm sending you a link to a position paper I presented at the ICSE conference this year on gender bias in AI - hope this helps.
https://www.insight-centre.org/content/gender-bias-artificial-intellige…
Thanks,
Susan.
Hi,
This is a very specific case example, but I'm sending it because it's not clear from your question whether you're looking for general information or for specific techniques and algorithms.
Check this one - the video on debiasing word embeddings may give you some insights.
https://www.coursera.org/learn/nlp-sequence-models/home/week/2
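The core step that debiasing-word-embeddings material typically covers is "neutralizing": projecting out the component of a word vector that lies along a learned bias direction (e.g. a gender direction). A minimal NumPy sketch, assuming the bias direction g has already been estimated:

```python
import numpy as np


def neutralize(e, g):
    """Remove the component of embedding e along bias direction g,
    so the result has zero dot product with g (the 'neutralize' step)."""
    e = np.asarray(e, dtype=float)
    g = np.asarray(g, dtype=float)
    # Subtract the orthogonal projection of e onto g.
    return e - (e @ g) / (g @ g) * g
```

In practice g is often estimated as the difference of paired vectors (e.g. "woman" minus "man") or via PCA over several such pairs, and neutralization is applied only to words that should be gender-neutral.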
Cheers,
Jarek