How Gender Bias in AI Models Hurts Everyone

In 2020, the European Commission released its white paper, “White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.” In it, the Commission called for standard requirements for the data sets that train AI systems in order to avoid bias, including gender bias: “Requirements to take reasonable measures aimed at ensuring that [the] use of AI systems does not lead to outcomes entailing prohibited discrimination. These requirements could entail in particular obligations to use data sets that are sufficiently representative, especially to ensure that all relevant dimensions of gender, ethnicity and other possible grounds of prohibited discrimination are appropriately reflected in those data sets.”

Bias can creep into algorithms through the historical data sets they are trained on. Humans are inherently biased, and our personal biases and social gender inequalities are often reflected in data about the past. When this happens, the outcome can be negative, or even deadly.

How gender bias creeps into AI models

Gender often plays a role in the development and application of AI. We know from research that models trained on data skewed toward one gender are less accurate for the population as a whole. In a 2021 study, “Gender Bias in Artificial Intelligence: Severity Prediction at an Early Stage of COVID-19,” researchers examined what bias could occur when an AI model predicting patient severity in the early stage of coronavirus disease (COVID-19) was trained on data from only one gender versus a more diverse data set. They found that the gender-dependent AI model was less accurate than the unbiased, mixed-gender model.

Biased algorithms also hurt women’s careers. A study from UNESCO, the OECD and the Inter-American Development Bank found that because many resume-scanning systems are built on historical job-performance data in which men, specifically white men, scored highest, the tools are inherently biased against women.

Data sets used to train AI models need to represent the populations they serve. A data set that leans more male than female will train the algorithm to detect male-specific outcomes more reliably. For example, in a study assessing digital biomarkers for Parkinson’s disease, only 18.6% of the people in the data set were women. An algorithm trained on this data set will more accurately detect the symptoms that appear more often in men and less accurately detect female-specific symptoms. This bias in the data can lead to less accurate detection of Parkinson’s symptoms in women and worse patient outcomes.
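To make the mechanism concrete, here is a minimal sketch in Python using purely synthetic data, not data from any of the studies cited above. The cohort sizes, feature meanings and effect sizes are invented for illustration; the only assumption is that the informative symptom pattern differs by gender, so a model trained on a heavily male sample learns the male pattern best.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, male_frac):
    # Synthetic patients: the informative "symptom" feature differs by gender
    # (an assumption made purely for illustration).
    male = rng.random(n) < male_frac
    y = rng.integers(0, 2, size=n)          # disease present (1) or absent (0)
    X = rng.normal(size=(n, 2))
    X[male, 0] += 2.0 * y[male]             # men: signal mostly in feature 0
    X[~male, 1] += 2.0 * y[~male]           # women: signal mostly in feature 1
    return X, y, male

# Training sets: one mirroring the ~18.6% female share mentioned above, one balanced.
X_skew, y_skew, _ = make_cohort(5000, male_frac=0.814)
X_bal, y_bal, _ = make_cohort(5000, male_frac=0.5)

# A held-out test population that is 50/50 male/female.
X_test, y_test, male_test = make_cohort(5000, male_frac=0.5)

for name, (X_tr, y_tr) in [("skewed", (X_skew, y_skew)), ("balanced", (X_bal, y_bal))]:
    model = LogisticRegression().fit(X_tr, y_tr)
    pred = model.predict(X_test)
    acc_men = accuracy_score(y_test[male_test], pred[male_test])
    acc_women = accuracy_score(y_test[~male_test], pred[~male_test])
    print(f"{name:8s} training set: accuracy men {acc_men:.2f}, women {acc_women:.2f}")

On this toy data, the model trained on the skewed cohort should score noticeably lower for women than for men, while the model trained on the balanced cohort narrows that gap; the exact numbers depend on the random seed.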

Ethical AI 

The negative implications of gender-biased AI for broader society cannot be overstated. At anch.AI, we are focused on AI governance. We believe it is vital that organizations not only identify these biases in their algorithms, but also take steps to mitigate them. That is why we created our Ethical AI Governance Platform, an AI prediction and recommendation tool that assesses which ethical AI risks an organization is exposed to and presents next steps for mitigating them. With anch.AI, organizations can quickly and efficiently adopt responsible AI solutions and gain control over their ethical AI risks, all while upholding regulatory compliance and conformity to ethical principles. The result is stronger companies, better technology and better outcomes for everyone.

 
