De-mystify tech silos for responsible AI in workforce management

The Swedish Government has assigned the Discrimination Ombudsman (DO) to start mapping the discrimination that the use of AI can entail within the Swedish labor market. It is about time. Discrimination, social exclusion, and injustice can no longer be neglected because organizations are unable to govern AI cross-functionally. The anch.AI platform will help the DO map unintended ethical AI risks.

In workplace and workforce management, the deployment of AI systems is considered pioneering and groundbreaking, as it fuels major change and efficiency gains. Yet left ungoverned, these AI solutions open the door to ethical and legal breaches, leading to costly reputational harm and a loss of trust.

In the world of work, the historical data on which AI systems are trained usually reflects a non-diverse society. The data is therefore inherently biased, perpetuating the same old and sad realities of the job market. It is even more troubling that the efforts made in the non-digital world to mitigate bias and enhance diversity and inclusion find no echo in a digital world left skewed and distorted by biased data and by the non-diverse representation of the people behind the AI systems.
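To make the data-bias point concrete, here is a minimal sketch, in Python and with entirely hypothetical numbers, of the kind of check the non-digital world already applies to hiring outcomes: the "four-fifths rule", under which a group's selection rate falling below 80% of the highest group's rate signals potential adverse impact. A model trained on historical outcomes like these would learn the disparity as signal unless the data is audited and corrected.

    # Hypothetical historical hiring outcomes: (hired, applicants) per group.
    hired_by_group = {
        "group_a": (45, 100),
        "group_b": (18, 100),
    }

    # Selection rate = share of applicants in each group who were hired.
    selection_rates = {
        group: hired / applicants
        for group, (hired, applicants) in hired_by_group.items()
    }
    highest_rate = max(selection_rates.values())

    # Four-fifths rule: flag any group whose rate is below 80% of the highest.
    for group, rate in selection_rates.items():
        ratio = rate / highest_rate
        verdict = "potential adverse impact" if ratio < 0.8 else "within the guideline"
        print(f"{group}: selection rate {rate:.2f}, "
              f"ratio to highest {ratio:.2f} -> {verdict}")

Screening training data with even this simple heuristic, before any model is built, is one way the mitigation efforts of the non-digital world can find an echo in the digital one.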

However, none of these warnings is new. What is concerning is that AI and automated decision-making in working life are assumed to get rid of bias and unfairness. We have all heard the assumption that automated decision-making will simply be the solution to combat discrimination and injustice at work. An example is HireVue, a leading provider of hiring software based on algorithmic assessment, which, while hammering home on its website its ability to “build a faster, fairer, friendlier hiring process”, was nonetheless compelled last year to “kill off a controversial feature of its software: analyzing a person’s facial expressions in a video to discern certain characteristics”, according to a report in the AI Incident Database.

All this adds weight to the view that AI risks must be tackled if we want to combat discrimination and injustice. AI and automated decision-making in working life cannot be left to tech silos; they must be governed cross-functionally, involving the legal and business sides as well.

In Sweden, the Government has now stated in a press release that AI risks in the labor market must be mapped and combated, as the technology can lead to discrimination. The Swedish Government has therefore formally commissioned the DO to “map the risks of discrimination that the use of AI and other automated decision-making can entail and to what extent and in which contexts employers can use such technical solutions”, says Deputy Minister of Labor Johan Danielsson.

That is a good start. The anch.AI platform is a manifestation of the state-of-the-art anch.AI methodology. Our multidisciplinary research on governing ethical AI risks started in 2016, and almost 200 AI use cases have since been screened for ethical risk exposure using our methodology, many of them in the space of AI in employment and recruitment. I welcome the work of the DO and am happy to share our insights to support the investigation.

Tags
AIGovernance