HLEG Guidelines on Trustworthy AI - Implementation in Financial Services Workshop, 28 October 2019

Workshop outline:

As stated by the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) in their Ethics Guidelines for Trustworthy Artificial Intelligence[1], in a context of rapid technological transformation, it is essential that trust remains the bedrock of societies, communities, economies and sustainable development. In this respect, it is important to ensure that technologies are developed that adhere to ethical principles and values.

Firms within the financial services industry, as they consider how to implement the Guidelines, may identify common considerations and challenges across the industry.

As a contribution to the pilot phase of the Guidelines, this half-day workshop of financial services firms and trade body representatives discussed the challenges, suggested solutions, and any areas where further assistance from policymakers and regulators may be useful. Discussion took place under the Chatham House Rule.  

Summary of Discussion:

The discussion centred on three key themes emerging from the Guidelines: data bias; transparency & explainability; and oversight & accountability.

Please note that this is not a verbatim transcript of the discussion and that the summary has been organised by topic for clarity. In addition, this summary does not necessarily reflect the positions of individual participants or organisations represented at the discussion.

  1. Data Bias
  • Further clarification is needed on data bias and how it should be considered separately from human bias and algorithmic bias.
  • It is important to keep accuracy and fairness as separate concepts in the assessment list. There was also the view that fairness of process is different from fairness in a model.
  • Fairness is a concept that has been extensively considered within financial services in terms of obligations to clients e.g. credit decisions, mark-up on trades. It may be useful to perform analysis of the existing ‘fairness’ obligations in financial services legislation, or other independent standards (e.g. ISO) in any consideration of sector-specific guidance.
  • While all firms must consider the social consequences of their decisions, there was a concern that they are now being challenged to be vehicles for social justice, which is not usually a primary objective.
  • While some protected groups already exist, firms should be aware that the use of AI may create additional protected groups. Furthermore, this is a concept which varies in its definition between jurisdictions, which can be challenging for firms. 
  • In some cases, it may be better to retain a challenging data point and mitigate its effect than to remove it altogether. In use cases such as Anti-Money Laundering (AML) or credit decisioning, it may be critical to retain data points that might elsewhere be seen as discriminatory, e.g. country of origin.
  • Fairness of outcomes should remain the responsibility of the organisation and cannot be delegated to an automated process.
  • It was noted that the EU General Data Protection Regulation (GDPR) is being used as a model in other jurisdictions and that some firms have chosen to implement it globally. This helps with standardisation, but the regulation is open to interpretation in places and can be a challenge to implement with local or regional requirements in some jurisdictions.
  • Firms should consider the different controls and processes required for AI models that are bought in, rather than developed in-house.
  • The differing obligations in relation to data sharing across the sector were raised as a challenge for firms – broader data sharing obligations would generally be welcome.

 

  2. Transparency & Explainability
  • It can be challenging to delineate AI from existing uses of technology. There has been a significant rise in the availability of data and computing power, but many of the principles and processes applied to existing technologies are equally applicable to AI.
  • The distinction between the concepts of ‘transparency’ and ‘explainability’ is important and not widely understood. Technical explainability of a model has a specific meaning which should not be confused with the more general use of the word. Transparency in relation to testing and the assumptions used in the model development are likely to be more applicable for many stakeholders and is generally part of good modelling practice in any case.
  • The ability of the end user to understand an AI model was briefly discussed as a baseline. However, there was agreement that the complexity of AI, together with the limited technical knowledge of those who are not AI practitioners, renders this impractical.
  • Nevertheless, there is an expectation from consumers that firms should be transparent when interacting via AI; for example, at present the disclosures relating to robo-advice differ widely.
  • GDPR is helpful in terms of considering the input data, but additional transparency measures are required for the output of an AI model. Furthermore, the application of the output of a model was seen as higher-risk than the development of the modelling itself.
  • Proportionality is seen as key: the degree and type of transparency required will vary for different stakeholders and different AI models. For example, the development of a model that performs natural language processing is likely to require less transparency, provided that the output is tested.
  • The importance of protecting intellectual property and trade secrets was also noted.

 

  3. Oversight & Accountability

    • Firms have reassessed their governance structures for use of AI, some creating new roles (e.g. Chief AI Officer) and some re-examining existing roles (e.g. how a Chief Diversity Officer should be thinking about diversity in AI teams).
    • Governance structures should take diversity into account, including diversity of thought; this may be particularly helpful when considering edge cases.
    • The importance of identifying and involving all relevant stakeholders was discussed, e.g. Chief Data Officers, Chief Privacy Officers etc. In large firms, this often involves connecting individuals who have not necessarily interacted with each other previously and making them aware of each other’s roles in an AI project. This was noted as a positive outcome of the Guidelines and the AI HLEG consultation on the assessment list.
    • A good measure is whether firms and individuals are taking decisions that make sense and that they can explain to others. The focus remains firmly on individuals being accountable, not AI models themselves. While machines should not be held to a higher standard than humans, they can operate at significantly higher scale, so a lower bar must not be set either.
    • The difference between principles and process was discussed, with each the responsibility of a different part of the governance structure. The implementation of each is dependent on the risk appetite of the firm and also needs to adapt to different use cases of AI.  
    • The governance structure must be built to adapt as the technology develops and also able to identify, where possible, new issues that may arise.
    • It was suggested that deliberately putting a ‘bad’ case study through a firm’s governance processes may be a good way to test whether any gaps exist which would allow it to progress.
    • There was some concern that the approach taken to implementation of GDPR within the industry had inhibited rather than empowered individuals, with too much emphasis placed on the severe consequences of breaches. Instead, the Guidelines should be a tool to enable AI.
    • Firms must consider the policies of third parties they interact with, as well as their ability to control the use of AI throughout the supply chain; a firm is only as good as its weakest link.
    • Existing sector regulations regarding individual accountability and the apportionment of responsibility for each part of a firm’s business (e.g. the UK Senior Managers Regime) were noted.
    • Overall, firms should consider for whose benefit the AI model is being developed and deployed – in general the benefit will be mutual for the firm and its clients.

Attendees:

The following organisations were represented at the meeting:

Association for Financial Markets in Europe (AFME)

Barclays

BBVA

Better Finance

Deutsche Bank

European Association of Co-operative Banks (EACB)

European Banking Federation (EBF)

Insurance Europe

Intesa Sanpaolo

Mastercard

Orange Bank

Raiffeisen Bank

Visa

 

The following policymakers and regulators were also present:

European Banking Authority (EBA)

European Commission – DG CONNECT

European Commission – DG FISMA

European Commission High Level Expert Group on Artificial Intelligence (AI HLEG)

UK Financial Conduct Authority (FCA)

 

[1] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
