27 November 2020

Thinking outside the black box: explainable AI is coming into focus

Artificial intelligence (AI) and machine learning (ML) technologies continue to be deployed in an increasingly broad array of applications, across both the private and public sectors. In healthcare applications, for example, AI/ML has supported the identification and development of potential vaccines for COVID-19.

Other applications include AI/ML solutions in the financial sector, supporting credit decisions and investment advice, and in employment scenarios, supporting human resources functions in triaging high volumes of applications.

Lawmakers continue to consider the appropriate approach to regulating AI/ML, and the European Commission is working to produce draft legislation specifically aimed at AI/ML development and deployment. Considerable debate remains as to whether technology-specific regulation is desirable at this stage, or whether existing technology-neutral rules, including data protection laws, strike an appropriate balance between protection from harm and fostering innovation.

Data protection laws have increasingly taken centre stage in the regulation of AI/ML, and the UK’s Information Commissioner’s Office (ICO) has partnered with the Alan Turing Institute to produce a guidance note entitled ‘Explaining decisions made with AI’ (the ICO Guidance) for those developing and deploying AI/ML technology to support decision making about individuals. The ICO Guidance sets out a framework for explaining decisions made using AI/ML and addresses both AI/ML-supported decisions and ‘solely-automated’ decisions made without human intervention.

What emerges from the ICO Guidance is that the regulatory requirements for explaining AI/ML decisions pose significant challenges for so-called ‘black box’ technologies; these typically comprise deep-learning algorithms, the functioning of which can be particularly difficult to explain in plain, non-technical language. Even for less opaque applications, data protection law’s requirement for transparency about the uses made of personal data, and the higher threshold for explanation where decisions affecting individuals are made exclusively by algorithm, mean that organisations may well find it commercially beneficial to focus on ‘human in the loop’ decision making supported by AI/ML.

Explainability emerges as a path to successful AI/ML

Reflecting the growing importance of AI/ML technologies in significant decision-making processes, governments, regulators, international organisations and developers of AI/ML technology have issued a range of codes, frameworks, white papers and guidance documents setting out, in particular, ethical parameters for the development and deployment of AI/ML. From the myriad attempts to identify foundational principles for a successful future for AI/ML technologies (by one count, there are in excess of 80 such sets of principles in circulation¹), we can distil a common understanding: successful AI/ML must be trustworthy, and this will require, to varying degrees, the ability to explain the workings and/or outputs of such systems.

The codes and guidance point to ‘explainable AI’ as key to overcoming a lack of trust in AI/ML systems that are perceived as opaque, in particular by those affected by decisions which the technology supports. Laws directed specifically at regulating AI/ML are in development, with draft legislation anticipated from the European Commission in the first half of 2021. Existing laws, however, already regulate decisions supported by AI/ML, including equality and human rights laws such as the Equality Act 2010, and sector-specific regulations for financial services and medical devices.

Data protection law, in particular, has application in all phases of AI/ML, from training through deployment. In our Winter 2019 edition of Inside IP, we considered the ways in which risk can be limited by focussing on data minimisation; equally important are the data protection principles of fairness, transparency and accountability, which are closely linked with the notion of explainable AI.

In its 2018 report, ‘AI in the UK: ready, willing and able?’, the House of Lords Select Committee on Artificial Intelligence concluded that AI systems should be “intelligible to developers, users and regulators.” The Council of Europe’s Convention 108 text on the protection of individuals in relation to automated decision-making provides that:

“Data subjects should be entitled to know the reasoning underlying the processing of their data, including the consequences of such reasoning, which led to any resulting conclusions, in particular in cases involving the use of algorithms for automated-decision making including profiling.”

Similarly, the European Commission, in its February 2020 white paper ‘Artificial Intelligence: a European approach to excellence and trust’, identified the challenges posed by technologies perceived to be a ‘black box’:

“The specific characteristics of many AI technologies, including opacity (‘black box-effect’), complexity, unpredictability and partially autonomous behaviour, may make it hard to verify compliance with, and may hamper the effective enforcement of, rules of existing EU law meant to protect fundamental rights. Enforcement authorities and affected persons might lack the means to verify how a given decision made with the involvement of AI was taken and, therefore, whether the relevant rules were respected. Individuals and legal entities may face difficulties with effective access to justice in situations where such decisions may negatively affect them.”

Explaining AI/ML to meet data protection law obligations

The ICO Guidance sets out a framework for organisations using AI/ML either to support decisions about individuals or, in some cases, to make decisions without human intervention. It seeks to assist those deploying AI/ML technology in meeting, in particular, the principles of fairness, transparency and accountability under the EU General Data Protection Regulation (GDPR) and the UK Data Protection Act 2018, as well as the specific rules under those regimes for decisions which are ‘solely automated.’ The key GDPR principles, which apply to any processing of personal data for decision making in which AI/ML is deployed, are:

Fairness takes account of how a decision affects the individual concerned. Where AI/ML is used to support a decision, the individual must be provided with information about that decision, which may take the form of an explanation; in its absence, the processing of their personal data may be unfair.

Transparency is a key aspect of GDPR compliance in the course of any processing of personal data. The ICO Guidance formulates the aim as being “clear, open and honest” with individuals about how their personal data are processed; providing a form of explanation about AI/ML-supported decisions will assist in meeting the requirement for transparency. Importantly, the ICO Guidance recalls that the transparency principle applies equally to personal data used in the training phase for AI/ML as well as in deployment.

Accountability is the ability to demonstrate compliance with data protection law when processing personal data, including any decisions about individuals supported by AI/ML. The provision of an explanation to individuals will itself help an organisation to be accountable, as it will show that fairness and transparency have been considered in the course of the AI/ML implementation, as well as in broader data processing activities.

Solely-automated decision-making – additional challenges for compliance

The data protection principles of the GDPR must be met wherever personal data are processed, including where AI/ML is used to support decision-making in which there is meaningful human intervention (i.e. more than ‘rubber stamping’ an algorithmic output) before a decision is reached (commonly referred to as ‘human in the loop’ processes).

The GDPR also contains a general prohibition against ‘solely automated’ decision making about individuals, where the decision ‘produces legal effects’ or similarly significant effects for the individual. Examples of such decisions are whether an individual is entitled, or not, to a social benefit such as housing, or whether they will be extended credit by a financial institution.

There are narrow exceptions to this prohibition on solely-automated decisions (in Article 22 of the GDPR), where the decision is either: (i) necessary for a contract with the individual; (ii) authorised by law; or (iii) based on the individual’s explicit consent. Even where a solely-automated decision can rely on one of these exceptions, an organisation deploying AI/ML may still prefer instead to use those technologies to support a decision, rather than relying on a solely-automated decision. This is because solely-automated decisions carry with them all of the compliance requirements of ‘human in the loop’ decisions under the GDPR, as well as further obligations (a simple illustration of routing significant decisions to human review appears after the list below):

  • The individual must be informed that solely-automated decision making is being employed, and an explanation must be provided of the significance and envisaged outcomes of the decision making;
  • Meaningful information must be provided about the logic involved in the decision making;
  • Individuals have the right to obtain human intervention in the decision process, to express their view, and to contest the solely-automated decision.
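To make the distinction concrete, the following is a minimal, purely illustrative sketch (in Python) of how an organisation might route decisions with legal or similarly significant effects to a human reviewer, so that the final decision is supported, rather than made, by the algorithm. The class, function and threshold below are hypothetical and are not drawn from the ICO Guidance or the GDPR.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str      # e.g. "approve" or "decline"
        decided_by: str   # "human" or "automated"
        rationale: str    # plain-language reason recorded for the individual

    def decide(model_score: float, has_significant_effect: bool, human_review) -> Decision:
        """Hypothetical gate: use the model output as support, but defer to a
        human reviewer where the decision produces legal or similarly
        significant effects on the individual."""
        if has_significant_effect:
            # Meaningful human intervention: the reviewer sees the model score
            # but may depart from it and records their own reasoning.
            return human_review(model_score)
        outcome = "approve" if model_score >= 0.5 else "decline"
        return Decision(outcome, "automated",
                        f"Automated outcome based on model score {model_score:.2f}")

The key point of such a design is that the human reviewer is free to depart from the algorithmic output; merely ‘rubber stamping’ the score would not take the decision outside the solely-automated category.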

The additional transparency obligations which accompany the use of solely-automated decision making pose a challenge for ‘black box’ AI/ML technologies, the operation of which may be particularly difficult to explain to the level required by the GDPR. Accordingly, for decisions which produce particularly significant effects on individuals, organisations may often find that deploying AI/ML in support of those decisions is preferable to relying on those technologies for solely automated decision making falling within the scope of Article 22 of the GDPR.

The general prohibition in Article 22 of the GDPR also extends to the ‘profiling’ of individuals by solely automated means, where such profiling is likely to produce legal or similarly significant effects on an individual. The precise limits of such profiling remain to be determined; however, the UK Information Commissioner has opined that automated profiling of an individual in order to target online advertising for particularly significant matters, such as political campaigning, may fall within the ambit of having a ‘legal or similarly significant effect,’ and would therefore be subject to the stricter conditions for solely-automated decision making under the GDPR.

Explaining AI/ML decisions

The ICO Guidance is presented in three parts, each aimed at a distinct operational group within organisations: data protection officers and compliance teams; technical teams; and senior management. The framework proposes six types of explanation for AI/ML decisions (set out below) and, for each type, provides examples of how those explanations can be approached based either on the process behind the decision making or on the outcome of a decision:

Process-based explanations focus on demonstrating that best practices have been followed in the development and deployment of AI/ML solutions which underpin a decision; for example, that the GDPR requirement for data protection by design and by default has been observed.

Outcome-based explanations focus on clarifying how a particular decision outcome was obtained, by explaining the reasoning behind the decision in “plain, easily understandable, and everyday language.” Where there has been meaningful human involvement in a decision, the explanation must be clear as to how the human decision-maker arrived at the decision and how they were aided by the AI system.
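By way of illustration, an outcome-based rationale of this kind is most readily produced where the underlying model is inherently interpretable. The sketch below assumes a simple logistic regression over a handful of named features; the feature names, training data and wording are hypothetical and are included only to show how a per-decision, plain-language rationale might be generated.

    # A minimal sketch, assuming an inherently interpretable model (logistic
    # regression) and hypothetical credit-style features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "existing_debt", "missed_payments"]
    X_train = np.array([[30, 5, 0], [20, 15, 3], [45, 2, 0], [18, 20, 4]])
    y_train = np.array([1, 0, 1, 0])  # 1 = credit granted, 0 = declined

    model = LogisticRegression().fit(X_train, y_train)

    def explain(applicant: np.ndarray) -> str:
        """Return a short, everyday-language rationale for one decision."""
        contributions = model.coef_[0] * applicant   # per-feature contribution
        strongest = feature_names[int(np.argmax(np.abs(contributions)))]
        outcome = "granted" if model.predict([applicant])[0] == 1 else "declined"
        return (f"Credit was {outcome}. The factor that most influenced this "
                f"decision was your {strongest.replace('_', ' ')}.")

    print(explain(np.array([22, 18, 2])))

For genuinely opaque ‘black box’ models, a comparable rationale would typically require post-hoc explanation techniques, which is precisely where the compliance challenge described above arises.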

The six types of explanation proposed in the ICO Guidance are:

Rationale explanation: Setting out the basis for a decision, in an ‘accessible and non-technical way.’

Responsibility explanation: Details of the organisation’s internal responsibility for the AI/ML deployed in the decision making process, covering all phases from development to deployment, and whom to contact for a human review of the decision.

Data explanation: The data used, and how it was used, to reach a decision.

Fairness explanation: Steps taken in the design and deployment of the AI/ML in order to ensure it supports fair and unbiased decisions, and whether an individual has in fact been treated equitably.

Safety and performance explanation: Steps taken in the design and deployment of the AI/ML to “maximise the accuracy, reliability, security and robustness of its decisions and behaviours”.

Impact explanation: Steps taken in the design and deployment of the AI/ML to consider and monitor its decisions’ impact, and that of the AI/ML more broadly, both on individuals affected by its decisions and on wider society.
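As a practical matter, organisations may find it helpful to record these six explanation types in a structured way for each AI/ML-supported decision process. The minimal template below is a hypothetical sketch of how that documentation might be organised; it is not a format prescribed by the ICO Guidance.

    from dataclasses import dataclass

    @dataclass
    class ExplanationRecord:
        """Hypothetical per-process record, one field per explanation type
        in the ICO Guidance framework."""
        rationale: str           # accessible, non-technical basis for the decision
        responsibility: str      # who is accountable, and whom to contact for review
        data: str                # what data was used, and how
        fairness: str            # bias-mitigation steps and equitable treatment
        safety_performance: str  # accuracy, reliability, security and robustness
        impact: str              # monitored effects on individuals and wider society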

The ICO Guidance proposes that organisations stand to realise benefits from explainable AI/ML, including: legal compliance; increased trust from a better-informed public; improved internal governance; and better AI/ML outcomes through monitoring and assessing performance to improve accuracy and eliminate biases.

Risks are also acknowledged, and organisations will need to consider their approach to explaining AI/ML decisions carefully in order to avoid such pitfalls; a key risk identified is public distrust if explanations are seen to be overwhelming, thereby reinforcing the impression that processes are opaque. Commercial sensitivities may also mean that organisations are reluctant to provide a high level of detail about the algorithmic decision making behind an AI/ML process. The ICO Guidance suggests that, whilst transparency must be considered on a case-by-case basis, it is unlikely that explanations using the guidance framework would risk such disclosures, as they would not require an organisation to reveal confidential information about an algorithm, such as source code or other trade secrets.

The future is explainable

As debate continues on whether technology-specific regulation is desirable, what appears certain is that data protection law will continue to apply to AI/ML decisions affecting individuals. Amongst other examples, a recent position paper led by Denmark and signed by 13 other Member States confirmed that, whatever the regulatory approach adopted toward AI/ML, such technology must continue to comply with the GDPR.² This view was subsequently echoed by Germany, which holds the Presidency of the Council of the European Union.³

Accordingly, organisations would be well advised to consider their approach to ‘explainable AI’ throughout the design and development process for AI/ML, particularly where it may be deployed for decision-making about individuals; not simply with a view to regulatory compliance, but perhaps also to gain a competitive advantage. For organisations deploying AI/ML, this is likely to mean a focus on ‘human in the loop’ AI/ML supported decision-making, and eschewing heavy reliance on ‘black box’ solutions for those decisions which significantly impact individuals.


[1] Jobin, A. et al. (2019), “The global landscape of AI ethics guidelines,” Nature Machine Intelligence, vol.1, 501-507.

[2] ‘Innovative and Trustworthy AI: Two Sides of the Same Coin.’ Position paper published October 8th 2020 on behalf of Denmark, and joined by Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden.

[3] Presidency conclusions – The Charter of Fundamental Rights in the context of Artificial Intelligence and Digital Change, October 21st 2020.

 
