5 December 2022

The UK charts its own course on the regulation of AI

AI technologies are transforming industries and impacting economic landscapes and legal frameworks worldwide. The UK government is committed to making AI a central part of the country’s industrial strategy and growth, and in September 2021 published its National AI Strategy (the UK AI Strategy) based on three pillars: investment, ensuring that AI benefits all sectors and regions, and governance.

In a previous article for Inside IP, we looked at the draft Artificial Intelligence Act of the European Commission (the AI Act), which is making its way through the legislative process and is not expected to be in place before 2024 at the earliest. The AI Act represents an ambitious attempt to set out a cross-sectoral regulatory approach to the use and development of AI systems across the European Union by prescriptive legislation with extra-territorial application.

In July 2022, as part of the UK AI Strategy, the UK government published its AI Regulation Policy Paper, setting out its proposals for a decentralised approach to regulating the use of AI technologies, with roles for government, regulators, technical standards bodies, and industry. The paper positions the government’s proposals as context-specific, coherent, risk-based, proportionate, and adaptable.

The UK AI Strategy is the culmination of recommendations, reports, and shared strategies from across various UK institutions, regulators, and organisations that play a key role in reviewing the AI governance landscape. In June 2017, the House of Lords appointed the Select Committee on Artificial Intelligence to “consider the economic, ethical and social implications of advances in artificial intelligence.” In its report, “AI in the UK: ready, willing and able?”, the Select Committee concluded that blanket AI-specific regulation would be inappropriate, finding that “existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed.”

In contrast with the EU approach, which comprises a single framework with a fixed list of risks associated with the development and use of AI, the UK’s more flexible approach seeks to address AI risk in a more targeted way, via existing regulators, including the Information Commissioner’s Office (ICO), the Financial Conduct Authority, and the Competition and Markets Authority.

The UK approach is directed by a strong belief that a fixed list of AI risks could quickly become outdated, impeding flexibility, whereas a decentralised approach seeks to leverage the expertise of regulators within their respective domains. As stated in the AI Regulation Policy Paper:

“[We] think this is preferable to a single framework with a fixed, central list of risks and mitigations. Such a framework applied across all sectors would limit the ability to respond in a proportionate manner by failing to allow for different levels of risk presented by seemingly similar applications of AI in different contexts. This could lead to unnecessary regulation and stifle innovation. A fixed list of risks also could quickly become outdated and does not offer flexibility. A centralised approach would also not benefit from the expertise of our experienced regulators who are best placed to identify and respond to the emerging risks through the increased use of AI technologies within their domains.”

The UK approach also diverges from that of the AI Act by eschewing reliance on a single definition of AI; this is a logical consequence of the two different approaches. A centralised piece of legislation aimed at harmonising the law of 27 EU Member States calls for a technical definition that can be applied across all sectors, industries, and AI applications. By contrast, the UK stated in its proposal that it is the use of AI, rather than the technology itself, that should be the focus of regulation. Having concluded that “[AI] is a general purpose technology”, the government regards a universal definition as less relevant in a sector-based approach.

Nonetheless, the UK is set to guide regulators by providing more detailed definitions based on the core characteristics and capabilities of the relevant AI. The AI Regulation Policy Paper suggested that the two general characteristics that would bring AI systems within the scope of regulation are the “adaptiveness” and “autonomy” of the technologies:

An “adaptive” system is trained on data and operates according to patterns and connections which are not easily discernible to humans. The clear difficulty arising out of an adaptive system is explaining the logic or intent by which an output has been produced. This has implications wherever “there is an expectation that a decision should be justifiable in easily understood terms”, such as a legal dispute.

An “autonomous” system can operate without the ongoing control of a human operator, highlighting the issue of assigning responsibility for actions taken by AI systems.

Some AI systems exhibit both characteristics: a self-driving car control system, for example, is both autonomous and adaptive.

The decentralised proposition in the UK AI Strategy could lead to disparities between regulators’ approaches to similar issues. Having acknowledged this challenge, the UK government has not ruled out that further legislation may be needed, in particular to address the potential for overlap between the different regulators and the need for coordination between them.

The UK approach also includes the development of a set of cross-sectoral principles, which regulators will be tasked with implementing within their respective remits:

  • Ensure that AI is used safely: regulators will be required to take a context-based approach in assessing any risk AI can pose to safety in their respective sectors and formulate a proportionate approach to managing this risk.
  • Ensure that AI is technically secure and functions as designed: regulators shall ensure that consumers and the public have confidence in the proper functioning of AI systems.
  • Make sure that AI is appropriately transparent and explainable: individuals should be able to understand why an AI-enabled process has come to a particular decision. This could present challenges, as AI technology may not provide reasoning fully understandable to individuals. The AI Regulation Policy Paper suggests that decisions that cannot be explained might be prohibited by the relevant regulator. For example, a tribunal decision would be of such significance for an individual that an explanation of its underlying reasoning would be indispensable.
  • Embed considerations of fairness into AI: in order to ensure proportionate and pro-innovation regulation, each regulator will determine, in the context of its own sector and domain, what amounts to fairness.
  • Define legal persons’ responsibility for AI governance: accountability for the outcomes produced by AI and legal liability must rest with an identifiable legal person.
  • Clarify routes to redress or contestability: the use of AI should not remove an affected individual’s or group’s ability to contest an outcome that may result in a material impact on people’s lives.

The potential for gaps in the regulatory framework, and for inconsistency and overlap between regulators, is to be addressed in a forthcoming White Paper to be published by the Office for Artificial Intelligence (established to implement the UK AI Strategy) by the end of 2022.

A key advantage of the UK approach compared with that of the EU is the agility with which legislation can be progressed. The AI Act is still a considerable time away from being finalised; it will then need to be formally adopted and implemented across the EU Member States, including the establishment of new regulatory bodies. We have already seen indications that existing regulators – the European Data Protection Board and national data protection regulators in the EU Member States – are seeking to secure a remit for oversight of the AI Act. The UK government, by contrast, has the ability to move quickly in advancing its lighter-touch approach to regulating AI.

Intersection with data protection regulation and changes on the horizon

In September 2021, the UK government launched a consultation “Data: a new direction” to inform the government’s approach to updating data protection laws following the UK’s departure from the EU. Responses were received from the ICO and domestic and overseas organisations. The consultation addressed, among other things, the use of AI-powered automated decision-making in light of the need for organisations working on AI tools to have space to experiment without causing harm to the public. The consultation asked for views on whether organisations should be able to use personal data more freely for the purpose of testing and training AI.

In parallel with the AI Regulation Policy Paper, the government introduced the Data Protection and Digital Information Bill in July 2022 (the Data Protection Bill). The Data Protection Bill proposes a series of amendments to Article 22 of the UK GDPR (which is identical to Article 22 of the EU GDPR), re-working the approach to focus on specific safeguards for individuals rather than a general right not to be subject to solely automated decision-making.

Further changes to the approach in Article 22 of the UK GDPR have also been proposed; however, the future of such reforms is uncertain. On 3 October 2022, the Secretary of State for Digital, Culture, Media and Sport, Michelle Donelan, announced that: “[we] will be replacing GDPR with our own business and consumer-friendly, British data protection system,” with the new legislation drawing the “best bits” from other data protection regimes such as those of Israel, Japan, South Korea, Canada, and New Zealand.

We shall be monitoring the progress of regulatory proposals in the AI space and will provide further updates in due course, including via our regular Data Blast on data protection and privacy-related matters.
