UK Government reaffirms principles-based approach to regulating AI

March 19, 2024

By Chris Occleshaw, Recall Consultant

The UK government has published its response to the consultation on its 2023 AI White Paper, which set out its planned “pro-innovation approach” to regulating artificial intelligence (AI). The response largely reiterates the approach outlined in the white paper but provides further detail on how the context-based regulatory framework will unfold.

Key details

The response paper reaffirms the five cross-sectoral principles for existing regulators that were set out in the white paper. Regulators are intended to interpret and apply these principles within their sectors to drive safe, responsible AI innovation. The five principles are:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

While the government noted its decision would remain under review, it plans to move forward with these principles on a non-statutory basis, at least while it establishes its approach. The onus will remain on existing regulators to apply the principles within their sector-specific frameworks, and the UK government will not create a central regulatory body for AI. However, the government will establish a central function that will “monitor and assess risks across the whole economy and support regulator coordination and clarity.”

To help regulators interpret and apply the AI principles, the UK government released voluntary guidance that regulators can follow at their discretion. The response paper also announced a £10 million fund to help “jumpstart regulators’ AI capabilities.”

While the government’s current approach to AI regulation is non-statutory, the response paper notes that future regulation may be necessary for specific AI uses or as the technology advances. It is also worth noting that the UK may have a new government by the end of 2024, which could adopt a different regulatory approach to AI. For example, the Labour Party has indicated it would prefer to implement “an overarching regulatory framework” for AI.

Looking ahead

Some UK regulators have already taken steps to better define how uses of AI in their sectors are regulated, and more are likely to follow in the coming months. While the context-based approach adopted by the UK will allow for innovation, the decentralized nature of AI regulation could create challenges for businesses that answer to multiple regulatory bodies. Businesses will need to closely follow all applicable regulatory developments to ensure they maintain compliance.

For businesses that operate in both the UK and the European Union, the diverging regulatory approaches may bring confusion and complicated compliance risks. It will be challenging but necessary to keep up to date with all relevant regulations to avoid reputational damage or regulatory scrutiny. Engaging with lawyers and third-party brand protection experts can add a layer of protection amidst the ever-changing AI landscape.

Trusted by the world’s leading brands, Sedgwick brand protection has managed more than 7,000 of the most time-critical and sensitive product recalls in 100+ countries and 50+ languages over the past 25 years. To find out more about our product recall and remediation solutions, visit our website here.

Tags: AI, Artificial Intelligence, Brand, Brand protection, Compliance, Europe, international, Preserving brands, recall, regulation, UK