European and U.S. regulators issue joint statement on competition in AI

August 22, 2024


By Mark Buckingham, Recall Advisor

Competition authorities in the European Union, United Kingdom, and United States have released a joint statement regarding competition in generative artificial intelligence (AI) foundation models (FMs) and AI products. The European Commission, UK Competition and Markets Authority (CMA), U.S. Department of Justice (DOJ), and U.S. Federal Trade Commission (FTC) said in the statement that they “will work to ensure effective competition and the fair and honest treatment of consumers and businesses” in the use of AI products.

The statement comes amid varying approaches to regulating the use of AI across jurisdictions. So far, the EU has led the way, becoming the first to introduce a comprehensive legislative framework with the EU AI Act, which came into force on 1 August 2024. Meanwhile, the UK has adopted a non-statutory approach that it bills as “pro-innovation,” under which existing regulators apply five cross-sectoral principles for AI within their existing regulatory frameworks. In addition, the UK has issued an opinion paper establishing seven principles for developing and deploying FMs; this paper was revised in April 2024. In the U.S., the Biden Administration has used executive orders to establish requirements for the safe use of AI, while regulatory agencies work on their own sector-specific guidelines for AI products.

Central AI risks

The joint statement acknowledges three central risks to competition associated with AI products:

  1. Concentrated control of key inputs, which could stifle innovation or put a small number of companies in a position to exploit existing or emerging bottlenecks in AI development. The concern is that these companies would have outsized influence over the future development of AI tools.
  2. Entrenching or extending market power in AI-related markets. This issue may emerge if large incumbent digital firms that already enjoy strong accumulated advantages make even further gains.
  3. Arrangements involving key players that amplify risks by undermining or co-opting competitive threats. While partnerships related to the development of generative AI may not harm competition in every case, in some instances key players could use them to steer market outcomes in their favour at the expense of the public.

Measures to protect the market

In addition to identifying the central risks, the joint statement also lays out three principles for protecting competition in the AI ecosystem. These build on existing common principles in related markets:

  1. Fair dealing, meaning firms with market power should avoid exclusionary tactics so as to encourage innovation, investment, and competition.
  2. Interoperability, which will enhance innovation and competition by allowing greater compatibility across AI products. The competition authorities note that any claims that interoperability requires sacrifices to privacy and security will be closely scrutinised.
  3. Choice, which will benefit businesses and consumers in the AI ecosystem. According to the regulators, this means scrutinising ways that companies employ lock-in mechanisms to prevent users from seeking or choosing other options. Additionally, authorities will examine partnerships between incumbents and newcomers to ensure agreements do not sidestep merger enforcement or give incumbents undue influence.

The competition authorities will also monitor and address any specific risks that may arise from developments and applications of AI beyond generative AI. In addition, the regulators will call out the potential harm AI can cause to consumers and “be vigilant of any consumer protection threats that may derive from the use and application of AI.”

Looking ahead

The development and use of AI is a priority for regulators across industries and jurisdictions. Although the authorities that issued this joint statement have adopted diverging approaches to the regulation of AI, they are united in their commitment to scrutinising potential anti-competitive behaviour and to protecting consumers from AI-related harm. As regulators work to create guidelines for the safe use of AI, businesses should follow new developments closely and regularly audit their own operations for alignment with best practices and new rules.

Trusted by the world’s leading brands, Sedgwick brand protection has managed more than 7,000 of the most time-critical and sensitive product recalls in 100+ countries and 50+ languages, over 30 years. To find out more about our product recall and remediation solutions, visit our website here.

Tags: AI, Artificial Intelligence, Brand, Brand protection, Consumer, Europe, Preserving brands, recall, regulations, research and intelligence, Risk, Technological advances, Technology, UK, United Kingdom, United States