Navigating the latest developments in AI regulation: what businesses need to know

November 23, 2023


Regulators, lawmakers and technology experts alike seem to agree on two things: artificial intelligence (AI) is an important piece of our future and it needs guidelines for its use. Across the globe, there has been a recent flurry of activity ranging from executive orders and guiding principles to voluntary codes of conduct and regulatory proposals.

While the European Union was an early leader in AI regulation with its proposed EU AI Act, other governing bodies have been quick to follow suit with a variety of approaches to regulating a technology that continues to evolve. As we continue to track new developments in AI regulation, here are some recent updates that could impact how businesses across industries use AI in their products.

G7 leaders reach agreement on guiding principles for AI

At a meeting at the end of October, leaders from the Group of Seven (G7) economies reached an agreement on International Guiding Principles for Organizations Developing Advanced AI Systems and a voluntary Code of Conduct for Organizations Developing Advanced AI Systems. Both documents aim to “promote safe, secure, and trustworthy AI worldwide” and provide guidance for organizations developing AI while lawmakers work to develop regulations.

The voluntary Code of Conduct outlines 11 actions that AI developers are encouraged to follow. These include taking appropriate measures to identify, evaluate, and mitigate risks across the AI lifecycle, as well as publicly reporting advanced AI systems’ capabilities, limitations, and areas of appropriate use to contribute to increased accountability.

In announcing the agreement, G7 leaders stressed that the Guiding Principles and Code of Conduct would be living documents that “will be reviewed and updated as necessary” to ensure they remain “responsive to this rapidly evolving technology.” While the actions outlined by the G7 are not mandatory, companies with a vested interest in AI should aim to comply, since the binding regulations that follow will likely build on these frameworks.

U.S. Executive Order outlines sweeping standards for AI safety and security

Also at the end of October, President Joe Biden issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which introduces new requirements for AI security and outlines new safety standards to be developed. While most welcomed the executive order as a necessary first step, some technology industry stakeholders raised concerns about the order's broad scope and its potential to stifle innovation.

Some of the key measures outlined in the order include a requirement that developers of AI systems “share their safety test results and other critical information with the U.S. government” to make sure these systems are safe, secure, and reliable before companies make them public. The order also directs several federal agencies to develop standards, tools, and tests to “help ensure that AI systems are safe, secure, and trustworthy.” The timeline for implementation of the actions in the executive order is relatively short, with most deadlines occurring between 90 and 270 days after the order was issued.

Looking ahead

The two developments in AI governance outlined above join a growing list of efforts to simultaneously manage the risks of AI and promote innovation in the field. For companies developing AI systems or using them in their products, it can be difficult to keep track of the voluntary guidelines and mandatory regulations they either should or must comply with. As lawmakers continue to weave an international, industry-spanning web of AI regulations, companies will increasingly need a solid team of expert partners in compliance, brand protection, and litigation at their side to succeed.