Background

The United States of America (USA) is one of numerous major economies taking forward a program for Artificial Intelligence (AI) Regulation. To ensure the USA plays a leading role in Artificial Intelligence research and development, the National Artificial Intelligence Initiative Act of 2020 was introduced and became law in 2021.

The Act’s overarching aim was, inter alia, to establish a broader initiative within the United States to ensure academia and the public and private sectors could monitor and evaluate the performance of AI-based systems both before and after deployment [1], [2].

In 2022, the White House Office of Science and Technology Policy followed with the Blueprint for an AI Bill of Rights [3]. Informed by a year of public engagement, the framework outlines five core principles and associated practices to guide the creation, management, and iteration of automated systems while protecting the American public’s rights [3].

With OpenAI introducing ChatGPT (Chat Generative Pre-Trained Transformer) in November 2022 and its forecasted economic potential exceeding $2.1 trillion [4], concerns were raised about how a technology advancing at such continuous speed could operate within the parameters of the justice system.

With the increasing adoption of generative AI, President Biden issued an executive order in October 2023 on safe, secure, and trustworthy artificial intelligence, stipulating (among other requirements) that those developing the most powerful AI systems share their safety test data with the US government. For example, any company developing foundation models that pose a risk to national security, national economic security, or national public health must notify the federal government during model training and share the results of red-team safety tests [5].

The role of US Antitrust Laws

AI is expanding rapidly across sectors and organizations of all shapes and sizes, and the interaction between AI as a tool and existing antitrust laws has been and will continue to be tested. While some states continue to work towards localized AI regulation, some argue that the pace of AI advancement requires a rewrite of antitrust laws.

For context, back in 1890, Congress passed the Sherman Act: a charter whose aim was to preserve free and unrestrained competition. Then, in 1914, a further two antitrust laws were passed, namely the Federal Trade Commission (FTC) Act (which created the Federal Trade Commission) and the Clayton Act [6], each of which is still in effect to this day.

Generally speaking, antitrust laws exist to prevent unlawful mergers and business practices, with judgment left to the courts to determine which cases are illegal based on the facts of each case. For over a century, these laws have retained the same core principle: protect competition to benefit consumers through operational efficiency, fair pricing, and high-quality goods and services.

In summary, the Sherman Act makes illegal “every contract, combination, or conspiracy in restraint of trade” along with any “monopolization, attempted monopolization or conspiracy or combination to monopolize” [6].

The Supreme Court, however, ruled early in the Act’s history that only unreasonable restraints are prohibited: not every restraint of trade is illegal. For example, a partnership agreement between two individuals may restrain trade, but not unreasonably, and thus may be lawful under US antitrust law.

Certain acts, however, are considered so harmful to competition that they are almost always illegal. These are known as per se violations and include arrangements between businesses to fix prices, divide markets, or rig bids.

The Sherman Act can be enforced as both a civil and a criminal statute, and both businesses and individuals can be prosecuted under it by the Department of Justice.

Penalties for fixing prices or rigging bids can include fines of up to $100 million for corporations and $1 million for individuals, along with up to ten years’ imprisonment. Under federal law, the maximum fine can be increased to twice the amount the conspirators gained from the illegal activity, or twice the money lost by the victims, if either of those amounts exceeds $100 million [6].
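The alternative-fine rule above is simple arithmetic, and a minimal sketch makes the interaction of the three figures concrete (the function name and structure here are illustrative only, not drawn from any statute’s text):

```python
def max_corporate_fine(gain: int, loss: int, statutory_cap: int = 100_000_000) -> int:
    """Illustrative maximum corporate fine under the rule described above:
    the $100M statutory cap, raised to twice the conspirators' gain or
    twice the victims' loss when either doubled figure exceeds the cap."""
    return max(statutory_cap, 2 * gain, 2 * loss)

# A conspiracy that gained $75M: twice the gain ($150M) exceeds the
# $100M cap, so the maximum available fine rises to $150M.
print(max_corporate_fine(gain=75_000_000, loss=40_000_000))  # 150000000
```

Where neither doubled figure clears the $100 million cap, the cap itself remains the ceiling.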

The Clayton Act addresses specific practices the Sherman Act does not clearly prohibit. Section 7 of the Clayton Act prohibits mergers and acquisitions whose effect “may be substantially to lessen competition, or to tend to create a monopoly.”

A 1976 amendment to the Clayton Act, the Hart-Scott-Rodino Antitrust Improvements Act, requires organizations planning a large merger or acquisition to notify the government of their plans in advance.

It is important to note that the Clayton Act authorizes private parties to sue for treble damages if they have been harmed by conduct that violates either the Sherman or Clayton Act. Additionally, they can obtain a court order prohibiting the anticompetitive practice in the future [6].

USA approaches to AI regulation example: Colorado AI Act

Antitrust laws aside, states are taking differing approaches to regulating AI. The Colorado AI Act (also referred to as Consumer Protections for Artificial Intelligence), for example, was signed into law on the 17th of May 2024 but does not come into effect until the 1st of February 2026 [7].

While there are similarities between it and the EU’s AI Act, the Colorado AI Act specifically focuses on high-risk AI systems. Developers are required to put in place safeguards by sharing information with deployers, such as what data has been used for model training, risk mitigation measures, and reasonably foreseeable limitations of the system.

Additionally, developers must publicly share two key pieces of information on their website or in a public use-case inventory: 1) the types of high-risk systems they have developed, and 2) the steps they are taking to manage risks of algorithmic discrimination. Most importantly, should algorithmic discrimination occur through the intended use of a system, developers must disclose this to both the Colorado Attorney General and the known deployers of the system in question.

Alongside developers, deployers must implement a risk management policy, which may draw on frameworks such as the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework or the International Organization for Standardization’s ISO/IEC 42001.

In applying these standards, the size and complexity of the deployer will be factored into what counts as a reasonable framework. Deployers must complete an impact assessment for any deployed high-risk system, repeat it at least annually, and, should the system undergo an intentional and substantial modification, conduct a further impact assessment within 90 days of the modification taking place.

Similar to developers, deployers will need to publish information on their website and disclose any occurrences of algorithmic discrimination to the Colorado Attorney General. There are some cases, though, in which exemptions can be granted: deployers with an employee headcount under 50 can be exempt from most of the requirements, provided certain conditions are satisfied [7].

Does the USA have a long road ahead behind the EU’s AI Act?

In September 2024, the European Commission’s EU Competitiveness Report highlighted that 30% of unicorn startups founded in Europe between 2008 and 2021 had relocated abroad, many of them to the USA [8].

It will therefore be imperative that, while the technology conglomerates continue to innovate in the AI space, lead by example, and collaborate closely with the US government, safeguards for fundamental rights and product safety are in place without being so restrictive that they prevent smaller players from developing and adopting frontier AI.

When it comes to digital competition, the European Union’s Digital Markets Act and Digital Services Act ensure fair online market practices are enforced. In contrast, in the US there are no digital-specific competition laws. However, two pending pieces of legislation, namely the American Innovation and Choice Online Act (“AICOA”) and the Open App Markets Act (“OAMA”), could, if passed, result in drastic changes to American regulation of digital competition with the aim of targeting companies such as Google, Apple, Meta, Amazon, Microsoft and possibly TikTok [9].

In the long run, a multilateral approach to managing, understanding, and implementing AI regulation will be required to assess whether laws around AI technologies can be enforced fairly but rigorously.

The recent executive order, the introduction of state-level AI-specific laws, and the voluntary commitment from influential AI companies (i.e., OpenAI, Meta, and Google) to increase testing of AI systems alongside sharing information on managing AI risks are important steps in understanding this fast-paced technology.

These steps will not, however, resolve the challenge posed by the lack of a single definition of AI. Instead, there are hard yards ahead: any material shift in antitrust regulation more closely aligned to AI innovation may only be feasible if regulating the outcomes of AI becomes the focus, rather than attempting to regulate AI holistically.

Bibliography

[1] Lynne Parker, Director of the National AI Initiative Office, Deputy United States Chief Technology Officer, ‘National Artificial Intelligence Initiative’ (Artificial Intelligence and Emerging Technology Inaugural Stakeholder Meeting, June 29, 2022) <www.uspto.gov/sites/default/files/documents/National-Artificial-Intelligence-Initiative-Overview.pdf> accessed 1 October 2024.

[2] National Artificial Intelligence Initiative Act of 2020, HR 6216, 116th Cong (2020) <https://www.congress.gov/bill/116th-congress/house-bill/6216> accessed 1 October 2024.

[3] ‘Blueprint for an AI Bill of Rights’ (Office of Science and Technology Policy, The White House) <www.whitehouse.gov/ostp/ai-bill-of-rights/> accessed 1 October 2024.

[4] ‘Economic Potential of Generative AI: The Next Productivity Frontier’ (McKinsey Digital, 14 June 2023) <www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#/> accessed 3 October 2024.

[5] ‘President Biden Issues Executive Order on Safe, Secure and Trustworthy Artificial Intelligence’ (The White House, Briefing Room Statements and Releases) <www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/> accessed 1 October 2024.

[6] ‘The Antitrust Laws’ (Federal Trade Commission - Competition Guidance) <www.ftc.gov/advice-guidance/competition-guidance/guide-antitrust-laws/antitrust-laws> accessed 30 September 2024.

[7] ‘Colorado Governor Signs Comprehensive AI Bill’ (Mayer Brown, Insights) <www.mayerbrown.com/en/insights/publications/2024/06/colorado-governor-signs-comprehensive-ai-bill> accessed 1 October 2024.

[8] ‘The Future of European Competitiveness: Part B’ (European Commission) <https://commission.europa.eu/document/download/ec1409c1-d4b4-4882-8bdd-3519f86bbb92_en> accessed 1 October 2024.

[9] B Hoffman, ‘Digital Markets Regulation Handbook’ (Cleary Gottlieb, January 2024) <https://content.clearygottlieb.com/antitrust/digital-markets-regulation-handbook/united-states/index.html> accessed 3 October 2024.

