What do we mean by an AI race?
The notion of an “Artificial Intelligence race” (commonly shortened to the “AI race”) has been the subject of continuous discussion and debate across the internet.
However, the question that really needs asking is: what is an “AI race”? And if it is indeed a race, what lies at the finish line? Is it the most advanced algorithm, a hugely beneficial customer feature, or the most cost-effective ecosystem? And where will competition law allow for a balance of all of these in such a fast-moving space?
Before taking a deep dive, it is important to look at what companies have brought to market to put the “race” into context. Back in June, Apple announced Apple Intelligence at its Worldwide Developers Conference (WWDC).
Example features announced include the ability to adjust the tone and style of written responses across applications, the prioritization of urgent emails and notifications, and the ability to write mathematical expressions in Notes using the Apple Pencil, with the system providing answers in the user's own handwriting [1].
This functionality is underpinned by a more substantial cloud architecture called Private Cloud Compute (PCC), which, in summary, processes the user's request solely for that purpose, without any visibility to Apple, and deletes the data once the request has been fulfilled.
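To make that stateless design concrete, below is a minimal Python sketch of such a request lifecycle. It is emphatically not Apple's actual PCC interface, which is not public in this form; the function and parameter names are illustrative assumptions only.

```python
# A minimal sketch of the stateless request lifecycle PCC is described
# as providing: the user's data exists only for the duration of one
# request, is never logged or persisted, and is discarded once the
# response is returned. NOT Apple's actual API; all names hypothetical.

def handle_private_request(payload: bytes, run_model) -> bytes:
    request_data = bytearray(payload)          # held in memory only
    try:
        return run_model(bytes(request_data))  # used solely for this request
    finally:
        # Zero out and release the buffer: nothing about the request
        # survives once the response has been returned.
        for i in range(len(request_data)):
            request_data[i] = 0
        del request_data

# Usage with a stub standing in for on-server inference:
if __name__ == "__main__":
    echo_model = lambda data: data.upper()
    print(handle_private_request(b"summarize my inbox", echo_model))
```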
In May, Microsoft announced AI-focused hardware named Copilot+ PCs. The silicon chips powering them have been advertised as being capable of 40 trillion operations per second, making them twenty times more powerful and up to one hundred times as efficient for running AI workloads [2].
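To put 40 trillion operations per second into perspective, a rough back-of-envelope calculation follows. The model size and operations-per-token figure are illustrative assumptions of ours, not numbers from Microsoft's announcement, and real workloads never reach the theoretical ceiling.

```python
# Back-of-envelope: what 40 TOPS could mean for on-device inference.
# Assumptions (illustrative only): a 7-billion-parameter model needing
# roughly 2 operations per parameter per generated token, and perfect
# hardware utilization, which real workloads never achieve.

npu_ops_per_second = 40e12          # 40 trillion operations per second
model_parameters = 7e9              # hypothetical 7B-parameter model
ops_per_token = 2 * model_parameters

seconds_per_token = ops_per_token / npu_ops_per_second
print(f"~{seconds_per_token * 1000:.2f} ms per token "
      f"(~{1 / seconds_per_token:.0f} tokens/second, theoretical ceiling)")
```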
The backdrop to both of these announcements is OpenAI: the company that introduced ChatGPT (Chat Generative Pre-trained Transformer) and Sora to the world stage in November 2022 and February 2024, respectively [3], [4].
The AI Race and Competition Law
Latest developments aside, technology companies have greatly expanded their hardware and software offerings over the years, and with artificial intelligence (AI) functionality and on-device machine learning (ODML) becoming the norm, regulators are closely monitoring these offerings to ensure fair market access and pricing.
Competition in this, or any, sector is about more than ensuring a level playing field for businesses: it ensures consumers get a fair deal and access to a broad range of products and services, contributing to economic growth. Anti-competitive practices can result in higher prices and the dilution of market opportunities for other organizations. Within the United Kingdom, the role of the Competition and Markets Authority (CMA) is to promote competition within markets and tackle anti-competitive behavior.
Areas it oversees include mergers (with the ability to block them should they risk substantially reducing competition), ensuring both individuals and businesses are informed of their rights and obligations under competition and consumer law, and protecting people from unfair trading practices that may arise from a wider market issue. It is in the regulation of technology that the most substantial of these approaches have been, and are being, taken.
The EU approach
On 21 May 2024, the Council of the EU gave final approval to the AI Act, the world’s first standalone law governing the use of AI [5]. The new law takes a risk-based approach, with differing requirements according to the level of risk (a simple illustrative sketch follows the list below), namely:
• Unacceptable risk: some AI practices are considered a clear threat to fundamental rights, for example AI systems that deploy manipulative techniques with the objective of materially distorting a person's behavior.
• High risk: these AI systems will need to follow strict rules, including requirements for high-quality data sets and human oversight (human-in-the-loop systems, for example).
• Limited risk: an AI system classed as limited risk should be designed so that individuals are informed when they are interacting with an AI system. Should an AI system that generates or manipulates deepfakes be developed, the organization or individual must declare that the content has been artificially generated [6].
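To illustrate how an organization might begin to operationalize this tiering, the sketch below maps each tier to its headline obligations. It is a hypothetical triage helper, not anything defined by the Act itself; the obligations are paraphrases of the summary above, not legal text.

```python
# Hypothetical triage sketch mapping AI Act risk tiers (as summarized
# above) to headline obligations. The tier names follow the Act; the
# wording of the obligations is a paraphrase, not legal text.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: the practice may not be deployed.",
    RiskTier.HIGH: "Strict rules: high-quality data sets, human oversight.",
    RiskTier.LIMITED: "Transparency: inform users they are interacting "
                      "with AI; declare artificially generated content.",
}

def triage(tier: RiskTier) -> str:
    """Return the headline obligation for a system's risk tier."""
    return OBLIGATIONS[tier]

print(triage(RiskTier.HIGH))
```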
The UK approach
At the other end of the spectrum is the United Kingdom’s principles-based approach, formed off the back of a 2023 white paper, A Pro-Innovation Approach to AI Regulation [7].
Given the limited understanding of AI, its risks, and the regulatory gaps (and, arguably, limited confidence in the underlying legal position), this path was identified as the most appropriate starting point, while recognizing the need for future legislative action. The framework for AI regulation was laid down on the basis of five principles:
• Safety, security, and robustness
• Appropriate transparency and explainability
• Fairness
• Accountability and governance
• Contestability and redress
If regulators take an outcomes-based approach to these principles, grounded in adaptivity and autonomy and applied sector by sector, this leaves room for differing interpretations and cross-sector regulatory uncertainty.
The former government did not outline plans for an AI regulator to oversee the implementation of the framework; instead, it fell to individual regulators (e.g., the Information Commissioner's Office and Ofcom) to implement the five principles under existing laws and regulations, with regulators submitting their plans to the government in April this year [8].
While the incumbent government has outlined plans in its manifesto to boost funding for AI technologies, an exact roadmap from the Department for Science, Innovation and Technology is yet to be published.
The future trajectory
With such differing approaches, questions have emerged, and will continue to emerge, about how each can be applied and enforced, and how effective each will prove in shaping future policymaking.
It is important to recognize that both approaches will be subject to varying sectoral scrutiny, while at the same time requiring sectoral compliance, as use cases of AI expand. As those use cases expand, organizations operating in sectors where regulators have a limited understanding of AI systems could identify loopholes and engage in anti-competitive practices.
From a European Union perspective, competition thriving within the internal market could distort the effectiveness of the AI Act, creating a need to strengthen competition frameworks and standards to ensure they remain aligned with the EU’s constitutional values.
In the United Kingdom, although the incumbent government has yet to outline detailed plans, it remains to be seen whether it will continue with the principles-based approach or push for legislation along similar lines to the EU’s.
Conclusion
The convergence of understanding AI, principles, sectoral use cases, and the wider law is not solely about an AI race. Instead, it should be viewed as a targeted effort to increase understanding, humanize use cases, and examine how compatible law and technology are with one another, so that society can not just see, but feel, the benefit of AI technologies.
Bibliography
[1] Apple Intelligence Announcement: Apple Newsroom (United Kingdom). (n.d.). iPadOS 18 introduces powerful intelligence features and apps for Apple Pencil. [online] Available at: https://www.apple.com/uk/newsroom/2024/06/ipados-18-introduces-powerful-intelligence-features-and-apps-for-apple-pencil/ [Accessed 25 Jun. 2024].
[2] Microsoft Copilot+ PCs announcement: Mehdi, Y. (2024). Introducing Copilot+ PCs. [online] The Official Microsoft Blog. Available at: https://blogs.microsoft.com/blog/2024/05/20/introducing-copilot-pcs/.
[3] OpenAI ChatGPT: WhatIs.com. (n.d.). What Is ChatGPT? Everything You Need to Know. [online] Available at: https://www.techtarget.com/whatis/definition/ChatGPT#:~:text=Who%20created%20ChatGPT%3F.
[4] OpenAI Sora: Roth, E. (2024). OpenAI introduces Sora, its text-to-video AI model. [online] The Verge. Available at: https://www.theverge.com/2024/2/15/24074151/openai-sora-text-to-video-ai.
[5] Passing of AI Act: Clover, W.R.L., Francesca Blythe, Arthur (2024). One Step Closer: AI Act Approved by Council of the EU. [online] Data Matters Privacy Blog. Available at: https://datamatters.sidley.com/2024/06/06/one-step-closer-ai-act-approved-by-council-of-the-eu/#:~:text=On%2021%20May%202024%2C%20the [Accessed 24 Jun. 2024].
[6] AI Act Risk Levels: www.wilmerhale.com. (2024). The European Parliament Adopts the AI Act. [online] Available at: https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240314-the-european-parliament-adopts-the-ai-act.
[7] UK Government Pro-Innovation Approach: Gov.uk (2023). AI regulation: a pro-innovation approach. [online] GOV.UK. Available at: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach.
[8] UK Regulator Deadline: GOV.UK. (n.d.). Regulators’ strategic approaches to AI. [online] Available at: https://www.gov.uk/government/publications/regulators-strategic-approaches-to-ai/regulators-strategic-approaches-to-ai [Accessed 26 Jun. 2024].