Opinion

Zooming in on AI - #5: AI under financial regulations in the U.S., EU and U.K. – a comparative assessment of the current state of play: part 1

Rapid and accelerating developments in artificial intelligence have prompted governments around the world to consider how AI should be regulated and used responsibly by businesses, without stifling innovation.

This is particularly the case in the financial sector, where AI has the potential to bring operational efficiencies and even improved investment performance, but also brings with it risks due to the inherent unknowns that come with new technology. AI (as now defined) has in reality been widely adopted in the financial sector for many years, including for text transcription, chatbots and helpdesks, and data analytics. However, there are many potentially novel applications in which AI has the ability to replace roles traditionally performed by humans.

Governments and regulators are concerned with mitigating risks associated with AI—for example, ensuring that the use of AI by businesses is safe and transparent, with proper systems and controls. However, approaches across countries differ markedly. While the EU has the most comprehensive AI-specific legislative measure in the AI Act, with detailed regulatory requirements in particular for high-risk AI systems, the U.S. and U.K. have thus far adopted more of a common law approach of addressing risks as they arise or become apparent, predominantly using tools under existing technology-neutral legislation to issue policy pronouncements, supervise firms using AI and manage any issues.

This is the first in a series of three publications, in which we will compare the current approaches for regulating AI in the financial sector in the United States, the European Union and the United Kingdom. In this note, we consider at a high level the differing approaches to and principles of regulation. The second part will look at scope, extraterritoriality, data and third-party service providers. The final part will consider differing approaches to enforcement and remedies as well as liability.

Firms should carefully monitor developments across these and other relevant countries and consider for their business the recommendations in the Action Plan at the conclusion of this note.

Current approaches to regulating the use of AI

The following summary sets out at a high level the current approaches to regulating the use of AI in the U.S., the EU and the U.K., covering all of the topics that will be addressed across this series of notes.

Specific AI legislation, regulation or policy
  U.S.
    • No overarching AI-specific legislation at the federal level.
    • Significant legislative activity at the federal and individual state levels (e.g., stand-alone AI laws and comprehensive privacy laws which apply to automated processing via AI).
    • White House Executive Order; rulemaking and guidance germane to AI from various regulatory agencies.
  EU
    • EU AI Act.
    • European Supervisory Authorities statements and reports.
    • European Commission targeted consultation on the use of AI in financial services.
  U.K.
    • No comprehensive AI-specific legislation.
    • U.K. AI White Paper (not binding).*
    • Implementing the U.K.’s AI Regulatory Principles: Initial Guidance for Regulators.*
    • U.K. financial services regulators’ strategic approach to regulating AI systems.*
Approach
  U.S.
    • The U.S. has a highly fragmented legislative and regulatory landscape, involving multiple governmental and regulatory authorities at the federal and state levels.
  EU
    • Requirements for high-risk systems are fully prescribed in law by the EU AI Act, with lighter requirements applying to limited-risk systems.
  U.K.
    • The U.K. approach is based on common law principles of only imposing legal and regulatory obligations where necessary to address identifiable risks.
    • Responsibility rests with sectoral regulators to use existing powers to supervise appropriately.
Key Principles
  U.S.
    • Safe, secure and effective systems.
    • Explainability and transparency.
    • Bias and algorithmic discrimination protections.
    • Data protection and data privacy.
    • Accountability and governance.
    • Human alternatives, consideration and fallback for automated decisions in fundamental services.
  EU
    • Technical robustness and safety.
    • Transparency.
    • Diversity, non-discrimination and fairness.
    • Privacy and data governance.
    • Societal and environmental well-being.
    • Accountability.
    • Human agency and oversight.
  U.K.
    • Safety, security, robustness.*
    • Appropriate transparency and explainability.*
    • Fairness, including data protection.*
    • Accountability and governance.*
    • Contestability and redress.*
Scope
  U.S.
    • The scope of implementation of existing laws and regulations applicable to AI will match the scope of those laws and regulations.
  EU
    • The EU AI Act defines four main players in the AI sector – deployers, providers, importers and distributors.
    • It also categorises AI systems according to risk. Differing standards and requirements apply to each identified category. However, most of the obligations apply to high-risk systems and the use of those systems.
    • There are some derogations for providers and deployers of high-risk AI systems that are financial institutions subject to similar requirements under EU financial services law.
  U.K.
    • The scope matches the regulatory perimeter of sectoral regulators, such as the U.K. financial regulators, who supervise the use of AI by all U.K.-regulated financial firms, and who will also supervise certain third-party service providers to financial firms.
Data governance / processing
  U.S.
    • Biden's Executive Order 14110 encourages regulatory agencies to use their authorities to protect consumer privacy and to consider introducing rules or clarifications and guidance as to how existing rules apply to AI systems.
    • State laws on data protection and privacy may also apply.
  EU
    • The AI Act provides that EU laws on data protection and privacy, such as the General Data Protection Regulation (GDPR), apply to personal data processing using AI.
    • The AI Act does not affect the rights and obligations contained in GDPR.
  U.K.
    • The U.K.’s General Data Protection Regulation (U.K. GDPR) and the Data Protection Act 2018 apply.
Extraterritoriality
  U.S.
    • The U.S. financial regulatory scheme has various laws and regulations that have extraterritorial effect, or which apply when non-U.S. persons deal with U.S. persons.
    • The U.S. has already imposed restrictions on AI that will have an extraterritorial effect, such as limitations on the exports of emerging technologies like AI.
  EU
    • The EU AI Act will apply to providers regardless of whether the provider is physically present or established within the EU or in a third country.
    • Third-country providers must appoint an EU representative.
    • The AI Act will also apply to providers and deployers of AI systems that are located or established in a third country, where the output produced by the system is used in the EU.
    • EU GDPR has an extraterritorial reach that could impact firms using or deploying AI systems.
  U.K.
    • In general, the U.K.'s exemptions from the licensing (e.g., the U.K.’s overseas persons exclusion) and financial promotions requirements will be available to third-country financial institutions, including their use of AI systems, when dealing with U.K. wholesale (large corporate) users.
    • Retail business with U.K. customers is generally regulated, including when the supplier is overseas.
    • U.K. GDPR has an extraterritorial reach that could impact firms using or deploying AI systems.
Third-party providers
  U.S.
    • Biden's Executive Order 14110 suggests that financial institutions should expand their typical third-party due diligence and monitoring to account for AI-specific factors.
    • Existing guidance and proposed new rules for U.S. financial institutions apply to their management of risks arising from third-party arrangements.
  EU
    • EU financial institutions remain responsible for any functions that are outsourced and must manage the risks arising from outsourcing critical functions.
    • The EU Digital Operational Resilience Act (DORA) will strengthen that framework from 2025, with additional requirements for IT providers to financial services entities and direct regulation of critical third-party providers.
    • EU GDPR imposes obligations on both data controllers and data processors, including where the data processing is undertaken by a third party.
  U.K.
    • U.K. financial institutions remain responsible for any functions that are outsourced and must manage the risks arising from outsourcing critical functions.
    • The U.K. recently introduced direct regulation of critical third-party service providers to financial institutions.
    • U.K. GDPR imposes obligations on both data controllers and data processors, including where the data processing is undertaken by a third party.
Fines / enforcement
  U.S.
    • No U.S.-specific AI regulatory enforcement regime. However, U.S. agencies have used their existing powers to enforce laws and regulations concerning AI.
  EU
    • Enforcement of the EU AI Act will be at national member state level.
    • The AI Act sets maximum levels of fines.
    • EU data protection authorities have already taken enforcement action against companies infringing the data protection laws while using AI.
  U.K.
    • No specific AI regulatory enforcement regime in the U.K.
    • Various regulators have enforcement powers, including the financial services regulators for financial regulations, the Information Commissioner’s Office (ICO) for data protection matters and the Competition and Markets Authority for antitrust matters.
Remedies
  U.S.
    • No U.S. AI-specific legislation.
    • Companies, including regulated financial institutions, are liable to consumers for any breach of applicable federal or state laws.
  EU
    • Individuals and legal persons may lodge infringement complaints with the relevant authority under the AI Act, and the same applies under EU GDPR.
  U.K.
    • No U.K. AI-specific legislation.
    • Regulated financial institutions are liable to retail consumers for any breach of the regulatory regime. Firms are also required to have complaints handling procedures.
    • The Financial Ombudsman Service hears retail complaints which are not resolved through such processes.
    • Individuals and legal persons may lodge infringement complaints with the ICO under U.K. GDPR.
Liability
  U.S.
    • No U.S. AI-specific liability legislation at the federal level. Potential liability under various existing federal or state-level statutes.
  EU
    • Specific legislation in the draft EU AI Liability Directive.
    • The draft Directive on Liability for Defective Products will replace the existing Product Liability Directive, and its scope will be extended to AI.
    • Individuals have a right, for material or non-material damages arising from an infringement of EU GDPR, to compensation from the data controller and data processor.
    • Damages cover pecuniary and non-pecuniary losses.
  U.K.
    • No U.K. AI-specific liability legislation.
    • Liability may arise under various U.K. statutes as well as under the common law, e.g., negligence claims.
    • A data controller and data processor may be liable to compensate an individual for losses suffered as a result of material damage or non-material damage (e.g., distress) arising from an infringement of the requirements in U.K. GDPR.

*Issued by or under the U.K.’s previous government. The principles noted above derive from the AI White Paper, also issued under the U.K.’s previous government.

General Approach & Principles

There is a rapidly changing ecosystem of laws and regulations applicable to the development and use of AI. In some countries there is broad AI-specific legislation, such as the EU’s new AI Act, which applies alongside a wide range of existing laws, such as those on intellectual property, data protection and privacy, financial services, antitrust, cybersecurity and consumer protection. However, in most countries there is no AI-specific legislation and AI-related matters are governed only by these existing laws. Countries around the world are currently considering whether any of these existing laws require changes to address the novel questions and challenges raised by AI. Many jurisdictions have developed general principles for regulating AI, and these enshrine similar rights such as transparency, fairness and human oversight.

A number of jurisdictions have also signed the AI Convention, namely Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova, San Marino, the United Kingdom, the United States of America and the European Union (signed on its behalf by the European Commission). The AI Convention, which is the first legally binding international agreement on AI, will enter into force once there are five ratifications. It sets fundamental principles for activities within the lifecycle of AI systems, prescribes remedies, procedural rights and safeguards, and requires risk and impact management. Many of the principles align with those in the EU’s AI Act, such as transparency, oversight, accountability, data privacy, reliability and safe innovation. The AI Convention applies to both public authorities and private actors. In its application to the private sector, parties to the AI Convention may opt for it to apply directly or implement their own measures.

U.S. 

In the United States, there are currently no comprehensive AI-specific laws at the federal level, though more limited laws have been passed in this space, including laws to coordinate the U.S. government’s use of AI and state AI laws. As a result, and consistent with the financial regulatory approach in the U.S., there are efforts at the federal and state levels both to apply and enforce existing laws and regulations to AI and to develop new rules where there are gaps in the existing regulatory landscape. U.S. agencies have already begun enforcement efforts, including in relation to so-called “AI washing” and AI disgorgement for improperly collected data, and have indicated an increased enforcement focus on AI and other emerging technologies. We discuss enforcement and fines in the third part of this series.

The federal and state governments are focused on mitigating risks arising from both the public and private sector’s use of AI, including risks related to privacy, fair use and appropriate disclosures to the public. In addition, reflecting geopolitical pressures, a core emphasis is national security and ensuring that AI technology is not weaponized against the U.S. in either a military or commercial sense—both as regards outbound investment and exports, through restrictions on sensitive technologies including AI, and as regards inbound investment, which is reviewed by the Committee on Foreign Investment in the U.S. We discuss the latest developments in our August note, “Sanctions and export controls expand further.”

A plethora of bills with bipartisan support have been introduced in various Congressional committees, and the House and Senate have set up bipartisan task forces and working groups and held congressional hearings to better understand AI policy priorities and further coordinate legislative efforts.

Most recently, on May 15, 2024, a bipartisan Senate working group issued a report entitled Roadmap for Artificial Intelligence Policy (the “Roadmap”), which addressed eight key policy areas: (1) supporting U.S. innovation in AI; (2) AI and the workforce; (3) high impact uses of AI; (4) elections and democracy; (5) privacy and liability; (6) transparency and explainability; (7) intellectual property and copyright; and (8) safeguarding against AI risks.

Notably for the financial services sector, the Roadmap calls for the creation of a comprehensive federal data privacy framework related to AI that can be applied across multiple sectors. The Roadmap specifies that this data privacy framework should include provisions addressing data minimization, data security, consumer data rights, consent and disclosure, and data brokers. The Roadmap encourages relevant Senate committees to develop legislation that ensures financial service providers are using accurate and representative data in their AI models. The Roadmap also supports a regulatory gap analysis in the financial sector—which was also proposed by the bipartisan Artificial Intelligence Advancement Act introduced in the Senate in October 2023. 

In the absence of comprehensive Congressional action on AI, the Biden Administration has sought to take the lead by issuing a broad-ranging Executive Order on AI in October 2023—Executive Order 14110, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Executive Order 14110 directs over 50 federal entities to engage in more than 100 specific actions to implement the guidance set forth across eight overarching policy areas: (1) safety and security; (2) innovation and competition; (3) worker support; (4) AI bias and civil rights; (5) consumer protection; (6) privacy; (7) federal government’s usage of AI; and (8) international leadership. Executive Order 14110 highlights areas of focus for the enforcement of existing regulations and directs agencies to conduct studies, publish reports and develop guidance around AI. See A&O Shearman on Tech, “Biden Administration Issues Broad Executive Order to Regulate and Advance Artificial Intelligence.”

Under the direction of Executive Order 14110, the U.S. Department of the Treasury issued a public report on best practices for financial institutions to manage AI-specific cybersecurity risks (the “Treasury Report”). The Treasury Report is a digest of AI use cases, threat and risk trends, governance and cybersecurity best practice recommendations, and challenges and opportunities for financial institutions, incorporating 42 in-depth interviews with various industry stakeholders. The Treasury Report outlines the current regulatory landscape applicable to the use of AI in cybersecurity and fraud management by financial services firms. These regulatory expectations, in turn, closely track best practices shared by participating financial institutions for mitigating AI-related cyber and fraud risks. These best practices include incorporating AI risk management within existing enterprise risk management programs; mapping data supply chains; proper due diligence of vendors; maintaining high levels of cybersecurity, especially around data; and having the right risk tolerance for both the specific use case and the overall risk appetite of the firm. 

Other U.S. federal agencies have also begun to interpret and provide guidance on how existing laws and regulations apply to AI and to consider new rules for AI within their respective jurisdictions. Such agency efforts to address AI may be forestalled in light of the U.S. Supreme Court’s June 2024 decision in Loper Bright Enterprises v. Raimondo to overturn a long-standing doctrine that instructed courts to defer to reasonable interpretations made by administrative agencies. It is likely that any agency rulemaking on AI will be closely scrutinized both by the public and by courts. These agency efforts include:

  • Securities and Exchange Commission (SEC) 

Gary Gensler, the Chair of the SEC, has made public statements concerning the use and potential risks of AI technologies in the securities industry, specifically identifying four key areas of concern: (i) the potential for conflicts of interest; (ii) the potential for fraud and deception; (iii) the impact on privacy and intellectual property issues; and (iv) the impact on financial stability. This speech was soon followed by the SEC’s proposed rulemaking on “predictive data analytics,” which would, among other things, require broker-dealers and investment advisers to eliminate or neutralize the effect of certain conflicts of interest associated with their use of AI and other technologies. The proposal seems unlikely to be finalized in the near future, as the SEC has announced that it is likely to re-propose the rule.

The SEC has also proposed rules addressing outsourcing of certain covered functions by investment advisers and cybersecurity risk management rules for investment advisers and broker-dealers. For example, in May 2024, the SEC finalized amendments to Regulation S-P, which governs how certain financial institutions treat consumers’ non-public personal information. The amendments were intended to help protect investors’ privacy from the “expanded use of technology and corresponding risks.” 

In the absence of final AI-specific rules, the SEC’s efforts indicate that it is considering using existing regulatory provisions to address risks the SEC perceives with respect to AI. Investment advisers and broker-dealers are required to implement policies and procedures designed to prevent violations of the federal securities laws. Furthermore, under SEC rules such as Regulation S-P and Regulation S-ID, broker-dealers, investment advisers and investment companies must take certain steps to safeguard customer information and appropriately respond to red flags related to possible identity theft.

  • Commodity Futures Trading Commission (CFTC) 

The CFTC issued a request for public comment on a wide range of AI-related questions in January 2024. In May 2024, the CFTC’s Technical Advisory Committee recommended that the CFTC develop an AI Risk Management Framework governing the use of AI in financial markets. In developing the framework, the committee said the CFTC should hold public roundtables, conduct a “gap analysis” of existing regulations, and generally aim to align with other financial regulators and the National Institute of Standards and Technology. The committee highlighted use cases and related risks for AI concerning trading and investment; customer communications, advice and service; risk management; regulatory compliance; and back office and operations.

In accompanying statements, Commissioners Kristin Johnson and Caroline Pham emphasized identifying existing CFTC regulations that may address AI-related risks, for example by looking at the existing approach to risks and controls in algorithmic trading. The CFTC has not proposed any AI-specific rulemaking. 

In June 2023, the CFTC formed the new Cybersecurity and Emerging Technologies Task Force within the CFTC Division of Enforcement, which will address “cybersecurity issues and other concerns related to emerging technologies (including artificial intelligence).”

  • Banking Regulators 

On June 6, 2023, the Federal Reserve, Federal Deposit Insurance Corporation and Office of the Comptroller of the Currency released final Interagency Guidance on banking organizations’ management of risks associated with third-party relationships which, while not specific to AI, is highly relevant, and is discussed further in part two of this series. The federal banking regulators have not otherwise released any guidance or rulemaking specific to AI. However, general principles of safety and soundness apply to any use of AI.

  • Consumer Financial Protection Bureau (CFPB) 

The CFPB has produced guidance, reports and proposed rules related to the use of AI in certain contexts, mostly relating to consumer credit. For example, it has issued guidance noting that creditors that use AI or complex algorithms in aspects of their credit decisioning must still provide a notice to consumers that discloses the specific reasons for taking adverse action, and that creditors must be able to explain the specific reasons for their credit decisions, including when using AI. The CFPB has also published a report highlighting the potential issues and consumer harm arising from the use of AI chatbots.

  • Federal Trade Commission (FTC) 

Where companies do not collect personal data in accordance with the law, and they use illegally-collected personal data to train AI, the FTC has in some cases required not just the deletion of the ill-gotten data but also the destruction of the AI that was trained using this data. This penalty has been imposed in six cases to date. The FTC has not issued guidance regarding when it may impose this disgorgement remedy.

Additionally, numerous individual states have passed or are considering stand-alone AI laws as well as comprehensive privacy laws which apply to automated processing via AI. According to the National Conference of State Legislatures, at least 45 states and Washington D.C. introduced AI bills this year, and over 30 states have adopted resolutions or enacted legislation pertaining to AI. State stand-alone AI laws (such as those in Colorado and Utah) include regulation of generative AI decisioning (i.e., decision-making without meaningful human oversight) in critical areas such as the provision of health care services, the provision of insurance, education admissions, employment decisions, and the provision of loans and other financial services. Notably, California is considering an AI regulation bill, which—if signed into law—would require powerful AI models to undergo safety testing prior to being released to the public and would authorize the state’s attorney general to hold developers liable for serious harms caused by their AI models. We discuss California’s draft Safe and Secure Innovation for Frontier Artificial Intelligence Models Act in “Zooming in on AI – #3: California SB 1047 – The potential new frontier of more stringent AI regulation?”.

Furthermore, many states now have some type of data protection law or privacy law. Comprehensive state privacy laws regulate automated processing and require notice and, in certain cases, consent. If sensitive information is processed by AI or if sensitive information is used to train AI, some states require data privacy impact assessments prior to commencing use of the AI tool. 

Given the current ad hoc approach of addressing AI risks as they arise, the evolving landscape may make compliance for firms with U.S. operations or a U.S. nexus particularly challenging, and firms should closely monitor developments in this area and take into consideration the recommendations in the Action Plan laid out below. 

EU 

The EU has been the first to develop AI-specific legislation, with the AI Act setting legal requirements for AI systems and focusing in particular on high-risk AI systems. The AI Act is the most comprehensive attempt at regulating the technology undertaken by any legislature globally. The AI Act defines four main players in the AI sector—deployers, providers, importers and distributors. A single entity in this sector might fall within several of these categories. The AI Act also categorises AI systems according to the level of risk involved in their use. How practical this approach is remains to be seen. We discuss the different obligations applying to providers and deployers in “Zooming in on AI – #4: What is the interplay between “Deployers” and “Providers” in the EU AI Act?”

The EU AI Act entered into force on 1 August 2024 and will for the most part apply directly across the EU from 2 August 2026. Certain provisions apply on different dates: for example, the prohibition on certain “unacceptable” AI systems applies from 2 February 2025, general-purpose AI (GPAI) models must comply from 2 August 2025, and certain provisions on high-risk systems will not apply until 2 August 2027. We set out more details on when various aspects of the AI Act will apply in “Zooming in on AI: When will the AI Act apply?”

In the meantime, the European Commission has launched the AI Pact, which encourages industry to voluntarily start implementing the requirements of the AI Act before they are legally applicable. The Commission has conducted a targeted consultation on the use of AI in the financial services sector.

The approach of the AI Act to mitigating AI risks is discussed in, “Seizing the AI opportunity in Europe” and “EU AI Act: Key changes in the recently leaked text.” 

U.K. 

The U.K. has not yet adopted any AI-specific legislation. However, that may change under the new Labour government, whose manifesto committed to introducing binding requirements on developers of the most powerful AI models (equivalent to what the EU AI Act defines as highly capable GPAI). This was reiterated in the post-election King’s Speech, which sets out the legislative agenda for the next 12 months. In the meantime, the U.K. continues to rely on existing laws, which are generally technology-neutral, supplemented by regulatory pronouncements or guidance in some sectors. Matters are largely left to sector-based regulators, who must interpret and apply the government’s AI principles to their sectors. Regulators are encouraged to be transparent about the actions that they are taking.

The previous government’s strategy, set out in “A pro-innovation approach to AI regulation,” was presented as a “context-based” approach focused on where and how AI is used. The approach was founded on common law principles of only imposing legal and regulatory obligations where necessary to address identifiable risks. It was also based on the five principles (set out in the summary table above), with a preference for not initially putting those principles into statute. Certain regulators, including the financial services regulators, were requested to update their strategic approach to AI. Regulators are also encouraged to develop their policy approach as needed, to issue guidelines and to use technical standards to assist AI developers and deployers in implementing the principles. There is no indication that policy will change on these matters with the change of government. The previous government had also established an AI Safety Institute to carry out research on AI safety and to develop and conduct evaluations of advanced AI systems. The House of Lords has indicated that it wishes the AI Safety Institute to be put on a statutory footing, although no bill has been proposed for this.

The financial services regulators’ approach to regulating AI used or intended to be used in the financial services sector is technology-agnostic, principles-based and outcomes-focused. Before the change in government, the U.K. financial services regulators described how the previous government’s AI principles fit with their rules, high-level principles and expectations, and how those apply to regulated firms using AI. These include:

  • Safety, security and robustness 

The Financial Conduct Authority’s (FCA’s) Principles for Businesses apply. For example, firms must conduct their business with due skill, care and diligence (Principle 2) and take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems (Principle 3). Some of the Threshold Conditions also apply – these are the minimum conditions a licensed firm must satisfy to obtain and maintain its licensed status. For example, a firm’s business model must be suitable, compatible with the firm’s affairs being conducted in a sound and prudent manner, and have regard to the interests of consumers and the integrity of the U.K. financial system. In the area of operational resilience, firms must be able to respond to, recover and learn from, and prevent future operational disruptions.

  • Appropriate transparency and explainability 

High-level requirements and principles relating to the information firms must provide to consumers apply, including the Consumer Duty for retail business, and for wholesale business, the principle requiring firms to communicate information in a way that is clear, fair and not misleading (Principle 7).

  • Fairness, which includes data protection 

Various Principles apply, such as the Consumer Duty, under which firms providing retail services or products must act to deliver good outcomes for retail customers and ensure this is reflected in their strategies, governance and leadership. For wholesale business, treating customers fairly (Principle 6) applies. For all firms, the principles of managing conflicts of interest (Principle 8) and respecting the customer relationship of trust (Principle 9) apply.

  • Accountability and governance 

The FCA’s Principles apply, in particular those on management and control. The requirements for firms to have senior management arrangements, systems and controls, as well as the Senior Managers and Certification Regime, apply.

  • Contestability and redress

For example, firms are required to have complaints handling procedures and policies.

The FCA notes that a more proactive approach to supervision is warranted where a firm uses AI systems. It has said that it would adapt by placing a strong focus on the testing, validation and explainability of AI models, rigorous accountability principles, and openness and transparency. The regulators are monitoring the situation, including wider technology trends such as quantum computing, and future adaptations have not been ruled out.

The Bank of England’s Financial Policy Committee is engaged in considering how AI innovations may impact financial stability. The risks here include magnifying herding or broader procyclical behaviours, increasing cybersecurity risk and intensifying interconnectedness.

The U.K. ICO last year updated its guidance on AI and Data Protection to provide greater clarity on fairness requirements.

Action Plan

A significant concern for companies adopting AI systems is how to guard against unwanted outcomes, since AI has the potential to operate unexpectedly in future or unknown factual situations. Linked to that is the question of where responsibility for AI lies and what actions are required of registered individuals in senior management positions and of legal entities. Companies can take steps to promote the appropriate use of AI, including the following measures, which are broadly consistent with regulatory guidance and the EU’s AI Act. For companies using or intending to use AI systems in the EU market, an early review is recommended to ensure compliance with the relevant requirements of the AI Act.

  1. Prepare and socialise internally an AI policy that embeds the core principles of fairness and transparency with concepts of human oversight, explainability, security and safety. An AI policy should be modular, with multiple layers, to ensure accessibility and usefulness across a wide range of audiences.
  2. Develop an AI framework and governance structure that sets out clearly roles and responsibilities across the lifecycle of each AI system. If appropriate, establish a separate AI policy and compliance team. The framework should be orientated around specific use cases for AI systems and be interdisciplinary, combining the business, compliance, operations, infosec, IT and in-house legal in a single forum. U.K. financial services firms are required to have strong governance oversight with the board promoting robust risk management, and clear organisational structures that demonstrate transparent and consistent lines of responsibility.
  3. Undertake a mapping exercise of the AI systems in use and intended to be used (i.e., an AI inventory), and where each will be used and/or placed, including third-party systems (an illustrative sketch of an inventory record appears after this list). Assess the potential risks involved for each AI system, including the level of risk, and map those against the relevant regulatory and legal requirements. This will include determining the firm’s role, taking into account each jurisdiction’s definitions and requirements. Document how each risk from an AI system will be controlled and mitigated. U.K.-incorporated banks, building societies and PRA-regulated investment firms approved to use an internal model for calculating their regulatory capital requirements must satisfy the Prudential Regulation Authority’s (PRA’s) Model Risk Management (MRM) Principles. The PRA is clear that the MRM Principles, which came into effect in May 2024, apply to AI models, including the requirement to maintain a comprehensive model inventory. All U.S. companies, including those operating in the financial services sector, should consider enhancements to their compliance programs to address the risks associated with AI.
  4. Adopt and implement measures to manage the risks. It may be helpful to do so thematically in the following three risk management pillars.
    Use Case
    Clearly define the use case because it will drive the risks. For example, a system that is involved in pricing brings additional risks relating to price collusion that will not be relevant in other use cases, whereas a customer-facing chatbot that directly interacts with customers about financial products raises privacy and ethical issues that will not be relevant to, say, using AI to generate software. Assess whether using AI will result in a better outcome than the existing solution, taking into account relevant factors such as efficiency, cost, accuracy and security.
    Operational
    Implement operational steps to align and integrate AI into the business. This includes security measures (e.g., bring-your-own-key (BYOK) encryption), configuration of the model and user profiles, and privacy-enhancing technologies. The interdependencies between legal, operational and security stakeholders are greater than in non-AI-based technology deployment.
    Contractual
    Contract terms help to mitigate legal risk, both in the contract between the organisation deploying an AI model and the model provider, as well as in contracts between an organisation and its customers. In negotiations with foundation model providers, there are likely to be red lines for each organisation, such as risk allocation or customer data/trade secrets not being exposed or used to train the models for others.
  5. This note generally assumes that you are deploying AI in your business, rather than developing models. Where businesses develop models (e.g., by training or customising them via fine-tuning or retrieval-augmented generation (RAG)), the risks change—and, in most cases, increase.
  6. Assess how AI is used by third-party service providers, and the impact of its use on the recipient’s business and clients. Consider the impact of any specific legal regulatory requirements.
  7. Conduct an audit of all existing commercial agreements to identify those requiring updates to address AI-specific risks. These will include, as a minimum, all service agreements and technology access agreements. Update and revise these agreements as necessary to include AI-specific protections relating to privacy, data usage rights, IP infringement, IP ownership, liability and indemnity clauses, compliance with laws and the recipient's AI policies. Ensure that regulators also have access.
  8. Establish a monitoring and review process to ensure ongoing risk mitigation and compliance.
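As a purely illustrative aid to item 3 above, the sketch below shows one way an entry in an AI inventory might be structured so that use cases, roles, jurisdictions, risk levels and controls can be recorded and mapped against regulatory requirements. The field names, risk tiers and flagging rule are assumptions made for illustration only; they are not prescribed by the EU AI Act, the PRA’s MRM Principles or any other regulator, and any real inventory should be tailored to the firm’s own regulatory mapping.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative internal risk tiers; these are not the AI Act's legal categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class AISystemRecord:
    """One entry in a firm's AI inventory; all field names are hypothetical."""
    name: str                                   # e.g. "Retail credit scoring model"
    use_case: str                               # business purpose the system serves
    role: str                                   # e.g. "deployer" or "provider" (AI Act terminology)
    jurisdictions: list[str]                    # where the system is used or its output is relied on
    risk_tier: RiskTier                         # internal assessment of the level of risk
    third_party_provider: str | None = None     # vendor, where the system is sourced externally
    processes_personal_data: bool = False       # whether GDPR / U.K. GDPR is engaged
    controls: list[str] = field(default_factory=list)  # documented mitigations for identified risks
    owner: str = ""                             # accountable senior manager or function


def needs_enhanced_review(record: AISystemRecord) -> bool:
    """Flag entries warranting closer regulatory mapping under this illustrative rule."""
    return record.risk_tier is RiskTier.HIGH or (
        "EU" in record.jurisdictions and record.risk_tier is not RiskTier.MINIMAL
    )


# Example usage: a single hypothetical inventory entry.
inventory = [
    AISystemRecord(
        name="Customer service chatbot",
        use_case="Answering retail customer queries about financial products",
        role="deployer",
        jurisdictions=["U.K.", "EU"],
        risk_tier=RiskTier.LIMITED,
        third_party_provider="External foundation model provider",
        processes_personal_data=True,
        controls=["human escalation path", "output logging", "disclosure to customers"],
        owner="Head of Retail Operations",
    ),
]
flagged = [r.name for r in inventory if needs_enhanced_review(r)]  # ["Customer service chatbot"]
```

Keeping each record structured in this way makes it straightforward to filter the inventory by jurisdiction, risk tier or third-party dependency when regulatory requirements change, and to evidence the monitoring and review process described in item 8.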

For additional information, read, “Desire to harness potential of generative AI drives rising interest in data as an asset” in which Allen & Overy, now A&O Shearman, discuss steps for mitigating the risks involved in generative AI.

Elevate your AI strategy | Network with peers from global businesses

Are you ready to take your AI strategy to the next level? Join us for Phase 3 of the AI Working Group, where we will explore the latest developments and challenges in AI regulation, cyber security, and M&A. Whether you are in IP, data, cyber, tech, or life sciences, you will benefit from our market-leading, cross-practice, and multi-jurisdictional AI advisory practice.

Phase 3 starts in October 2024, including topics such as:

  • Cyber Security and AI: How to navigate the heightened cyber security risk landscape in the context of AI, from incident prevention to response.
  • AI Act Compliance: How to meet the specific compliance obligations under the EU AI Act in different scenarios, such as deploying, providing, or developing high-risk AI systems or general-purpose AI models.
  • AI in M&A: How to conduct legal and strategic risk assessments for transactions driven by AI technology acquisitions.

If you would like more details about joining the AI Working Group, email AIWorkingGroup@AOShearman.com

Don't miss this opportunity to learn from our AI experts and network with your peers from the largest global businesses across various sectors.