
Seizing the AI opportunity in Europe

In December 2022, MIT Technology Review named generative AI as one of its 10 breakthrough technologies for 2023. Less than a year later, respondents to a KPMG survey of CEOs ranked generative AI as their top investment priority globally. AI innovation is continuing at breakneck speed, with studies showing that AI models are now capable of learning from human behavior.

I recently read that if all AI technology advancement were halted tomorrow, there would be about 20 years’ worth of high GDP growth still to come simply from implementing current systems. I think that is very plausible, given there is so little adoption.

According to research from the International Data Corporation (IDC), spending on artificial intelligence, including hardware, software and services for AI-related systems, could more than double to USD 300 billion annually by 2026. And while the majority of that investment will flow to the U.S., 20% will be directed into Europe.

We are using AI to innovate within our business. As a vertically integrated company, we make machines and we have a huge amount of patient data from our clinic and our machines – more than a pharma company or a medical device company – because they don’t get into direct contact with patients daily, unless they are doing a clinical trial. AI will be a great tool to optimize healthcare in a personal way, which is a huge opportunity to make lives better.

Among respondents to our survey, more than three-quarters (77%) said their business viewed AI as a strategic priority. Indeed, many said they were beyond the strategy phase and firmly into integration, with AI already being used to write code, support customer services and create content.

Our in-depth interviews painted a fascinating picture of how AI is being deployed across sectors, with businesses using AI models to boost operational efficiencies, mitigate disclosure risk, manage their contractual estate and support M&A execution. Others were pursuing more disruptive applications, for example by applying AI to proprietary data sets to develop new digital products, increase the efficiency of market-facing activities, and create more personalized and informed experiences for their customers.

While the potential of generative AI is enormous, the technical, regulatory and legal complexities involved are equally large, with potential for significant challenges if they are not adequately managed. Our survey showed that of the 77% of businesses that saw AI as a strategic priority, only 26% were very confident they would achieve their aims. One in three were either not very, or not at all, confident.

We have our own AI tool where we’ve filed every company report we’ve published over the past 50 years. If I want to find what we’ve said on any issue, I can.

Further, only 26% of our respondents said the governance of AI was a business risk they currently have systems in place to mitigate. Considering the high number of corporates who said they were already working with AI, it appears that many are doing so without the right checks and balances in place. Even among the one in four businesses that had implemented risk mitigation systems, fewer than half (41%) were very or fairly confident that those systems work effectively. And only one in five of our respondents felt fully aware of – and well prepared for – the evolving legal and regulatory landscape around AI.

E.U. AI Act proposes tiered approach to regulation

The E.U. AI Act is the most comprehensive attempt at regulating the technology undertaken by any legislature globally. The proposed law is intended to align with the Lisbon Treaty, focusing on the level of risk a given AI implementation could pose to the health, safety or fundamental rights of a person – including the rights to non-discrimination, data protection and privacy – as well as the rights of the child.

In December 2023 the E.U. Parliament, Council and Commission reached political agreement on the AI Act after protracted negotiations. While the final text of the legislation is awaited, the key principle underpinning it is to target applications of AI based on whether they pose minimal, limited, high or unacceptable risks.

Minimal risk systems will be free from any additional regulatory obligations, while those deemed limited risk will need to follow basic transparency requirements. 

Alongside this risk-weighted, application-focused approach, there will be separate requirements that apply to certain types of AI models, including general purpose systems such as ChatGPT and Gemini. Businesses developing or adapting these models will need to keep a record of how their systems are trained, including what type of data was used, whether any of that data was protected, and what consents they had in place to use it. They will also be required to inform end users that they are interacting with an AI system rather than a human being. 

Developers of high-risk systems (which include CV-screening tools for job applications and robotic surgeons) will be subject to a conformity assessment and must be registered on a special E.U. database before these products and services can enter the E.U. market. Once in use they will be overseen by national authorities and the European Commission. 

Negotiations expose differing approaches between MEPs and Member State governments

The negotiations around the Act sparked intense debate between MEPs keen to protect fundamental rights, and Member State governments keen to use AI to protect national security. In the end, real-time AI facial recognition systems will be permitted for a narrow set of law enforcement purposes, including searching for victims of human trafficking and countering terrorist threats, but the use of AI to categorize people by sensitive characteristics such as gender, religion, race or ethnicity – as well as for social scoring, predictive policing and emotion recognition in workplace and educational settings – will be banned.

It is not solely through the AI Act that Europe is attempting to influence the evolution of AI. Cooperation between Big Tech companies and AI developers remains under intense scrutiny from antitrust authorities across Europe. In the U.K., the Competition and Markets Authority (CMA) has stated its intention to scrutinize such partnerships to assess their impact on competition and consumer protection. Speaking at a Fordham University antitrust event in New York in September 2023, Andreas Mundt, the head of the German Competition Authority, expressed similar concerns when he said: “… we should be extremely alert on the terms of cooperation between ‘Big Tech’ and these new AI companies.”

AI is helping us crunch data around M&A deals and lower the costs of due diligence.

AI’s risks for business

Generative AI models create two broad areas of legal risk for the companies that deploy them. The first relates to the expectation of errors (the so-called “black box” problem). Here, there is a likelihood that the models “hallucinate” and give incorrect responses that, in certain contexts, could lead to legal liability for tort, breach of advisory duties, consumer harm and/or regulatory violations.

Hallucinations can stem from incorrect or out-of-date data in the model’s training set, from the way the model weights its sources and introduces randomness into its predictions, or from historical bias in the information used to develop the model. They are also simply a product of the way the technology works. The output generated by any AI system is nothing more than a prediction, and no prediction will be 100% accurate. The models underpinning generative AI systems are no different, save that the risks are likely amplified in practice given their general-purpose nature and their wide range of potential use cases.

At the same time, the outputs of generative AI models are inconsistent and unpredictable, making it extremely difficult to ensure standards of quality and accountability are met. The same questions will produce different answers, and where AI models are deployed to deliver (or assist in delivering) financial advice for example, this can lead to variances in outcomes for consumers.
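To make this concrete, the short Python sketch below (purely illustrative; the distribution, prompt and temperature value are invented for the example and not drawn from any particular model) shows why identical prompts can return different answers: generation is a probabilistic prediction, and sampling from the same distribution produces a different completion on each run.

    import random

    # Hypothetical next-token distribution for a single prompt. A real LLM
    # produces a distribution like this over tens of thousands of tokens.
    next_token_probs = {
        "is suitable": 0.45,
        "may be suitable": 0.30,
        "is not suitable": 0.15,
        "could be suitable": 0.10,
    }

    def sample_completion(probs, temperature=1.0):
        """Sample one completion; a higher temperature flattens the distribution,
        raising the chance of a less likely (and possibly wrong) answer."""
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(list(probs.keys()), weights=weights, k=1)[0]

    prompt = "For this customer, the product "
    for run in range(3):
        # The same prompt can yield a different completion on every run.
        print(run, prompt + sample_completion(next_token_probs, temperature=1.2))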

The second risk derives from the fact that AI models ingest human-generated content and encode it within the mathematical representation used to produce responses. This – coupled with the fact that AI developers are incentivized to access as much data as possible to train their models – raises the possibility that someone else’s data may be used without permission or credit, which in turn creates real risks for both the developer and the AI user, and raises the possibility that the user may not be able to assert ownership over the model’s output. Crucially, the model may also automatically retain and learn from a user’s own IP. Questions in relation to data privacy and protection also arise in instances where the model has been trained using personal data (which is often the case for large language models) or where users input personal data in their prompts.

  • If an AI model has been fed with illegally scraped information, not only would the AI developer be likely to infringe third party IP rights (most likely copyright) at the point of training the model, but so would the user of the AI model at the point of use. This is because AI models “memorize” their training data and there is a risk that they reproduce a substantial part of an individual copyright work in an output. These issues are currently being tested through litigation, with the likes of OpenAI, Microsoft and Stability AI (the developer of AI image generator Stable Diffusion) being sued for copyright infringement, among other things, in various actions in the U.K. and the U.S.
  • There are various tools to mitigate these risks, from internal governance to operational controls and contract terms. Some developers have created what they refer to as “IP safe” AI by training their models only on licensed content, proprietary IP and rights-free information in the public domain, and are offering to indemnify users against any IP claims linked to content created by their tools. However, whether this will become standard practice remains to be seen.
  • Another risk is that in most jurisdictions, the outputs of AI systems do not benefit from copyright protection. As a result, any user looking to protect the commercial value of the outputs of an AI model will need to consider alternative forms of protection, such as trade secrets. The test for whether something qualifies as a trade secret is both evidentiary and practical, and any trade secret strategy requires careful consideration across multiple stakeholders in a business. Who has access? How is that access controlled? What security protections and encryption protocols are deployed? Are the appropriate non-disclosure agreements in place?
  • Investors will also need to assess open source software risk given that the model may have been trained using publicly available source code repositories such as GitHub. A proposed class action lawsuit has been launched against major AI developers in the U.S. alleging license breaches, fraud, negligence, “unjust enrichment”, unfair competition and privacy violations linked to the use of open source code to train large language models (LLMs). Open source software risk is particularly important to consider where the AI is being used to generate software code, as the output from the model may reproduce parts of the open source software from the training data set, which in turn can raise broader IP risks for the business.
  • Appropriate governance can help reduce the IP risks once the model is in use, for example by designing controls to ensure users avoid prompting the model with an instruction to copy, or with references to known trademarks or individuals (a simple prompt-screening control of this kind is sketched after this list). Likewise, clearly labelling outputs as the products of generative AI can guard against them being deployed for purposes beyond their intended use case. Other issues to consider include whether the outputs are for public or internal consumption, and whether they can be used verbatim or “for inspiration”.
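As a minimal illustration of the prompt-screening control mentioned above (the deny-list entries, function name and example prompt are placeholders invented for this sketch, not taken from the article), a deployer could screen prompts before they reach the model:

    import re

    # Hypothetical deny-list of instructions and marks the business does not
    # want passed to a generative model; a real list would be far longer and
    # maintained alongside the governance playbook.
    BLOCKED_PATTERNS = [
        r"\bcopy\b",
        r"\breproduce\b",
        r"\bword for word\b",
        r"\bExampleBrand\b",  # placeholder for a known trademark
    ]

    def screen_prompt(prompt):
        """Return (allowed, matches) for a user prompt before it is sent to the model."""
        matches = [p for p in BLOCKED_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
        return (not matches, matches)

    allowed, hits = screen_prompt("Please copy the ExampleBrand slogan word for word")
    print(allowed, hits)  # False, with the matched patterns listed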

The big factor at play right now is the AI race. The valuations might be crazy, but people feel the need to have a foot in the door through a relationship with AI vendors. Strategic investments in this space have picked up as a result.

Deploying AI: three risk management pillars

Use case

Businesses deploying AI must articulate clearly and exactly what the model is to be used for and, given the sweeping abilities of LLMs and the risk that they can be deployed for other purposes, implement strict governance controls to keep the system’s use within the original design. This means the use case needs to be reinforced with playbooks, training, system settings and working practices that reduce the likelihood of the system being used in ways that were not intended (e.g. it is not enough to implement a contractual restriction designed to protect trade secrets if operational steps such as encryption aren’t also introduced).

Operational

Businesses must also implement operational measures to integrate generative AI safely into their operations. Here, legal functions need to work closely with information security and technology teams. This collaboration covers security measures, the configuration of the model, and the use of privacy enhancing technologies such as homomorphic encryption and differential privacy – a technique that adds “statistical noise” to a data set so AI models can still look for patterns without breaching privacy rules. The interdependence between legal, operational and security stakeholders is higher in generative AI rollouts than for other types of IT project.
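The “statistical noise” referred to above can be illustrated with the Laplace mechanism, the textbook form of differential privacy. The Python sketch below is a minimal illustration rather than a production implementation; the data set, query and epsilon value are assumptions made for the example.

    import random

    def laplace_noise(scale):
        """Sample from a Laplace(0, scale) distribution as the difference of
        two independent exponential draws."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    def private_count(records, epsilon=1.0):
        """Differentially private count: the true count plus Laplace noise.
        A counting query has sensitivity 1, so the noise scale is 1 / epsilon."""
        return sum(records) + laplace_noise(1.0 / epsilon)

    # Hypothetical data set: which patients have a given condition (placeholder values).
    patients = [True, False, True, True, False, True, False, False, True, True]
    print("noisy count:", private_count(patients, epsilon=0.5))
    # The noisy aggregate preserves the overall pattern while masking whether
    # any individual record is present in the data.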

Contractual

Various contract terms help mitigate legal risk, both in contracts between the company deploying the AI and the model’s developer, and between the deployer and any end user (where generative AI is built into consumer-facing products or services). Businesses deploying AI systems will need to adapt their contracts with developers to reflect sector-specific requirements and conduct pre-contract due diligence – for example exploring how the model was trained and what data was used, in order to quantify the nature and extent of any IP infringement risk. The market continues to evolve in these novel areas of contract negotiation.

While we are awaiting the final text, the E.U. AI Act is based on developers keeping a record of what they do, for example what data was used for training, how the model was trained, and if they have used protected data.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.
