There are various ways to describe ‘synthetic media’.
A broad definition covers any content that has been digitally created or altered. A narrower definition is limited to content generated by artificial intelligence (AI) that recreates human subjects (sometimes called ‘deepfakes’).
A preferable middle ground, however, is content created or altered using generative AI to recreate something real (such as a landscape, animal, or person), without being an actual video, image or audio recording of that thing.
Importantly, the term ‘synthetic’ implies that the content is artificial.
When used in the advertising context, this appears to introduce an inherent risk of misleading the consumer (see further below).
There are opportunities and risks in using synthetic media in advertising.
AI technology isn’t new to marketing. Machine learning-powered recommender models are continuously targeting consumers, sending them personalized communications to cater for individual customer preferences.
However, generative AI and the creation of synthetic media is a step-change in the industry because it transforms the whole creative process and the type of content directed at potential buyers.
Advertising, like the rest of the creative industries, involves the production of original content such as images, designs, videos and music while taking inspiration from previous ideas or cultural references.
This process has now become much easier thanks to AI. Generative AI can turn an initial marketing concept into a fully-fledged advertising campaign more quickly, and far more cheaply, than ever before. Traditional content creation methods, such as recording or filming, may become redundant, and whole marketing campaigns can consist purely of “synthetic” original or modified content. This has obvious attractions for the tech giants, such as Meta, which is rolling out generative AI tools for advertisers that can create content such as image backgrounds and variations of written text.
This is exciting, but there are also numerous risks, including those relating to intellectual property.
Risk of IP infringement
There is an ongoing concern that output from generative AI systems may incorporate copies or substantial parts of third-party content that was taken without permission to train the system. This, in turn, presents an IP infringement risk for the user of these systems, particularly from a copyright perspective.
To date, there is no substantive court decision on this issue, but there are ongoing actions in the US and UK (Andersen v Stability AI Ltd, Midjourney Inc and DeviantArt Inc in California, and Getty Images v Stability AI in Delaware). In these cases, artists and rights holders have asserted copyright infringement, violations of rights-management laws and trademark infringement arising from the use of their works to train generative AI models; in the UK, Getty’s parallel action in the England and Wales High Court also alleges database right infringement.
Such actions are a hidden risk for those who use output from generative AI systems, including marketing teams using synthetic media. Such users are unlikely to know whether copyrighted content is reproduced in the AI’s output, or in what proportion, which makes the underlying risk particularly difficult to quantify. A small number of AI providers are starting to offer a level of protection, typically some form of indemnity against third-party IP infringement claims arising from their models’ output. The wording of these indemnities needs to be considered carefully in each use scenario, particularly in relation to any exclusions or carve-outs from the scope of the indemnity. Market practice on infringement indemnities (and other risk allocation terms more broadly) has not yet crystallized.
Uncertainty regarding IP ownership
Currently, no legal regime specifically addresses the subsistence of copyright in AI-generated works. But it is likely that most synthetic media will not be protected by copyright because generating it involves too little human creativity. This is a problem for advertising agencies, which almost always need to warrant that they have the necessary rights to exploit content they produce for their clients.
U.S.
The current guidance from the U.S. Copyright Office is that the output from most current generative AI models is not copyrightable, unless human authors substantially modify the work by editing or arranging it.
This is premised on the understanding that it is the AI, not the human user, that executes the elements of authorship. On this basis, the Office, in decisions in February and August 2023, rejected copyright claims for images created by the AI image generator Midjourney, even though the users reportedly used hundreds of prompts to iterate and refine the output.
This was because the user cannot predict Midjourney’s output, which distinguishes it from other tools used by artists. Furthermore, the users’ subsequent edits to the images were judged too minor and imperceptible to supply the creativity necessary for copyright protection.
In August 2023, the District Court for the District of Columbia corroborated this position in Thaler v Perlmutter, stating that human authorship is a “bedrock requirement of copyright”.
UK
The UK is one of a very small number of jurisdictions where copyright can subsist in a computer-generated work that has no human creator. But it is not clear who owns the copyright in an AI-generated work. UK law provides that the author (and therefore the first owner) of a computer-generated work is the person who made the arrangements necessary for the creation of the work. The problem is that identifying this person in relation to commercialized synthetic media is not easy, because there will be competing claims. Arguably, both the AI’s developer, who designed and trained the model, and the user, who conceived the prompts to create a specific work, could be the one who made the arrangements.
Ultimately, the answer will probably depend on the user’s level of input and the predictability of the output. It would be hard to argue that a user who inputted a simple prompt to create a generic piece of marketing would own any copyright in the work, because most of the creative process was left to the AI. Conversely, a user who inputs highly detailed prompts and clearly defines what the output should be is more likely to be viewed as having made creative choices that entitle them to be the author of the work and so own the copyright.
Other IP risks
Advertising agencies or marketing departments that manipulate existing content to create synthetic media need to be aware that the content they input into generative AI models can be incorporated into the system’s training data. This presents a risk of putting their own valuable IP or confidential information into the public domain. Without appropriate restrictions, the AI could output that content for use by others, which could have severe reputational consequences.
In addition to wider ethical considerations, the recreation of a real human subject in deepfake media brings other possible IP actions into play: passing off (UK), rights of publicity (US), breach of confidence, breaches of the moral right of integrity and the false attribution of a copyright work, and more general claims such as misleading advertising (see below), breaches of the UK Human Rights Act 1998, and defamation.
Wider issues
While this article focuses on IP, it is worth noting that UK advertising codes state that marketing communications directed at consumers must not “materially mislead or be likely to do so”.
These rules apply to traditional forms of advertising, but the risk is heightened for synthetic media because of its inherently artificial nature (as mentioned above) and its capacity to exaggerate the capability or performance of goods or services.
It might be possible to reduce the risk of misleading consumers by labelling content as AI-generated. This is the approach currently favored by the European Commission, which is advocating for AI-generated content to be clearly marked. Such labelling is not mandatory at present, so the reduced risk of misleading consumers must be balanced against the reduced effectiveness of an advertisement if consumers know that it is in some way “not real”.
Finally, as mentioned above, the use of deepfakes has the potential to cause harm. AI experts and industry leaders are therefore taking collective action to recommend shared values and practices for the responsible use of this form of synthetic media. One example is the Partnership on AI’s (PAI’s) Responsible Practices for Synthetic Media, a framework for those who develop, create, and share AI-generated content to adopt good practices and counter the harmful use of synthetic media.
Mitigating risks
To mitigate the risks outlined above, businesses using synthetic media in their advertising need to consider the appropriate levels of governance and compliance.
Depending on the use case and individual risk profile, this may include some or all of the following:
- Conducting due diligence when choosing generative AI tools, platforms, and partners.
- Securing adequate IP infringement warranties and indemnities.
- Adopting appropriate policies, internal guidelines and training.
- Deciding which types of synthetic media are permitted in advertising, checking that they are not misleading, and obtaining consent from any individuals depicted.
- Collecting evidence to support the claims and messages conveyed in synthetic advertising materials.
- Labelling media that includes synthetic elements, especially when this may change the way that the content is perceived.
- Getting involved in industry actions and complying with shared recommended practices regarding the ethical and responsible use of synthetic media.
This article was first published in ManagingIP on November 2, 2023.
We invite you to delve deeper into this subject by listening to our dedicated podcast episode, The power of synthetic media in advertising and mitigating risks from generative AI, with our AI Communications Lead, Clemency Wells.