In January, stock photo provider Getty Images (“Getty”) announced that it had commenced legal proceedings in London against Stability AI, creators of popular artificial intelligence (“AI”) tool Stable Diffusion, a text-to-image diffusion model capable of generating photo-realistic images in response to text inputs.
Getty, which sells the right to use images from a catalogue of photographs and illustrations, is claiming that Stability AI has infringed its intellectual property rights by unlawfully copying and processing 12 million photographs and associated captions and metadata without permission. Getty has subsequently filed a related lawsuit in the United States.
In a press release announcing its decision to raise legal proceedings, Getty acknowledged that AI has the “potential to stimulate creative endeavours” but noted that Stability AI did not seek licenses from it and instead “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests”.
These proceedings illustrate the increasing tension between rights holders in the creative industries and those developing new technologies, a tension that is likely to benefit from judicial scrutiny.
The UK government has been grappling with these tensions in recent years. Between September and November 2020, the Intellectual Property Office published a call for views on the relationship between intellectual property and AI. The consultation arose partly from questions being raised about the balance the copyright system strikes between the protection of human-created works and AI-generated works.
A further consultation ran between October 2021 and January 2022 and invited responses on a number of areas of focus, including text and data mining (“TDM”), which entails using automated techniques to analyse large amounts of information to identify patterns, trends and other useful insights. TDM is, in effect, the approach underpinning Stable Diffusion, which was trained on vast quantities of images and associated captions gathered from the Internet, allegedly including millions of images owned by Getty.
TDM automates and accelerates what would traditionally be done by eye: reading a document, making notes, and identifying relationships and trends. It usually requires some degree of copying of the material to be analysed (and therefore potentially infringes copyright). It is a particularly useful tool for researchers, who might view such “copying” as merely incidental to the way TDM works, as opposed to an activity designed to exploit copyright material. With this distinction in mind, the UK has had, since 2014, a specific copyright exception for TDM which, as the law stands, permits TDM on third-party materials only for non-commercial purposes or otherwise with the permission of the rights holder.
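To make the point concrete, here is a deliberately minimal, illustrative sketch of text mining (it bears no relation to Stable Diffusion's actual training pipeline): even this toy analysis necessarily reproduces the source material in memory as an intermediate step, which is the incidental "copying" the TDM exception is concerned with.

```python
from collections import Counter
import re

def mine_documents(documents):
    """Toy text-mining pipeline: tokenise each document and count
    term frequencies across the corpus to surface common terms."""
    corpus_counts = Counter()
    for doc in documents:
        # The document text is copied into the program's memory here --
        # the incidental reproduction discussed in the text above.
        tokens = re.findall(r"[a-z]+", doc.lower())
        corpus_counts.update(tokens)
    return corpus_counts

docs = [
    "Copyright protects original creative works.",
    "Text and data mining analyses large volumes of works.",
]
print(mine_documents(docs).most_common(3))
```

Real TDM systems are far more sophisticated, but the structural point holds: the analysis operates on a copy of the material, not on the original in situ.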
In October 2022, and following the consultation, the UK government announced its intention to relax the existing TDM framework and implement “a new copyright and database rights exception which allows text and data mining for any purpose”. The government’s view is that relaxing these restrictions will encourage investment in AI projects in the UK and ensure that the UK is at the forefront of AI innovation.
These proposals are controversial, particularly within the creative industries. In its January 2023 report At Risk: our creative future, the House of Lords Communications and Digital Committee explored the proposals, noting that they have “generated significant concern within the creative industries about potential loss of revenue”. UK Music, for example, commented that it was deeply concerned about the proposals, which would allow AI music to be created using copyright content that those controlling the AI do not own, with no compensation to the artists and rights holders whose investment created it. Similarly, the Publishers Association described the proposals as using “a sledgehammer to crack a nut”.
In light of this input, the Committee’s view is that the government must change its proposed approach to TDM to avoid undercutting creative industries’ business models. The Committee has suggested that the government should pause the implementation of the changes and first conduct and publish an impact assessment. It remains to be seen whether the government will heed the Committee’s feedback.
It is unclear what will transpire with the government’s proposals or indeed with the Getty litigation, which has TDM and copyright infringement at its heart. Getty’s CEO Craig Peters has compared the current legal landscape in generative AI to the early days of digital music, when companies like Napster were offering popular (but illegal) services. His position is that the legal proceedings are less about damages and stopping the development of AI art tools, and more about creating a new legal status quo in which AI systems respect intellectual property. If, however, the government’s proposals are adopted, they are likely to make it significantly harder for Getty to make persuasive arguments about copyright infringement; on the contrary, Stable Diffusion’s use of the images may end up falling within the ever-widening exception for TDM (and indeed, Stability AI’s position is that it already does).
It is clear that there is a need to strike an appropriate balance between AI systems, which are likely to benefit from a degree of unrestricted access to data, and rights holders, who deserve to be able to capitalise on their work. The government’s legislative proposals should be announced later this year and may or may not affect the direction of these legal proceedings and any litigation that might follow.
Regardless of the government’s proposals, it is with interest that those in the creative and technology industries await the outcome of the dispute between Getty and Stability AI, which should provide some much-needed clarity in a constantly evolving landscape.
If you would like to discuss the matters above or any surrounding creative works, please get in touch with the BTO BeCreative Team on 0131 222 2939 or email: firstname.lastname@example.org.