Six-time Archibald Prize finalist Kim Leutwyler says “it feels like a violation” that their art was used without their consent to train the artificial intelligence (AI) technology behind increasingly popular text-to-image software.
The Sydney-based, American-born artist is one of thousands of illustrators who are frustrated by their work being used to train AI image generators, which are now being used to create profit-making apps.
Debate between artists and technology companies has become heated because creators haven't been compensated, and many artists have taken part in online protests to raise concerns about AI's ethical and copyright implications.
There is still very little artists can do to protect their work from being used by AI, but some are beginning to opt out of certain systems. Others, however, are keen to opt in.
Let’s take a look at the situation and hear from people working in this complex and emerging space.
How can artists know whether their work is being used to train AI?
Artists are beginning to use online tools to check if their work is being used to train AI image generators.
Leutwyler used a site called haveibeentrained.com to find out if their work had been included in something called LAION-5B — a dataset of 5.85 billion images and their text captions taken from the internet (including some copyrighted artworks), which have been used to train various AI systems.
“I found almost every portrait I’ve ever created on there, as well as artworks by many Archibald finalists and winners,” Leutwyler said.
“It was very upsetting to see so many great Australian artists and emerging artists having their work used without their consent and then replicated in some form or another.”
Sydney-based visual artist and performer Tom Christophersen says it was “a bit of a shock” when they searched for their own art on the same website and discovered their work had also been captured by LAION-5B.
“I didn’t think I would care as much as I did. It was a bit of a rough feeling to know that stuff had been used against my will, without even notifying me,” they said.
“It just feels unethical when it’s done sneakily behind artists’ backs … People are really angry, and fair enough.”
Tensions escalated during the Lensa app controversy
A mobile app called Lensa became popular late last year when it began letting users create AI-generated portraits of themselves by applying a range of art styles to their selfies.
Artists shared their copyright concerns after noticing what appeared to be their styles being replicated by the Lensa algorithm. Others noticed that some images created by the app had what appeared to be poor attempts at artists’ signatures.
Lensa uses an AI text-to-image platform called Stable Diffusion, which itself was trained on images and captions from LAION-5B.
Christophersen says apps such as Lensa are “moving wealth and value away from independent makers and freelance artists” by generating revenue without reimbursing those whose works have been used to train the underlying technology.
“It’s already so hard to carve out a niche for yourself and get a buyer-ship as a visual artist that it feels a tiny bit like a kick in the guts when people are just going on these apps,” they said.
Leutwyler says they are concerned about the impact of apps such as Lensa on up-and-coming artists.
“The AI is replicating the brushstrokes, the colour, the technique and all of those unique things that make an artist’s practice so compelling,” they said.
“It’s then mass-producing it into something that is arguably great because it is accessible to so many people, however there should be some sort of copyright laws in place to help protect artists from having their work just completely ripped off.”
Artists have protested online after some companies launched AI image generators of their own or allowed AI-created images on their platforms.
Adobe, the software company behind Photoshop, has been criticised for allowing AI-generated images to be sold in its stock-image libraries.
In November, users of the art-sharing site DeviantArt spoke out when the company launched an AI image generator that could be trained on images already posted to the platform. The site later backtracked and switched to an opt-in approach.
In December, artists posted images denouncing AI on the art-sharing site ArtStation, after AI-generated images began appearing on that platform.
Are AI image generators breaking copyright laws?
The short answer is no — at least not as the laws currently stand.
An artist’s individual artworks are protected by copyright law, but their overall style is not.
So to show an AI image generator had breached copyright laws, an artist would need to prove that one of their artworks had been copied into the system.
That's difficult because such systems are opaque: their models consist of billions of numerical parameters that humans can't readily interpret.
Queensland University of Technology senior law lecturer Kylie Pappalardo says there can also be some exemptions that allow for temporary copies to be made incidentally as part of a system’s technical processes.
“Some copies are for non-expressive uses, meaning that no one ever really sees them,” she says.
“Instead, the copies are used to train algorithms. For example, to enable the search functions on the internet to work correctly. These are what we sometimes call non-consumptive uses.”
Dr Pappalardo says she believes copying images so an algorithm can learn from them to create new art could be seen as a non-consumptive use.
“Arguably, it’s not much different from a human looking at many different artworks in order to appreciate how to paint better,” she says.
“The difference with AI is that the only way it can ‘look’ at art is by making a copy, even if that copy is temporary.”
Dr Pappalardo says whether or not AI-created images actually constitute non-consumptive use remains “untested ground” in copyright law.
Prisma Labs, the company behind the Lensa app, says artworks created by AI “can’t be described as exact replicas of any particular artwork”.
“The AI learns to recognize the connections between the images and their descriptions, not the artworks,” the company wrote on Twitter.
Can artists opt out of AI?
While many AI image generators are trained with content taken without the original owners’ consent, artists are slowly finding ways to get more control over the use of their work.
In December, Stability AI, one of the organisations behind Stable Diffusion, confirmed that artists would be able to opt out of having their work used to train the next version of the system, Stable Diffusion 3.
Activist organisation Spawning, which runs haveibeentrained.com, says Stability AI will honour opt-out requests made through its site.
Spawning’s Mathew Dryhurst says haveibeentrained.com is being used by thousands of artists and has the potential to set “a pretty remarkable precedent” in the AI space.
“It is clear to us that it is of economic and cultural consequence to establish a means for consenting artist data,” he says.
“We would also argue that thinking longer term, consenting data is going to be more useful to AI companies than not.
“Consenting data is something that everyone can feel good about, and is better for AI development generally.”
Mr Dryhurst says Spawning is working on better ways to verify that people are only opting out artworks they actually own, and will soon offer an opt-in service "which will require robust verification".
He says while opt-out systems are imperfect, they are a step in the right direction and “a hard-fought concession” as Spawning continues its work with other AI organisations.
“Due to the recent culture war waging around this particular iteration of AI media, I feel we may be perceived as being driven by an anti-AI sentiment. This couldn’t be further from the truth,” he says.
“In truth, our team have been thinking about and experimenting with AI art for many years longer than many of the people currently speaking for this recent phase of AI art.
“AI is a geopolitical economic arms race. The internet is global. We feel our efforts will either augment legislative efforts or compensate for their absence.”
Some artists argue AI systems should only involve opt-in processes, while others are more flexible.
Tom Christophersen says they "probably will opt out, shortly".
“Having an opt-in or opt-out option, as a bare minimum, is definitely a positive,” they said.
Kim Leutwyler says they might have allowed their work to be used to train AI, had they been asked.
“I love technology. I use it in almost every stage of creating work,” they said.
“There are some great possibilities with it but there just has to be some sort of morality and ethics when it comes to training the AI.
“I think every artist will want something different. Some will want compensation, some will want acknowledgement, and some will want to completely opt out.
“You can’t dictate what everybody wants, and there needs to be that option.”
‘This just feels so incredibly not right’
Leutwyler says they hope to start a conversation about how to better prevent artists’ work from being used to train AI systems without their consent.
“As of now it’s completely legal … It doesn’t seem like the laws are able to move as quickly as technology does,” they said.
Christophersen says there needs to be “constant work” in the intellectual property and copyright space so artists aren’t taken advantage of.
“It’s maybe an inhibition to tackle stuff like copyright law that allows these companies to always keep artists on the back foot, which I guess is where a lot of this rage is coming from,” they said.
“Even though we can’t define what’s going on at the moment, this just feels so incredibly not right.”
While some artists are calling for copyright laws to be changed, Dr Pappalardo says regulators risk responding too strongly to a technology that is not yet fully understood.
“Sometimes it’s best to wait and see,” she said.
“I know that’s not very reassuring to artists right now but we don’t know how disruptive this is yet, and until we know that the law is not going to be very well placed to respond.”
Christophersen says they are supportive of people using AI to create new and innovative works, but things are on “shaky ground” when it comes to supporting artists.
“How do you define creativity in a quantitative way so that you can place some sort of value on it?
“And then how do you protect that and make sure it’s sustainable for human beings while being an exciting landscape where AI can grow as well?” they said.