Artists angry after discovering artworks used to train AI image generators without their consent


Six-time Archibald Prize finalist Kim Leutwyler says “it feels like a violation” that their art was used without their consent to train the artificial intelligence (AI) technology behind increasingly popular text-to-image software.

The Sydney-based, American-born painter is one of thousands of artists frustrated that their work has been used to train AI image generators, which now power profit-making apps.

Debate between artists and technology companies has intensified because creators have not been compensated, prompting many to join online protests over AI’s ethical and copyright implications.

There is still very little artists can do to protect their work from being used by AI, but some are beginning to opt out of certain systems. Others, however, are keen to opt in.

Let’s take a look at the situation and hear from people working in this complex and emerging space.

How can artists know whether their work is being used to train AI?

Artists are beginning to use online tools to check if their work is being used to train AI image generators.

Leutwyler used a site called haveibeentrained.com to find out if their work had been included in something called LAION-5B — a dataset of 5.85 billion images and their text captions taken from the internet (including some copyrighted artworks), which have been used to train various AI systems.

“I found almost every portrait I’ve ever created on there, as well as artworks by many Archibald finalists and winners,” Leutwyler said.

“It was very upsetting to see so many great Australian artists and emerging artists having their work used without their consent and then replicated in some form or another.”
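For readers curious about the mechanics, haveibeentrained.com essentially wraps a similarity search over the LAION index in a web interface. The sketch below shows how a comparable lookup could be run with the open-source clip-retrieval client; the service URL, index name and file name are assumptions based on the public LAION demo and may have changed, so treat it as illustrative rather than a supported workflow.

```python
# pip install clip-retrieval
# Illustrative sketch only: queries a public CLIP similarity index over LAION,
# similar in spirit to the lookup haveibeentrained.com exposes as a website.
# The URL and index name below are assumptions and may have changed.
from clip_retrieval.clip_client import ClipClient, Modality

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public LAION search backend (assumed)
    indice_name="laion5B-L-14",              # LAION-5B index name (assumed)
    modality=Modality.IMAGE,
    num_images=20,
)

# An artist could search with one of their own images to see whether
# near-duplicates (and their captions and source URLs) appear in the index.
results = client.query(image="my_portrait.jpg")  # hypothetical local file

for r in results:
    print(r.get("url"), "-", r.get("caption"))
```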

Artist Kim Leutwyler says the use of their work without their consent “feels like a violation”. (Supplied: Kim Leutwyler)

Sydney-based visual artist and performer Tom Christophersen says it was “a bit of a shock” when they searched for their own art on the same website and discovered their work had also been captured by LAION-5B.

“I didn’t think I would care as much as I did. It was a bit of a rough feeling to know that stuff had been used against my will, without even notifying me,” they said.

“It just feels unethical when it’s done sneakily behind artists’ backs … People are really angry, and fair enough.”

Artist Tom Christophersen says they will likely opt out of having their work used to train AI. (Supplied: Laura Du Vé)

Tensions escalated during the Lensa app controversy

A mobile app called Lensa became popular late last year when it allowed users to create AI-generated portraits of themselves by combining their selfies with AI-generated art styles.

Artists raised copyright concerns after noticing what appeared to be their styles replicated by the Lensa algorithm, while others spotted marks in some of the app’s images resembling garbled attempts at artists’ signatures.

Lensa uses an AI text-to-image platform called Stable Diffusion, which itself was trained on images and captions from LAION-5B.
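To give a sense of how low the barrier is once a model like Stable Diffusion has been trained, the sketch below generates an image with the open-source Hugging Face diffusers library. The checkpoint name and prompt are illustrative assumptions; this is not Lensa’s actual pipeline, only a minimal example of the same underlying technology.

```python
# pip install diffusers transformers accelerate torch
# Minimal sketch of text-to-image generation with Stable Diffusion via the
# open-source diffusers library. This is NOT Lensa's pipeline; the checkpoint
# and prompt below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a commonly used public checkpoint (assumed choice)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires an NVIDIA GPU; use "cpu" (and float32) otherwise

# A single text prompt is enough to produce a finished-looking portrait,
# which is central to artists' concerns about style mimicry.
image = pipe("an oil portrait in a bold contemporary style").images[0]
image.save("generated_portrait.png")
```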

Christophersen says apps such as Lensa are “moving wealth and value away from independent makers and freelance artists” by generating revenue without reimbursing those whose works have been used to train the underlying technology.

“It’s already so hard to carve out a niche for yourself and get a buyer-ship as a visual artist that it feels a tiny bit like a kick in the guts when people are just going on these apps,” they said.

Tom Christophersen used haveibeentrained.com to find which of their artworks had been used to train AI. (Twitter: Tom Christophersen)

Leutwyler says they are concerned about the impact of apps such as Lensa on up-and-coming artists.

“The AI is replicating the brushstrokes, the colour, the technique and all of those unique things that make an artist’s practice so compelling,” they said.

“It’s then mass-producing it into something that is arguably great because it is accessible to so many people, however there should be some sort of copyright laws in place to help protect artists from having their work just completely ripped off.”

Artists have also protested online after some companies launched AI image generators of their own or allowed AI-created images on their platforms.

Adobe, the software company behind Photoshop, has been criticised for allowing AI-generated images to be sold in its stock image libraries.

In November, users of the art-sharing site DeviantArt spoke out when the company launched an AI image generator that could be trained using images already posted on the platform. The site then backtracked and went with an opt-in approach.

In December, artists posted images denouncing AI on the art-sharing site ArtStation, after AI-generated images began appearing on that platform.

Are AI image generators breaking copyright laws?

The short answer is no — at least not as the laws currently stand.

An artist’s individual artworks are protected by copyright law, but their overall style is not.

So to show an AI image generator had breached copyright laws, an artist would need to prove that one of their artworks had been copied into the system.

That’s difficult because such systems are opaque: the training images are not stored as identifiable copies, but are distilled into billions of numerical parameters that humans cannot readily interpret or trace back to a single work.
