Artists' new strategy to protect their copyrights against AI: data poisoning

It is well known that artists, among other professionals, have been seriously affected by the development of artificial intelligence tools, since the companies behind them have used artists' works without any authorization to train their models. Indeed, several artists have already filed lawsuits over these practices against the technology companies OpenAI, Meta, Google and Stability AI.

Faced with this reality, artists have taken action and turned to specialized tools to "poison" this training data.

Specifically, these tools let artists add invisible changes to the pixels of their works before uploading them to the internet. If those works are then incorporated into an artificial intelligence (AI) training set, the resulting model can break in chaotic and unpredictable ways, potentially damaging future iterations of image-generating AI models and rendering their outputs useless.
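To make the idea concrete, here is a minimal sketch of what an "invisible pixel change" could look like in practice. This is only an illustration of the general principle, not the algorithm used by real poisoning tools such as Nightshade, which compute carefully optimized adversarial perturbations against a model's feature extractor rather than random noise; the file names are hypothetical, and the sketch assumes Pillow and NumPy are installed.

```python
# Illustrative sketch only: a naive per-pixel perturbation that is
# imperceptible to the human eye. Real poisoning tools optimize the
# perturbation so the image is misrepresented in the model's feature
# space; random noise alone would not have that effect.
import numpy as np
from PIL import Image

def perturb_image(in_path: str, out_path: str, epsilon: int = 2) -> None:
    """Add small random noise (at most +/- epsilon per channel) to every pixel."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    # Random perturbation in [-epsilon, +epsilon] for each channel value.
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    # Clip back to the valid 0-255 range and save losslessly (PNG),
    # since lossy compression could destroy the perturbation.
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path, format="PNG")

if __name__ == "__main__":
    perturb_image("artwork.png", "artwork_protected.png")  # hypothetical files
```

A change of one or two intensity levels per channel is invisible to viewers of the image; the difference in real tools is that the perturbation is not random but targeted, chosen so that a model trained on the altered image learns a distorted association between the image and its content.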

The purpose of these tools is to deter violations of artists' copyright and intellectual property. Although the AI companies behind generative text-to-image models have offered artists the option of excluding their images from the training of future versions of those models, artists consider this insufficient.
