Unfortunately, Glaze does not seem to work. When I trained a simple style LoRA on a few sets of glazed images using SDXL, the LoRA was still able to reproduce their style.
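For context, the setup was nothing exotic - roughly the standard diffusers + peft LoRA recipe (sketch below; the rank and other hyperparameters are illustrative, not the exact values from that run):

    import torch
    from diffusers import UNet2DConditionModel
    from peft import LoraConfig

    # Load the SDXL UNet and attach a small LoRA to its attention projections,
    # the same pattern the official diffusers SDXL LoRA training script uses.
    unet = UNet2DConditionModel.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        subfolder="unet",
        torch_dtype=torch.float16,
    )
    lora_config = LoraConfig(
        r=8,                        # illustrative rank
        lora_alpha=8,
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
    unet.add_adapter(lora_config)
    # ...then the usual denoising-loss fine-tune over the (glazed) style images;
    # only the LoRA parameters are trainable.

The glazed images go through exactly the same pipeline as any unmodified training set.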
Another unfortunate consequence of the introduction of Glaze and Nightshade is that some artists I follow have started glazing every new work they publish, leading to quite ugly results from the noise Glaze produces on high settings, despite its questionable efficacy.
If OpenAI steals all your work, that's copyright infringement - but if you try to stop them through technical means and they do it anyway, that's felony DRM circumvention.
It's snake oil, and it'd be snake oil even if it worked.
I've yet to hear of it doing anything. I've never heard anyone in an AI group worried about it in any way. No "damn, Glaze ruined my LoRA". To the extent anyone talks about it, it's either non-technical artist groups, or AI groups where somebody intentionally sets out to play with it to see if they can actually make it do something.
But even if it worked in its intended scope, even then it'd be snake oil, because you can't defeat every AI system simultaneously. Flaws can be exploited, but flaws aren't guaranteed to be (and almost certainly won't be) preserved over the long term. So anything that works now isn't going to work tomorrow. And defending against known models today is pointless, because they were already successfully created.
The whole idea of attacking an already finished product is fundamentally flawed, and would only work in extremely unlikely and contrived cases. Like a v1 that isn't very good, so the model's maker for some reason decides to pull in additional data, long after a well-publicized adversarial attack on v1, and incorporate it into v2.
>> Glaze works by understanding the AI models that are training on human art, and using machine learning algorithms, computing a set of minimal changes to artworks, such that it appears unchanged to human eyes, but appears to AI models like a dramatically different art style.
That's supposed to be the single most important sentence in the entire article, but it ends up being a mouthful that hardly makes sense.
>> So when someone then prompts the model to generate art mimicking the charcoal artist, they will get something quite different from what they expected.
"when" and "then" don't work like that.
I'm still trying to find a crisp solution statement beyond "is a system designed to protect human artists by disrupting style mimicry."
From what I remember, Glaze uses a small CLIP model and LPIPS (based on VGG) for its adversarial loss, which is why it's so ineffective against larger, better-trained models.
It uses SD to do a style transfer on the image via image-to-image, then runs gradient descent on the image itself to reduce the difference between its CLIP embedding and that of the style-transferred image, while trying to keep LPIPS low, and every step is clipped so the perturbation doesn't exceed a certain threshold from the original image.
So essentially it's an adversarial attack against a small CLIP model, even though today's models are much more robust than that.
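Roughly, that loop looks something like this (a sketch with an assumed CLIP checkpoint, epsilon, and loss weights - not Glaze's actual code):

    import torch
    import lpips                                     # pip install lpips
    from transformers import CLIPModel

    device = "cuda" if torch.cuda.is_available() else "cpu"
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
    perceptual = lpips.LPIPS(net="vgg").to(device)   # VGG-based LPIPS, as noted above

    def embed(pixels):
        # pixels: (1, 3, 224, 224) in [0, 1]; apply CLIP's pixel normalization
        mean = torch.tensor([0.4815, 0.4578, 0.4082], device=device).view(1, 3, 1, 1)
        std = torch.tensor([0.2686, 0.2613, 0.2758], device=device).view(1, 3, 1, 1)
        return clip.get_image_features(pixel_values=(pixels - mean) / std)

    def cloak(original, style_transferred, eps=8/255, steps=100, lr=1e-2, lpips_w=0.1):
        """Perturb `original` so its CLIP embedding moves toward the
        style-transferred target while staying visually close to the original."""
        delta = torch.zeros_like(original, requires_grad=True)
        target = embed(style_transferred).detach()
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = (original + delta).clamp(0, 1)
            # pull the embedding toward the "wrong" style, penalize perceptual change
            loss = (
                (1 - torch.cosine_similarity(embed(adv), target)).mean()
                + lpips_w * perceptual(adv * 2 - 1, original * 2 - 1).mean()
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
            # clip the perturbation so it never exceeds the visibility threshold
            delta.data.clamp_(-eps, eps)
        return (original + delta).detach().clamp(0, 1)

The whole defense hinges on that one frozen encoder; a model trained with a different feature extractor has no particular reason to be fooled by a perturbation optimized against it.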
I came here thinking that AI glaze is what non-AI products use to make their products look shiny to the audience.
Even if this did work now, there is nothing here that AI can't adapt to. It would take just a thousand such images in a random large image dataset for models to adapt to it, and then it'd be utterly pointless. As such, the effective half-life of any such approach is about a year, with each further adversarial adaptation yielding a diminished effect.
Snake oil. Even if it worked in a way that wouldn't be bypassed quickly, it was too late, and the few artists who've applied it aren't enough to matter in the next training runs. Watching artists pull down years, sometimes decades of already scraped galleries to apply sketchy anti-AI magic was distressing.
Their objective is not so much to fight mass scraping as to prevent fine-tunes with their name on Civitai, copying them specifically. Which happens a lot.
Sadly I agree that Glaze doesn't really work for it.
Are they still pushing the "security through obscurity" angle?
I don't quite have the domain knowledge to evaluate this, but the claims are outlandish.
I hope they mean tablets here, and not phones. I can't imagine any artist being more productive or effective on a tiny screen vs a large screen.