French immigrants are eating our pets!
honestly, it's pretty good, and it still works if I use a lower-resolution screenshot without metadata (I haven't tried adding noise or overlaying something else, but those might break it). This is PixelWave, not Midjourney though.
Clearly AI is on the verge of taking over the world.
I wanted to get a cat but I discovered I was allergic to the slime trail.
You have never seen the all-elusive catsnail? Bummer. You should look harder.
It’s a snat. They are not easy to catch, because they are fast. Also, they never land on their shell.
That’s a normal housecat. Not sure what people are confused about
There are a bunch of reasons why this could happen. First, it's possible to "attack" some simpler image-classification models: if you collect a large enough sample of their outputs, you can mathematically derive a way to process any image so that it won't be correctly identified.

There have also been reports that even simpler processing, such as blending a real photo of a wall with a synthetic image at a very low blend percentage, can trip up detectors that haven't been trained to be more discerning. But it all comes down to how you construct the training dataset, and I don't think any of this is a good enough reason to give up on using machine learning for synthetic-media detection in general; in fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model.

The challenge there, as in general, is keeping such a model from assuming that all anime is synthetic, since "AI artists" seem to be overly focused on anime and related styles…
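A toy illustration of that first "attack" idea, with made-up weights and pixel values (real evasion attacks like FGSM do essentially the same thing to deep networks, using gradients instead of the weights directly — nothing here is a real detector):

```python
def classify(weights, pixels, bias=0.0):
    """Toy linear classifier: positive score -> flagged as 'synthetic'."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "synthetic" if score > 0 else "real"

def evade(weights, pixels, eps):
    """Once you've derived the model's weights (e.g. from enough of its
    outputs), nudge every pixel by eps *against* the sign of its weight,
    which is guaranteed to lower the score."""
    return [p - eps * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

w = [0.8, -0.3, 0.5, 0.1]    # hypothetical "learned" weights
img = [0.5, 0.4, 0.3, 0.6]   # hypothetical pixel values

print(classify(w, img))              # flagged: "synthetic"
adv = evade(w, img, eps=0.3)         # small per-pixel nudge
print(classify(w, adv))              # same image to a human eye: "real"
```

The point is that the perturbation is tiny and systematic, so the processed image looks unchanged to a person while the score quietly crosses the decision boundary.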
That is a weird looking rabbit
Miao
I don’t get it. Maybe it’s right? Maybe a human made this?
The picture doesn’t have to be “real”, it just has to be non-AI. Maybe this was made in Blender and Photoshop or something.
Or maybe your expectations of AI detection are too high.
Check the snail house. The swirl has two endings. Definitely AI
deleted by creator
deleted by creator
Deleted by creator
created by deletor
It has been
0
days since classified military gene research was leaked by interrogating AI-detection models… you know people made fake pictures before image generation, right?
They made fake pictures before computers existed too.
I’ve seen the cave paintings: deer and horses everywhere, but when I look around, nothing but rocks and trees.
This obviously can’t be true, how did they do it without Photoshop? /s
I mean, optical illusions have been used to fool audiences for years.
https://youtube.com/shorts/bsyRnmdceqQ?si=SfUM2mssqc1sjvNt
This is a motion picture of course, but the idea of “faking” an image isn’t too far off.
Hehe, I know, I’m just being silly - the /s on my message means it’s in a sarcastic tone :) but thanks for taking the time to share that video!
I’m an idiot, thank you for the explanation, I got got
cute snat…
We get them a lot around here. They don’t make for good pets, but they keep the borogoves at bay.
Which is great, honestly. Borogoves themselves are fine, but it’s not worth the risk letting them get all mimsy.
That’s clearly a cail
Are we all looking at the same snussy?
Honestly, they should fight fire with fire? Another vision model (like Qwen VL) would catch this
You can ask it “does this image seem fake?” and it would look at it, reason something out, and conclude it’s fake, instead of… I dunno, looking for smaller patterns or whatever these detectors’ internal models do?
The AI was trained on a combination of cat videos and sponge bob