There’s something hilariously poetic about a ~2,500-year-old fable being relevant today because of AI.
"South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city.
The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting down a road intersection"
To cry wolf is to say there’s a wolf here when it’s actually located elsewhere. The AI photo said there was a wolf at a certain intersection when it was actually located elsewhere.
In fact, crying wolf is doubly appropriate here, because it meant disturbing an operation that was looking for a wolf.
This is misdirection while there is an actual wolf.
Similar but different
That's not pedantic, that's the meaning of the idiom.
¹ Following the pronoun variant used in the fine article here.
The original comment was clever and the subsequent commenters were uninteresting to me. In this case, I only saw it because I’m on my phone, which doesn’t have Chrome extensions. Turns out I’d already blocked them.
Did they? The article says it's unclear as to their intent.
> Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online.
And you'll be shocked what the kids have been doing with databases and API calls
Like most important advances, such as plastics, nuclear power, diesel engines, synthetic fertilizers, computers and the internet, it produced both good and bad things.
It is like saying that plastics screw up everything they touch, for example when a plastic part replaces a more durable metal part, while forgetting that plastics are everywhere in our lives, often with no suitable replacement material.
Willfully diverting limited public service resources that might otherwise be assigned to saving someone's life or health?
Practically a social DoS
This is an accurate criticism of the boy in the fable, if... an unnecessary way to express the idea.
I don't care enough to bother finding out, but it seems like the BBC could have done some more journalism, if they were so inclined.
If it were true and the police saw it but didn’t act, the fallout for them could be much worse, depending on the outcome.
With the info presented in the article, it sounds like the cops jumped to conclusions, got publicly embarrassed and are now going after him to either save face or get revenge (depending on how credulous you are of LEO).
The only reason you are seeing this right now is because it has AI in the title.
Hypothetically, if a hacking tool was released that let non-technical people hack into sensitive databases, and then a journalist wrote the headline "local man hacks IRS", without any mention of the tool, wouldn't that be a bit irresponsible, to purposely leave that information out?
Photoshop? I don't think you need much skill.
Have you used Photoshop before? You come across as commenting on something you don't understand.
It’s a crime of opportunity¹, one where you have the idea and act on it on a whim. No opportunity, no crime, and the technology provided the opportunity.
So yes, the technology used matters.
We need to adapt what we post, and what we see and believe in photos, to avoid arrest. Especially so in the AI era, because generating these images, and these pranks, has become increasingly easy for anyone with no skills and minimal time.
I think the part I find most fascinating, though, is that it’s not clear whether he took this picture to the police, actively wasting their time, or whether he just posted it and they found it and mistakenly took it as truth. I have no insight into SK law, but to me it would be unfair if the police were the ones who treated this picture as evidence when it was never meant to be taken seriously.
If Tesla (insert any car manufacturer you hate) ran over a kid, I'd like to see the title say it, instead of "Tesla fined for violating traffic laws."
[1] waiting for some example where foolish policemen were outsmarted with simple tricks /s
To answer your question: relative ease is a function, in part, of one's skills and resources, so it's certainly a reasonable claim to make, but it will differ from person to person.
Sure, it's a little more involved than the two-second AI prompt, but it's a three-minute job for the lulz photoshoppers.
There are significantly more people able to type a few words into a prompt than people who can use an image editor fast and convincingly and would be inclined to waste their time on this kind of fake.
But would you? People grumble about $0.99 for an app they’ll use every day; I doubt paying even $5 (and waiting for a result!) for a fake image to mislead police is high on anyone’s list.
Making this image was likely fast and free. It’s a crime of opportunity.
And there are literally billions of everyone else.
Do you not see that the amount of fake images has exploded with free access and ease of use? That’s what a tool does. It’s silly to argue generative AI doesn’t make a difference in the proliferation of fake images, just like it’d be silly to argue that digital photography on a small multi-purpose device that is always with you doesn’t make people take more pictures.
What I actually said couldn't be any clearer, and it's rather silly to twist my words into a strawman you can argue against.
I don't know why people are so determined to miss the point that "people can do [image manipulation] faster with AI" does not magically mean that people weren't doing it before AI, at scale mind you. Did y'all really unironically believe EVERY single image you saw on the internet prior to the past few years was entirely real and entirely what it was presented as? My goodness
Did Orwell teach anything? What will they do with the next Visitors' spaceship photo?
Needs to be supported by smartphones, of course.
I don't understand, shouldn't they have let him go if the idea is that they still roam in the wild? Why force him back into a zoo?
The zoo provides a controlled environment needed to restore the species.
EDIT: typo/word ordering
Our local children's museum is part of a network of sites working to restore red wolf [1] populations. Every few years they get new wolves as the coordinators move young wolves around to optimize mating pairs.
You could adjust the firmware of a wildlife tag to start transmitting location every 10 minutes when the animal leaves a geo-fence.
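The firmware logic described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not any real tag's firmware: the fence center, radius, and reporting intervals are made-up values, the fence is assumed circular, and the distance check uses a standard haversine formula.

```python
import math

# Hypothetical parameters: a circular geo-fence around the enclosure.
FENCE_CENTER = (36.3504, 127.3845)  # illustrative (lat, lon) near Daejeon
FENCE_RADIUS_M = 500                # assumed fence radius in metres

NORMAL_INTERVAL_S = 6 * 3600        # routine fix: every 6 hours
ESCAPE_INTERVAL_S = 10 * 60         # outside the fence: every 10 minutes

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))  # mean Earth radius

def next_interval(position):
    """Seconds until the next transmission, given the latest GPS fix."""
    outside = haversine_m(position, FENCE_CENTER) > FENCE_RADIUS_M
    return ESCAPE_INTERVAL_S if outside else NORMAL_INTERVAL_S
```

In practice the escalated reporting rate trades battery life for tracking resolution, which is why a tag would only switch to the fast interval after leaving the fence rather than transmitting frequently all the time.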
They are also not implanted in the birds, but are a relatively large "backpack" or leg tag.
He would have been arrested even if the image wasn't AI.
The title and article are very...tabloid-y
“Authorities are investigating him for disrupting government work by deception, an offence that carries up to five years in prison or a maximum fine of 10 million Korean won ($6,700; £5,000)”
Somewhat harsher than the UK at least, where “wasting police time” would only get you six months or around a £2500 fine.
AI is plagiarism—full stop—nothing more, nothing less.
Of course, this point could have been made without sarcasm (and AI tells for parody)—I’m aware—but that would remove a certain… texture from the argument. And where, exactly, is the fun in that?
If it helps, imagine the text more as a work of art than an instruction manual. Art matters.