Anthropic’s Claude AI reportedly used in ‘vibe hacking’ scheme

The company says that extortionists used AI to target 17 organizations.

By Shelly Brisbin | September 4, 2025 3:27 pm

Anthropic, the maker of the popular Claude AI model, said this week that it uncovered a massive extortion scheme in which attackers used Claude to target 17 organizations, including government agencies, emergency services and religious organizations.

Tech expert Omar Gallaga says the attackers used a technique called “vibe hacking,” one that AI experts had warned about but did not believe was yet possible.

Highlights from this segment:

– Vibe hacking takes its name from “vibe coding,” a method of writing code with AI.

– Anthropic reported that the extortionists used Claude to coordinate attacks.
