Anthropic, the maker of the popular Claude AI model, said this week that it uncovered a massive extortion scheme in which attackers used Claude to target 17 organizations, including government agencies, emergency services and religious institutions.
Tech expert Omar Gallaga says the attackers used a technique called “vibe hacking,” which AI experts had warned about but did not believe was yet possible.
Highlights from this segment:
– Vibe hacking takes its name from “vibe coding,” a method of writing software by describing what you want to an AI.
– Anthropic reported that the extortionists used Claude to coordinate and carry out the attacks.