Technology that until recently was the domain of advanced labs has now become a common weapon for ordinary criminals. The latest reports out of Michigan point to a massive wave of algorithm-generated extortion. Scammers are ruthlessly exploiting residents’ trust by masquerading as their friends or family members.
Detroit journalists summed up the current crisis in the local market:
“Artificial intelligence is making it easier than ever before for scammers to deceive ordinary people — and Michigan residents are paying the price.”
Voice cloning has become the focal point of these attacks. Criminals no longer need hours of studio-quality recordings to pull off a successful scam. A mere few seconds of audio scraped from a victim’s public social media profile – like an Instagram story or a TikTok video – is all it takes. After feeding the model this sample, scammers get a digital copy of the victim’s voice capable of saying anything while maintaining the original tone and even the right inflection. It is obviously not a perfect clone, but over a phone call, it can be shockingly realistic.
The criminals then call relatives and act out highly emotional scenarios. They simulate crisis situations – a car crash, a sudden arrest, or an urgent need for bail money – and use time pressure to demand an immediate cash transfer or account login details.
You can learn more about this technology in our dedicated article.
The online threat isn’t limited to audio, however. Residents are simultaneously being bombarded with videos created using deepfake technology. Criminals are churning out hyper-realistic clips in which cloned likenesses of well-known authority figures, celebrities, or local officials encourage people to hand over cash or invest in bogus crypto schemes.
The rapid evolution of these methods is forcing citizens to change their habits. Cybersecurity experts advise against relying solely on the sound of a voice on the other end of the line, urging people to adopt verification methods instead. The most effective at-home defense against tech-driven fraud is establishing a “safe word” with loved ones – a pre-agreed word or verifying question that helps distinguish a real family member from a computer-generated clone.