Encryptionless Digital Extortion: The New Tactic That Grew 11-Fold in One Year

Arctic Wolf 2026 report reveals structural shift in cybercrime: silent data theft replaces file encryption
February 21, 2026

The phone rings. It's your boss. The voice is unmistakable, with the same tone and cadence you already know. He asks for an urgent favor: a bank transfer to secure a new contract, or a client's confidential information. Everything seems normal, and your trust takes over. You start to act.

What if it's not really your boss? What if every inflection, every word you think you recognize, has been perfectly imitated by a cybercriminal? In seconds, a routine call can turn into a costly mistake, with lost money and compromised data.

What was once science fiction is now a real and growing threat to companies of all sizes. Criminals have evolved from poorly written phishing emails to sophisticated AI voice cloning scams, an alarming new front in corporate fraud known as vishing (voice phishing).

The new frontier of fraud: when the voice of leadership is the weapon of crime

We have spent years learning to identify suspicious emails, but we have not trained our ears to question the voices of people we know. It is exactly this trust that voice cloning scams exploit. The barrier to entry for these attacks is surprisingly low: just a few seconds of audio from a person, easily obtained from press releases, interviews, podcasts, or social media posts, is enough for AI tools to create a voice model capable of saying anything a criminal types.

This tactic represents a direct evolution of BEC (Business Email Compromise), which traditionally relied on spoofed emails. The voice, however, adds a sense of urgency and authority that text cannot replicate, exploiting the natural tendency of employees to trust and quickly respond to a request from leadership.

The financial impact is real and frightening.

The numbers show the scale of the problem. According to Deloitte, losses from generative AI-enabled fraud could jump from $12.3 billion in 2023 to $40 billion by 2027. Cases of deepfake fraud in North America surged by 1,740% between 2022 and 2023, with losses exceeding $200 million in the first quarter of 2025 alone, according to data from the World Economic Forum.

Two recent cases illustrate the destructive power of these scams:

  • The $25 million scam in Hong Kong: An employee of a multinational was deceived in a video call with deepfake avatars of their colleagues, including the company's CFO, resulting in a massive transfer.
  • The case of the Swiss businessman: In January 2026, a businessman was convinced to transfer "several million Swiss francs" after a series of phone calls with the cloned voice of a business partner.

The reality is that, as pointed out by a study from Queen Mary University of London, most people can no longer distinguish a real voice from one cloned by AI, making human detection an unreliable defense.

How to protect your company from voice fraud

If technology advances to the point of deceiving our senses, defense must evolve beyond technology. The most effective protection against vishing lies in processes and organizational culture. Below, we present an action plan to safeguard your operation.

1. Implement a "zero trust" policy for voice requests

No request for fund transfers, changes to banking information, or sharing of sensitive information should be fulfilled based solely on a phone call or voicemail, no matter how authentic it seems. The rule is clear: never trust, always verify.

2. Create a secondary channel verification protocol

This is the most critical step. If an executive calls requesting urgent action, the standard procedure should be:

  1. Hang up the initial call: Thank the requester and inform them that you will call back to confirm.
  2. Initiate a new communication: Call back the person's official internal phone number, or send a direct message through a secure corporate communication channel (such as Microsoft Teams or Slack) to validate the request.
  3. Wait for explicit confirmation: Proceed only after receiving confirmation on the secondary channel.
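The core of the callback protocol is that the number you dial back must come from an official directory, never from the suspicious call itself. A minimal sketch in Python, assuming a hypothetical IT-maintained directory (`EMPLOYEE_DIRECTORY` and the function name are illustrative, not part of any real system):

```python
# Hypothetical sketch of the secondary-channel callback protocol.
# The directory is maintained by IT; numbers are NEVER taken from the inbound call.
EMPLOYEE_DIRECTORY = {
    "cfo@example.com": "+1-555-0100",
}

def verify_voice_request(requester_id: str, claimed_callback_number: str) -> str:
    """Return the number to call back, ignoring whatever number the caller gave."""
    official = EMPLOYEE_DIRECTORY.get(requester_id)
    if official is None:
        # Unknown requester: deny by default and escalate to security.
        raise ValueError("Requester not in official directory: deny and escalate")
    # Step 1: hang up the original call (never stay on the line).
    # Step 2: initiate a NEW call using the directory number only.
    if claimed_callback_number != official:
        # A mismatch is a red flag worth logging, but the official
        # number is used either way.
        pass
    return official
```

The design choice worth noting: the caller-supplied number is deliberately never returned, because a scammer will happily provide a "callback" line they control.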

3. Adopt safe words

For high-risk transactions, establish challenge phrases or "safe words" known only to a restricted group of people. If the requester on the call cannot provide the correct safe word when asked, the transaction is immediately denied and a security alert is triggered.
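If the safe-word check is ever automated, the comparison should not leak information through timing. A minimal sketch, assuming a hypothetical phrase store (the `SAFE_WORDS` dictionary and the example phrase are illustrative; real safe words belong in a secrets manager, rotated regularly, never in source code):

```python
import hmac

# Illustrative only: hypothetical challenge phrases keyed by transaction type.
SAFE_WORDS = {"wire_transfer": "blue-heron-42"}

def challenge_passed(transaction_type: str, spoken_word: str) -> bool:
    """Deny by default; accept only an exact match on the configured phrase."""
    expected = SAFE_WORDS.get(transaction_type)
    if expected is None:
        return False  # no safe word configured: treat as failed challenge
    # Constant-time comparison avoids leaking the phrase through timing.
    return hmac.compare_digest(spoken_word, expected)
```

A failed challenge should not just return False in practice; per the policy above, it should also trigger a security alert.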

4. Elevate the level of awareness training

Your team is the first and last line of defense. Cybersecurity training needs to go beyond passwords and phishing. It is essential to include simulated vishing attacks to test how finance, HR, and executive assistant teams react under pressure. Educating employees about how easily voices can be cloned and caller IDs spoofed is crucial.


AI voice cloning technology is not a future threat; it is already being actively exploited by criminals. Ignoring this risk is not an option. The good news is that defense does not require astronomical investments in new technologies, but rather a change in mindset and strengthening of internal processes. By adopting a healthy skepticism and implementing rigorous verification protocols, your company can turn the main human vulnerability into its greatest strength.

Is your company prepared for the next generation of cyber threats? Zamak Technologies offers a Strategic IT Diagnosis to assess your vulnerabilities and build a robust and resilient defense.
