
AI-powered deepfake and voice cloning scams are targeting payroll systems, putting 13th-month bonuses at risk this holiday season.

‘Tis the season of scammers. As the 13th-month pay season nears for millions of Filipino workers, a new digital threat is emerging: an AI-powered fraud network capable of draining bank accounts in minutes. Welcome to the era of the Christmas cyber-grinch.

Targeting holiday budgets is a common practice for bad actors, many of whom now have access to AI.

In 2024, during the five-day Black Friday sales period from November 28 to December 2, the Philippines recorded some of the highest suspected digital fraud rates in the world.

According to TransUnion Philippines, suspected fraud peaked at 15 percent on Black Friday itself, while the five-day average stood at 13.6 percent, still significantly higher than the global average of 4.6 percent.

Even so, those fraud rates were on the decline, said Yogesh Daware, chief commercial officer of TransUnion Philippines.

TransUnion defines “suspected digital fraud” as transactions flagged either in real time due to risky indicators or those later confirmed as fraudulent after investigation, which means not all flagged transactions result in monetary loss.

This trend continues into the Christmas season, with the Philippine National Police Anti-Cybercrime Group (PNP ACG) reporting intensified operations against online scams.

The question now is how fraudsters might attempt to intercept 13th-month pay windfalls.

Deepfakes and voice cloning

The Cybercrime Investigation and Coordinating Center (CICC) and Scam Watch Pilipinas recently outlined the “12 scams of Christmas,” including impersonation scams in which fraudsters pose as bank representatives.

These actors typically claim that an account has been compromised and that a One-Time Password must be verified, leading to drained savings for victims who unknowingly hand over credentials.

This year, between January 1 and November 13, the PNP recorded 431 cases of vishing. Vishing is the fraudulent practice of making phone calls or leaving voice messages purporting to be from reputable companies to induce individuals to reveal personal information, such as bank details and credit card numbers.

Although the PNP has not recorded any vishing incidents involving voice cloning—a subcategory of deepfake technology—the risk remains high.

A McAfee report from 2023 states that scammers need only three seconds of audio, often scraped from platforms like TikTok, Instagram, and Facebook, to create convincing AI-generated voice clones.

Fraudsters are using deepfake voice technology to impersonate individuals and bypass security checks

Scammers using these clones could hypothetically call payroll departments and request last-minute changes to bank account details before salaries or bonuses are released.

IBM cybersecurity researchers have demonstrated how AI-generated audio can tamper with live calls and impersonate individuals. While IBM has not reported an overwhelming volume of voice-cloned calls to call centers, financial institutions are preparing for the possibility.

Worse still, tools for creating voice clones are readily available and often free, allowing individuals with minimal technical skill to create AI-powered impersonations.

With more than half of adults sharing their voices online at least once a week, fraudsters have ample material to exploit.

Real threats, real actors

Generative AI tools have become sophisticated enough to bypass multiple authentication layers worldwide.

According to Jan Sysmans from mobile app security firm Appdome, AI has defeated biometric authentication methods such as facial recognition, fingerprints, and voice scans.

Advances in voice cloning technology have prompted cybercriminals to conduct more impersonation scams. In 2024, the US Federal Trade Commission reported more than 845,000 fraud cases, many of which involved identity spoofing.

Internationally, cybercriminals have already used deepfake tools to bypass security systems. In China, fraudsters used deepfakes to defeat state facial recognition checks and issue fraudulent tax invoices, stealing the equivalent of $75 million.

In the United Arab Emirates, criminals cloned a bank executive’s voice to authorize a fraudulent $35 million transfer.

What regulators are doing—and why it matters

With deepfake tools becoming cheaper and more accessible, banks and regulators worldwide are reassessing identity-verification systems long considered reliable.

IBM researchers recently highlighted a new tactic called “audio-jacking,” in which generative AI intercepts a live call and replaces spoken information with altered details, creating a seamless impersonation.

For Filipino workers receiving 13th-month pay, this shift highlights a new era in financial fraud. Scammers are moving beyond simple phishing messages and now targeting the very systems designed to protect funds—from call centers to biometric verification—using AI tools that are increasingly easy to access and deploy. 

As these technologies progress, the risks increase: fraudsters can manipulate even routine transactions and standard payroll procedures, underscoring the importance of vigilance, robust security measures, and awareness not only for individuals but also for the institutions responsible for protecting earnings. 

Traditionally a time for celebration, the holiday season serves as a reminder to treat cybersecurity with the same seriousness as financial planning.
