AI Phishing: Why Your Staff Cannot Tell Fake from Real Anymore
The email looks real. The voice sounds familiar. The video call seems genuine. In 2025 and 2026, none of these are guarantees.
🚨 A Wire Transfer Your CFO Never Approved
It is a regular Tuesday afternoon. Your accounts executive receives a WhatsApp message from what appears to be the CEO’s number. The profile photo matches. The name matches. The writing style matches. The message reads: “I am in a meeting and cannot call. Please process an urgent payment of RM800,000 to this new vendor account. I will explain later. Time sensitive.”
She hesitates for a moment, then transfers the funds. The vendor account belongs to a criminal. The CEO was never involved. The message was generated by AI, trained on months of the CEO’s past communications, social media posts, and voice recordings scraped from a company webinar.
This exact scenario played out at a Kuala Lumpur finance firm in 2025, according to the SimplyData Malaysia Cybersecurity Threat Report 2026. RM800,000 was transferred before the fraud was detected. By then, the money was gone.
Welcome to the new era of AI-powered phishing, where the threat is no longer a poorly written email from a foreign prince. It is a message, call, or video that sounds and looks exactly like someone you trust.
- 82.6% of phishing emails now contain AI-generated content (KnowBe4, 2025)
- 1,265% reported increase in malicious phishing emails since generative AI tools became widely available (SlashNext)
- Most Malaysian organisations encountered AI-powered phishing in the past year
- 3 seconds of public audio can be enough to begin cloning a person’s voice
🤖 How AI Has Completely Changed Phishing
For years, phishing emails were easy to spot. Bad grammar. Suspicious links. Generic greetings. Staff could be trained to catch them. That era is over.
Today, attackers use large language models (LLMs), the same technology behind tools like ChatGPT, to generate phishing messages that are personalised, grammatically perfect, and contextually accurate. They reference real projects, use your company’s internal terminology, and match the tone of the person they are impersonating. Standard email filters, built to catch keyword patterns and formatting anomalies, cannot distinguish these messages from genuine communication.
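To see why legacy filters fall short, consider a minimal, purely illustrative sketch of the keyword-matching approach older email gateways rely on. The phrases, function name, and sample messages below are assumptions for illustration, not any vendor’s actual implementation:

```python
# Minimal sketch of a legacy keyword-based phishing filter (illustrative only;
# real gateways are more sophisticated, but share the same blind spot).

SUSPICIOUS_PHRASES = [
    "dear customer",           # generic greeting
    "verify your acount",      # common misspelling in old scam templates
    "click here immediately",
    "you have won",
]

def legacy_filter_flags(message: str) -> bool:
    """Return True if the message matches any known-bad phrase."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

# An old-style scam email trips the filter...
old_scam = "Dear customer, click here immediately to verify your acount."
print(legacy_filter_flags(old_scam))   # True

# ...but a fluent, personalised AI-written request sails through,
# because nothing in it matches a known-bad pattern.
ai_phish = (
    "Hi Aina, following up on the Q3 vendor migration we discussed, "
    "please process the attached invoice to the updated account today."
)
print(legacy_filter_flags(ai_phish))   # False
```

The second message contains no misspellings, no generic greeting, and no suspicious phrasing, which is exactly what AI-generated phishing looks like, and why behavioural detection rather than pattern matching is now required.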
AI phishing in 2025 and 2026 operates across four distinct attack methods that every Malaysian business owner needs to understand:
- AI-written email phishing: personalised, error-free messages that reference real projects and use internal terminology
- Messaging-app impersonation: WhatsApp or SMS messages mimicking the writing style of a CEO, supplier, or colleague
- Voice cloning (vishing): phone calls using a cloned voice of an executive, bank officer, or government official
- Deepfake video calls: live or pre-recorded video impersonating a trusted person
🌐 What the World’s Experts Are Saying
KnowBe4: Phishing Trends Threat Report 2025
KnowBe4, the world’s largest security awareness training platform, confirmed that 82.6% of phishing emails detected between September 2024 and February 2025 contained AI-generated content, a 53.5% year-on-year increase. This represents a fundamental shift: AI phishing is no longer an emerging trend. It is the dominant method.
KnowBe4 also found that organisations implementing regular security awareness training reduced successful phishing click rates by more than 40% within 90 days, and by up to 86% within a year. The human element remains both the greatest vulnerability and the most effective defence.
Source: KnowBe4 Phishing Trends Threat Report 2025
Business Today Malaysia: Closing the Phishing Gap in the Age of AI (2025)
A major study on Malaysian organisations published in Business Today Malaysia in September 2025 revealed the scale of the local AI phishing crisis:
- The majority of Malaysian organisations reported encountering AI-powered phishing threats in the past year
- Of those affected, more than half reported the volume of attacks had doubled, while nearly a quarter said it had tripled
- Only 19% of Malaysian organisations expressed strong confidence in their ability to defend against AI-powered phishing
- More than one in four admitted their detection capabilities are lagging behind attacker sophistication
- One in five confessed to having no ability to monitor AI-based attacks at all
The report concluded: “The gap between attacker innovation and defender preparedness is widening.”
Source: Business Today Malaysia: Closing the Phishing Gap in the Age of AI
Hoxhunt Phishing Trends Report 2026: The 14x Holiday Surge
Hoxhunt, which analyses phishing threats across more than 4 million users monthly, documented a landmark event in December 2025: a 14-fold surge in AI-generated phishing attacks. AI phishing made up 56% of all threats detected that month, up from just 4% in November.
Hoxhunt’s Co-Founder and CTO Pyry Avist stated: “AI is fuelling a new era of social engineering tactics. This report illustrates how AI-driven insights and automation can directly correlate higher employee engagement to reduced phishing risk.” The finding that carries into 2026: AI phishing is no longer a niche capability. It is now the standard attack method.
Source: Hoxhunt Phishing Trends Report 2026
Brightside AI and World Economic Forum: Deepfake CEO Fraud Data 2025
Deepfake-related fraud losses exceeded USD$200 million in the first quarter of 2025 alone, according to World Economic Forum data. For full-year 2025, deepfake fraud losses in the United States alone reached USD$1.1 billion, tripling from USD$360 million in 2024.
The scale of the threat:
- Voice cloning fraud rose 680% in the past year
- The average loss per deepfake fraud incident now exceeds USD$500,000
- CEO fraud using deepfakes now targets at least 400 companies per day globally
- AI can clone a person’s voice with 85% accuracy using just 3 to 5 seconds of audio from any public source
- 80% of companies have no established protocols or response plans for a deepfake-based attack
Sources: Brightside AI: Deepfake CEO Fraud Report 2025 | WEF Global Cybersecurity Outlook 2025
CyberSecurity Malaysia Cyber999: Phishing Remains Top Threat in Malaysia (2025)
CyberSecurity Malaysia’s Cyber999 Incident Response Centre confirmed that phishing represented 73% of all fraud incidents reported in Malaysia in the most recent reporting quarter, making it by far the leading cyber threat category. Key phishing patterns identified in Malaysia in 2025:
- Government aid scams: AI-generated emails and SMS impersonating bantuan kerajaan programmes to steal personal data and banking credentials
- Brand impersonation: Shopee, Lazada, Maybank, and CIMB are among the most frequently spoofed brands in AI-generated phishing campaigns
- Traffic summons and LHDN scams: Messages claiming unpaid police fines or tax notices directing victims to fake payment portals
- WhatsApp CEO fraud: AI-generated messages impersonating business owners to instruct staff to make urgent transfers
- Vishing (voice phishing): Callers impersonating PDRM, LHDN, and even CyberSecurity Malaysia staff to extract sensitive information
Source: CyberSecurity Malaysia Cyber999 Q4 2024 Incident Report
🇲🇾 How AI Phishing Is Targeting Malaysian Businesses Right Now
🔴 The RM800,000 WhatsApp CEO Fraud
A Kuala Lumpur finance firm received a WhatsApp message impersonating the CEO, requesting an urgent wire transfer of RM800,000 to a new vendor account. The message was AI-generated, trained on the CEO’s writing style from months of captured communications. The transfer was made before the fraud was detected. (Source: SimplyData Malaysia Cybersecurity Landscape 2026)
🔴 The Selangor Manufacturer Supply Chain Attack
A Selangor-based manufacturer received a convincing email appearing to come from an established customer, requesting urgent supplies with updated bank account details. The email referenced real past orders and used the customer’s actual company letterhead design, cloned by AI. Fraudulent payment was processed before verification could be completed.
🔴 Shopee and Lazada Brand Impersonation
Throughout 2025, Malaysian consumers and business procurement staff received AI-generated promotional emails from convincingly spoofed Shopee and Lazada accounts offering exclusive bulk purchase discounts. Clicking the embedded links led to near-identical cloned websites that captured login and payment credentials.
🔴 LHDN and PDRM Vishing Campaigns
Scam callers using AI voice cloning impersonated LHDN (Inland Revenue Board) and PDRM officers, threatening recipients with arrest for unpaid taxes or traffic summons. The calls used realistic hold music, department transfer audio, and convincing official-sounding scripts generated by AI. Many victims paid fraudulent settlements under duress.
💸 The Real Business Cost of AI Phishing
What AI Phishing Really Costs a Malaysian Business
📧 Phishing-Originated Breach Cost
According to IBM’s 2025 Cost of a Data Breach Report, the average cost of a breach that originated from phishing is USD$4.88 million (approximately RM22 million). Phishing is the most expensive breach category, exceeding even ransomware in total downstream costs.
💰 Business Email Compromise (BEC) Losses
BEC caused USD$2.77 billion in reported losses globally in 2024, according to the FBI’s Internet Crime Complaint Center. BEC is the costliest cybercrime category per incident, because it targets high-value financial transactions directly. A single spoofed supplier payment request can redirect hundreds of thousands of ringgit.
🎙 Deepfake Voice Fraud
The average financial loss per deepfake voice fraud incident globally now exceeds USD$500,000. In Malaysia, a single successful AI CEO fraud attack, such as the documented RM800,000 case, can wipe out months of business profit in one transaction.
⏱ Detection Delay Multiplies Cost
According to SimplyData’s 2025 Malaysia Threat Report, the average time to detect a breach in Malaysia is 187 days. AI-generated phishing attacks are designed to avoid triggering automated alerts, meaning they can operate inside an organisation for months before detection. Every additional day of undetected compromise adds to the total cost.
📉 Reputation and Customer Trust
Research shows that 60% of breaches involve a human action as a contributing factor (Verizon DBIR 2025). When phishing succeeds, customer data is often exposed. Under Malaysia’s Personal Data Protection Act (as amended in 2024), the business is responsible for that exposure, regardless of whether it was an employee error.
Phishing does not just start with an email. It ends with a balance sheet.
👁 Why Even Your Most Careful Staff Are Being Fooled
🛡️ What Effective AI Phishing Protection Looks Like
5 AI Phishing Readiness Questions to Ask Your IT Provider
Bring these to your next IT security review. The answers will quickly reveal whether your current defences are keeping pace with the AI phishing threat:
- Does our email security system use AI and behavioural analysis to detect phishing, or does it rely on keyword filters and known malicious domain lists?
- When did our staff last receive phishing simulation training, and what was our click rate on the simulated attacks?
- Do we have a written policy requiring staff to verify any financial transaction or sensitive request received by phone or messaging app through a separate channel before acting?
- Are our high-risk staff, particularly finance team members, trained on AI voice cloning and deepfake video call fraud?
- Does our current MFA solution protect against session cookie theft and adversary-in-the-middle attacks, or is it standard SMS or app-based authentication?
Any gap identified from these questions is a gap that attackers are already aware of and actively exploiting. The good news is that most can be addressed quickly with the right guidance.
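The separate-channel verification policy raised in the questions above can be expressed as a simple rule that finance teams (or an approval system) apply before any transfer. The sketch below is a hypothetical illustration under assumed thresholds; the field names, channel list, and RM10,000 cut-off are placeholders for whatever your own written policy specifies:

```python
from dataclasses import dataclass

# Hypothetical sketch of a dual-channel verification rule: a payment request
# received over email or a messaging app is only actionable after it has been
# confirmed through a separate, independently initiated channel (e.g. a
# call-back to a known phone number).

HIGH_RISK_CHANNELS = {"email", "whatsapp", "sms"}

@dataclass
class PaymentRequest:
    amount_rm: float
    received_via: str            # channel the request arrived on
    verified_via_callback: bool  # confirmed by calling a known number?
    new_payee: bool              # paying an account not seen before?

def may_process(req: PaymentRequest, callback_threshold_rm: float = 10_000) -> bool:
    """Apply the policy: new payees and large amounts always need a call-back."""
    if req.received_via in HIGH_RISK_CHANNELS:
        if req.new_payee or req.amount_rm >= callback_threshold_rm:
            return req.verified_via_callback
    return True

# The RM800,000 WhatsApp scenario: new vendor, no call-back -> blocked.
scam = PaymentRequest(800_000, "whatsapp", verified_via_callback=False, new_payee=True)
print(may_process(scam))  # False
```

The design point is that the rule keys on how the request arrived and where the money is going, not on whether the message “looks genuine”, since AI-generated messages will always look genuine.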
One Action You Can Take Right Now
AI Phishing Is Evolving Daily. Your Defences Should Too.
The best phishing defence is one your staff have actually practised.
We encourage every Malaysian business owner to have an honest conversation with their current IT adviser or cybersecurity provider about AI phishing readiness. Ask whether your email security uses AI-based detection. Ask when staff last completed phishing simulation training. Ask whether your finance team has a verified call-back protocol in place. These are straightforward, practical questions that any good IT partner should be ready to answer.
If you would like a second perspective on your current phishing defences, or would like to understand what AI-era email security looks like in practice, BigBand is available for a no-obligation advisory conversation. We work alongside your existing team, not in replacement of it.
FREE BUSINESS TOOL: Cyber Security Risk Review Checklist
Most organisations are unsure of their actual cyber risk exposure. BigBand’s self-assessment tool evaluates your protection across 7 critical areas and places your organisation into one of four risk levels.