Southeast Asia has become a global epicenter of cyber scams, where high-tech fraud meets human trafficking. In countries like Cambodia and Myanmar, criminal syndicates run industrial-scale “pig butchering” operations—scam centers staffed by trafficked workers forced to con victims in wealthier markets like Singapore and Hong Kong.
The scale is staggering: one UN estimate pegs global losses from these schemes at $37 billion. And it could soon get worse.
The rise of cybercrime in the region is already having an effect on politics and policy. Thailand has reported a drop in Chinese visitors this year, after a Chinese actor was kidnapped and forced to work in a Myanmar-based scam compound; Bangkok is now struggling to convince tourists it’s safe to come. And Singapore just passed an anti-scam law that allows law enforcement to freeze the bank accounts of scam victims.
But why has Asia become infamous for cybercrime? Ben Goodman, Okta’s general manager for Asia-Pacific, notes that the region has some unique dynamics that make scams easier to pull off. For example, it is a “mobile-first market”: popular mobile messaging platforms like WhatsApp, Line, and WeChat give scammers a direct line to their victims.
AI is also helping scammers overcome Asia’s linguistic diversity. Goodman notes that machine translation, while a “phenomenal use case for AI,” also makes it “easier for people to be baited into clicking the wrong links or approving something.”
Nation-states are getting involved too. Goodman points to allegations that North Korea is planting fake employees at major tech companies to gather intelligence and funnel much-needed cash into the isolated country.
A new risk: ‘Shadow’ AI
Goodman is worried about a new risk posed by AI in the workplace: “shadow” AI, or employees using private accounts to access AI models without company oversight. “That could be someone preparing a presentation for a business review, going into ChatGPT on their own personal account, and generating an image,” he explains.
This can lead to employees unknowingly uploading confidential information onto a public AI platform, creating “potentially a lot of risk in terms of information leakage.”
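To make the leakage risk concrete, here is a minimal sketch of an outbound-prompt gate, the kind of control a company might place between employees and public AI services. The patterns, function names, and block-on-match policy below are illustrative assumptions, not Okta’s product or any vendor’s actual API.

```python
import re

# Hypothetical data-loss-prevention (DLP) patterns a company might scan for
# before a prompt leaves the corporate network for a public AI service.
# Both the patterns and the block-on-match policy are illustrative only.
CONFIDENTIAL_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_marking": re.compile(r"\bCONFIDENTIAL\b|\bINTERNAL ONLY\b", re.IGNORECASE),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of every confidential pattern found in an outbound prompt."""
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items() if pattern.search(prompt)]

def gate_outbound_prompt(prompt: str) -> str:
    """Refuse to forward a prompt that appears to contain confidential material."""
    hits = scan_prompt(prompt)
    if hits:
        raise PermissionError(f"blocked: prompt matched confidential patterns {hits}")
    return prompt  # safe to forward to the external AI service

if __name__ == "__main__":
    try:
        gate_outbound_prompt("Summarize this INTERNAL ONLY revenue deck for my slides")
    except PermissionError as err:
        print(err)  # blocked: prompt matched confidential patterns ['internal_marking']
```

Even this crude check illustrates why shadow AI bypasses governance: the filter only works when traffic flows through corporate infrastructure, which a personal account on a personal device never touches.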

Agentic AI could also blur the boundary between personal and professional identities, for example when an agent is tied to your personal email rather than your corporate one. “As a corporate user, my company gives me an application to use, and they want to govern how I use it,” he explains.
But “I never use my personal profile for a corporate service, and I never use my corporate profile for personal service,” he adds. “The ability to delineate who you are, whether it’s at work and using work services or in life and using your own personal services, is how we think about customer identity versus corporate identity.”
And for Goodman, this is where things get complicated. AI agents are empowered to make decisions on a user’s behalf, which means it’s important to define whether a user is acting in a personal or a corporate capacity.
“If your human identity is ever stolen, the blast radius in terms of what can be done quickly to steal money from you or damage your reputation is much greater,” Goodman warns.
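A toy sketch of that personal-versus-corporate delineation is below. The Identity and AgentAction types and the one-line policy are assumptions made up for illustration; they do not represent Okta’s actual data model or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    subject: str  # e.g. "alice@corp.example" or "alice@gmail.com"
    realm: str    # "corporate" or "personal"

@dataclass(frozen=True)
class AgentAction:
    description: str
    required_realm: str  # which identity realm may authorize this action

def authorize(delegator: Identity, action: AgentAction) -> bool:
    """An agent acting on someone's behalf may only perform actions in the
    realm of the identity that delegated it; crossing realms is refused."""
    return delegator.realm == action.required_realm

corporate_alice = Identity("alice@corp.example", "corporate")
personal_alice = Identity("alice@gmail.com", "personal")
approve_invoice = AgentAction("approve a vendor invoice", "corporate")

print(authorize(corporate_alice, approve_invoice))  # True
print(authorize(personal_alice, approve_invoice))   # False: personal profile, corporate action
```

The point of keeping the realms separate is exactly Goodman’s: if either identity is compromised, the damage is contained to one realm instead of spilling across both.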