iProov has sounded the alarm on a highly coordinated cybercrime wave sweeping across Asia-Pacific’s financial sector. At the centre of this threat is Grey Nickel, a sophisticated cybercriminal group using deepfakes, synthetic identities, and AI-powered attack tools to breach banks, crypto exchanges, and digital payment platforms.
What makes Grey Nickel especially dangerous isn’t just their technology; it’s their precision.
These are not opportunistic hackers; they’re running well-planned operations designed to outmanoeuvre outdated security systems and exploit weak KYC protocols. As APAC’s digital economy accelerates, these kinds of attacks are becoming alarmingly common, and far harder to detect.
A Global Threat With an Asia-Pacific Focus
While Grey Nickel’s operations have stretched into North America and Europe, iProov’s investigation shows the Asia-Pacific region remains their main arena.
The group has been active since mid-2023, running coordinated campaigns that exploit weaknesses in remote identity verification systems. Their techniques aren’t just smart. They’re built to outpace current defences.
“These criminal groups understand that banking, crypto exchanges, e-wallets, and digital payment platforms represent some of the highest-value targets for identity fraud,” said Dr. Andrew Newell, Chief Scientific Officer at iProov.
“These aren’t opportunistic attacks—they’re existential threats to digital banking,” he continued.
The region’s fintech ecosystem is growing at lightning speed, but that growth has outpaced regulation and security in many places.
Fragmented compliance requirements, inconsistent reporting standards, and widespread adoption of remote onboarding all combine to make APAC an ideal testing ground for cybercriminals.
What makes Grey Nickel’s approach so dangerous is how industrialised it has become.
This isn’t a lone hacker in a basement. Based on the research, it’s a full-fledged operation engineered to fake identity verification at scale. The group uses a blend of face-swap technology, metadata manipulation, and virtual camera applications to simulate real-time KYC processes, deceiving even well-defended platforms.
According to iProov, the threat doesn’t stop with one technique.
It spans a network of interrelated tools and services. Criminals now deploy advanced mobile apps capable of injecting pre-recorded videos into ID verification processes. These apps are often used in tandem with Deepfake-as-a-Service platforms, which offer customisable AI-generated personas complete with convincing visuals and behaviours.
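One simple defence-side countermeasure against virtual-camera injection, sketched below purely for illustration (it is not drawn from iProov’s research), is to flag capture devices whose reported names match known virtual-camera software. The signature list is a hypothetical example, and since attackers can rename devices, this is only a weak first signal, not a complete defence:

```python
# Illustrative heuristic: flag capture devices whose reported names match
# known virtual-camera software. The signature list is a hypothetical example;
# determined attackers can rename devices, so treat this as one weak signal
# to combine with stronger liveness checks, never as a standalone control.
KNOWN_VIRTUAL_CAMERA_SIGNATURES = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "virtual cam",
)

def looks_like_virtual_camera(device_name: str) -> bool:
    """Return True if the camera's reported name matches a known
    virtual-camera signature (case-insensitive substring match)."""
    name = device_name.lower()
    return any(sig in name for sig in KNOWN_VIRTUAL_CAMERA_SIGNATURES)

if __name__ == "__main__":
    print(looks_like_virtual_camera("OBS Virtual Camera"))  # True
    print(looks_like_virtual_camera("Integrated Webcam"))   # False
```

In practice such device-name checks are paired with metadata consistency analysis and active liveness detection, since name matching alone is trivially evaded.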
To make matters worse, open-source AI tools are being abused to produce hyper-realistic video and audio clips that can easily defeat conventional liveness checks.
Some of these tools have even advanced to the point of simulating accurate lip-syncing, allowing attackers to bypass voice-based authentication systems.
Altogether, it’s not just about faking a face; it’s about crafting a digital persona so convincing that even human reviewers and traditional technology struggle to detect the fraud.
The Rise of Frankenstein Fraud Is Turning Fiction Into Reality
This wave of attacks is being powered by a sinister trend called synthetic identity fraud, dubbed “Frankenstein Fraud” by iProov. Unlike old-school identity theft, this involves creating completely new digital identities by stitching together real and fake information.
Now, imagine giving that identity a lifelike face, voice, and movement using generative AI and deepfake tech. That’s exactly what cybercriminals are doing. And once a synthetic identity is inside the system, it’s nearly impossible to remove.
These fake identities aren’t just passing KYC. They’re also opening credit accounts, taking loans, and committing fraud for years before vanishing. In the U.S., synthetic identity fraud already accounts for up to 85% of all identity fraud cases.
APAC is on the same trajectory.
Many financial platforms still rely on outdated liveness detection tech that can only catch a static image or a fake document. That’s a big problem when criminals are injecting full-motion video directly into the verification stream.
Even worse, some use piggybacking tactics, linking synthetic identities to real customers’ credit accounts to build credibility before busting out. Because the data looks legitimate, these tactics often fly under the radar until it’s too late.
Fighting Fire with Fire Might Be the Only Way Forward
Fragmented compliance requirements and inconsistent reporting standards make it hard to understand the full scale of the threat or to coordinate a meaningful defence. As iProov points out, these gaps are being exploited every day, and cybercriminal innovation is outpacing regulatory enforcement.
So what can be done?
According to iProov, it starts with better biometric technology: specifically, advanced liveness detection that can determine whether a user is actually present in real time. These tools analyse micro-expressions, subtle movements, and behavioural cues to sniff out injected video or manipulated feeds.
Some cloud-based platforms even offer continuous monitoring, which could become the gold standard in a world where fraud happens mid-session, not just during sign-up.
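To make the idea behind active liveness detection concrete, here is a minimal, hypothetical sketch of a challenge-response flow: the server issues a short-lived, single-use challenge (for example, a random on-screen prompt), and verification passes only if the response echoes that exact challenge before it expires. Pre-recorded injected video cannot anticipate the challenge, which is the core idea. Real systems, iProov’s included, are far more sophisticated; nothing below reflects an actual vendor API:

```python
import secrets
import time

class ChallengeLivenessCheck:
    """Toy active liveness flow: the server issues a short-lived,
    single-use challenge; a response passes only if it echoes that
    exact challenge before it expires. This defeats replayed or
    pre-recorded footage, which cannot anticipate a random challenge."""

    def __init__(self, ttl_seconds: float = 10.0):
        self.ttl = ttl_seconds
        self._pending: dict[str, float] = {}  # challenge -> expiry time

    def issue_challenge(self) -> str:
        """Generate a random challenge valid for ttl_seconds."""
        challenge = secrets.token_hex(8)
        self._pending[challenge] = time.monotonic() + self.ttl
        return challenge

    def verify(self, challenge: str) -> bool:
        """Pass only if the challenge is known, unexpired, and unused.
        pop() removes it, so each challenge can succeed at most once."""
        expiry = self._pending.pop(challenge, None)
        return expiry is not None and time.monotonic() <= expiry
```

A production system would bind the challenge to the video stream itself (for example, verifying that the prompted gesture appears in the captured frames), but the single-use, time-boxed nonce is the backbone of the approach.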
As Gartner puts it, “Liveness detection technologies are becoming critical for defending against deepfakes and verifying the genuine presence of an individual.”
Time Is Running Out for Financial Institutions to Act
APAC’s digital economy continues to accelerate. But without urgent upgrades in fraud prevention, that growth could be undermined by a rising wave of AI-enabled attacks.
The lesson from Grey Nickel is clear: the fraudsters are evolving. If financial institutions don’t evolve with them, they’ll find themselves constantly playing catch-up.
And in this new reality, the biggest threat to your platform isn’t a password breach or stolen credit card.
It’s a perfectly crafted synthetic identity. Complete with a face, a voice, and a plan to disappear with millions.
Featured image: Edited by Fintech News Singapore, based on images by Freepik.