At this year’s Credit Scoring and Credit Control Conference in Edinburgh, colleagues Ben Archer and Peter Szocs presented on a topic gaining significant attention: how federated learning can support banks in addressing fraud.
Fraud has risen sharply in recent years, both in scale and sophistication. Cifas reported more than 217,000 cases to the National Fraud Database in the first half of 2025 alone — the highest figure ever recorded. Over 118,000 were identity fraud, increasingly
involving synthetic IDs, while misuse of facility cases rose by 35%, highlighting the role of mule accounts.
Together, these pressures underline the need to detect fraud earlier, even as regulators expect firms to minimise disruption for genuine customers. Collaboration is vital, yet privacy laws, security concerns and operational barriers have long slowed information
sharing. This is where federated learning shows potential.
Why traditional data sharing is limited in fraud prevention
Three factors continue to limit traditional approaches to fraud information sharing:
- Fraud is cross-institutional. Synthetic IDs and mule networks exploit multiple banks, but most detection remains siloed.
- Privacy and security introduce friction. Laws and operational concerns slow information exchange to the point that criminals can exploit delays.
- Data fragmentation weakens insight. Each bank sees only part of the picture, missing connections that could enable earlier intervention.
These barriers don’t make collaboration impossible, but they do highlight the need for a different model: one that allows banks to combine insight while protecting sensitive customer data. This is the space where federated learning has the potential to add real value.
A hybrid approach: data sharing plus federated learning
A pragmatic path forward is a hybrid strategy. Traditional data sharing still has value, particularly at an aggregated level for compliance and transparency. But federated learning adds a further dimension.
Here, sensitive customer data remains within each bank. Each institution trains locally on its own internal data, including confirmed fraud cases, and only the resulting model updates are shared for collaborative aggregation. The result is a privacy-preserving framework that complements, rather than replaces, existing practices.
The outcome is stronger, earlier detection with less regulatory friction.
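To make the flow concrete, here is a minimal sketch of the federated averaging pattern described above, using plain NumPy and a simple logistic-regression fraud model. The bank datasets, feature count and function names are illustrative assumptions, not the implementation Ben and Peter presented.

```python
# Minimal federated averaging sketch (illustrative only).
# Each "bank" trains a simple logistic-regression fraud model on its own
# confirmed-fraud labels; only the model weights leave the institution.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """Gradient descent run locally; raw data (X, y) never leaves the bank."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid fraud scores
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Coordinator combines model updates, weighted by local sample counts."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Synthetic stand-ins for three banks' internal features and confirmed-fraud labels.
banks = [(rng.normal(size=(200, 5)), rng.integers(0, 2, 200)) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):                            # communication rounds
    local_weights = [local_update(global_w, X, y) for X, y in banks]
    global_w = federated_average(local_weights, [len(y) for _, y in banks])

print("Global fraud-model weights:", np.round(global_w, 3))
```

In a real deployment the coordinator would typically add protections such as secure aggregation or differential privacy to the shared updates, but the core idea is the same: model weights travel, raw transactions do not.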
Why federated learning is gaining traction
The method offers clear advantages:
- Privacy by design: customer-level data stays in place.
- Scalability: models can be trained across many institutions simultaneously.
- Adaptability: the approach evolves as fraud tactics change.
Most importantly, no single bank has a complete view of the problem. By combining intelligence across firms without exposing raw data, federated learning produces a more accurate and timely understanding of risk.
Federated learning also does not require banks to rip out existing fraud systems. It can be layered on top of current defences, running alongside traditional data-sharing frameworks. This makes it a privacy-preserving form of collaboration that strengthens what banks already have, rather than asking them to start again.
Lessons from proof-of-concept testing
As Ben and Peter emphasised, testing federated learning in practice is essential. It demonstrates functionality but also reveals practical hurdles.
Their own work highlighted three key areas:
- Technical: ensuring data consistency across institutions (a minimal check is sketched below).
- Operational: integrating federated models with existing fraud systems.
- Cultural: overcoming reluctance to collaborate and concerns about reciprocal benefit.
None of these areas can be ignored, but all of them can be overcome. This mirrors wider industry studies, such as those carried out by SWIFT and IBM, which found that federated models can perform at levels close to centralised ones. Taken together, these
projects show that the method is workable and deserves further exploration across the industry.
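On the first of those areas, the kind of data-consistency check involved can be sketched in a few lines: before joining a training round, each institution confirms it exposes the agreed feature schema. The field names and helper below are hypothetical, intended only to show the shape of the problem rather than any particular bank's feature set.

```python
# Illustrative consistency check: each institution validates its locally
# declared feature schema against the agreed one before a federated round.
from typing import Dict, List

EXPECTED_SCHEMA: Dict[str, str] = {
    "txn_amount": "float",
    "txn_hour": "int",
    "account_age_days": "int",
    "is_cross_border": "bool",
}

def validate_schema(local_schema: Dict[str, str]) -> List[str]:
    """Return a list of mismatches so a bank can resolve them before joining."""
    issues = []
    for name, dtype in EXPECTED_SCHEMA.items():
        if name not in local_schema:
            issues.append(f"missing feature: {name}")
        elif local_schema[name] != dtype:
            issues.append(f"dtype mismatch for {name}: "
                          f"expected {dtype}, got {local_schema[name]}")
    for extra in sorted(set(local_schema) - set(EXPECTED_SCHEMA)):
        issues.append(f"unexpected feature: {extra}")
    return issues

# Example: one bank's declared schema with a deliberate mismatch.
print(validate_schema({"txn_amount": "float", "txn_hour": "str",
                       "account_age_days": "int", "is_cross_border": "bool"}))
```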
Is federated learning the future of financial crime prevention?
Over the next three to five years, we believe federated learning has the potential to move from testing into mainstream adoption. Fraud is evolving too quickly for any single institution to tackle alone, and a privacy-preserving, collaborative method is one of the most promising directions for the future of fraud prevention.