Can federated intelligence solve the privacy problem in compliance?

In an era where data fuels innovation but privacy regulations like GDPR and CCPA loom large, organizations grapple with a paradox: harnessing collective intelligence without compromising sensitive information. Enter federated intelligence, a decentralized approach to AI that trains models across distributed datasets without ever centralizing raw data, and a potential answer to one of compliance's hardest problems. But can it truly bridge the gap between robust analytics and ironclad privacy?

According to RegTech firm Salv, the best way to define federated intelligence is to think of it as a spectrum.

“On one end, you’ve got data sharing based on customer consent — for example, if I open a new account and tell the bank to pull all my KYC documents from my existing institution. That’s entirely permission-based. It’s not suspicious, it’s not about crime — it’s just a customer-authorised data transfer,” the firm said.

“On the other end of the spectrum is what Salv Bridge enables: data sharing based on strong suspicion. If there’s evidence of fraud, money laundering, or a sanctions hit, that creates a legal basis to share information without customer consent. There’s a documented trigger, a specific purpose, and an audit trail.

“Federated intelligence sits somewhere in the middle. There isn’t yet strong suspicion. It’s not about good behaviour either. You might see something that seems off — something that could become a fraud case down the line — but you’re not ready to open an investigation.”

In Salv's view, this is where federated intelligence becomes useful: it enables institutions to share behavioural insights or detection logic without exposing personal data. This, Salv says, is the principle behind its own federated learning work.

How it works in practice

The Estonian RegTech firm said the clearest example of how federated intelligence works in practice is its Monitoring Rule Library.

The firm said, “Our customers — financial institutions — can create rules that describe suspicious behaviour. For example: if a customer receives funds from a high-risk country, followed by a rapid outbound transfer. These are not full investigations. There’s no sensitive data. They’re just detection patterns that help other institutions improve their own monitoring.”
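As a rough sketch of what such a detection pattern could look like in code, assuming hypothetical field names, thresholds and country codes (this is not Salv's actual rule format):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only: field names, thresholds and country codes are
# assumptions for this sketch, not Salv's actual rule schema.
HIGH_RISK_COUNTRIES = {"IR", "KP"}   # e.g. jurisdictions on a public high-risk list
RAPID_WINDOW = timedelta(hours=24)   # what counts as a "rapid" outbound transfer

@dataclass
class Transfer:
    counterparty_country: str
    timestamp: datetime
    direction: str  # "in" or "out"

def high_risk_passthrough(transfers: list[Transfer]) -> bool:
    """Flag: inbound funds from a high-risk country, followed by an
    outbound transfer within RAPID_WINDOW."""
    inbound = [t for t in transfers
               if t.direction == "in" and t.counterparty_country in HIGH_RISK_COUNTRIES]
    outbound = [t for t in transfers if t.direction == "out"]
    return any(timedelta(0) <= (o.timestamp - i.timestamp) <= RAPID_WINDOW
               for i in inbound for o in outbound)
```

Note that the rule operates purely on transaction patterns; nothing in the logic itself identifies a customer.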

These rules can then be shared – with permission – through the library. “Other institutions can adapt them, use them, or improve them. They can even feed them back into the community with updated parameters. It becomes a kind of open-source AML.

“And soon, we’re making that process even more powerful. In our upcoming roadmap, we’ll be adding AI-powered pattern suggestions that help analysts spot new suspicious behaviours based on shared rule logic. It’s a way to crowdsource insight — without ever sharing data.” 
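One way to picture what actually gets shared: a rule serialized as detection logic plus tunable parameters, with no customer data attached. The schema below is illustrative only and does not reflect Salv's actual Monitoring Rule Library format.

```python
import json

# Illustrative schema: the shared artefact is pure detection logic and
# parameters. Nothing here identifies a customer, so the rule itself
# can circulate between institutions.
shared_rule = {
    "id": "high-risk-passthrough-v2",
    "description": "Inbound from high-risk country followed by rapid outbound transfer",
    "parameters": {
        "country_list": "FATF_HIGH_RISK",  # reference to a public list, not data
        "window_hours": 24,
        "min_amount_eur": 5000,
    },
    "version": 2,  # institutions can feed back tuned parameters as new versions
}

print(json.dumps(shared_rule, indent=2))
```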

The biggest risks

When it comes to the biggest risks of federated intelligence, Salv points to two common misconceptions. The first is that federated intelligence requires a new or exotic legal basis. It doesn't. "If you're not sharing personal data — only logic — then you're not bound by the same constraints," said Salv.

The second is that you need complex infrastructure or cryptographic techniques to make it work. You don’t. “If the right controls are in place, regular encryption works just as well as multi-party computation. It’s not about the tech — it’s about the purpose, the process, and the legal foundation,” Salv explained.
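As a minimal sketch of that "regular encryption" point: the widely used Python cryptography package's Fernet (symmetric, authenticated encryption) is enough to protect a rule payload in transit. How the two institutions exchange the key is a separate concern and assumed to happen out of band here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Symmetric, authenticated encryption of a rule payload in transit.
# Key exchange between the institutions is handled out of band.
key = Fernet.generate_key()
cipher = Fernet(key)

payload = b'{"id": "high-risk-passthrough-v2", "window_hours": 24}'
token = cipher.encrypt(payload)            # what travels over the wire
assert cipher.decrypt(token) == payload    # the receiver recovers the rule
```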

Emerging forms  

A key question is whether other forms of federated intelligence are emerging. Salv believes they are – one example it points to is entity-based scoring.

“Some companies use a model where institutions submit partial suspicion indicators. So if Bank A says, ‘We’re seeing odd behaviour from this person,’ and Bank B independently reports the same thing, the system increases that entity’s risk score — say to minus two or minus five,” said Salv.

The idea, Salv suggests, is that the bank doesn’t see who submitted what – they just see that other institutions have raised concerns. Salv claims it is similar to suspicious entity sharing, in that it lets you see whether someone is raising red flags elsewhere in the system.

“That model can be useful for flagging cases that wouldn’t be visible in isolation, especially when it’s fed back into monitoring or onboarding processes,” suggests the company.
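A toy sketch of how such an aggregator could accumulate a score while keeping submitters anonymous; the weights, threshold and names below are assumptions, not a description of any vendor's system.

```python
from collections import defaultdict

INDICATOR_WEIGHT = -1   # each independent suspicion report lowers the score
REVIEW_THRESHOLD = -2   # e.g. trigger enhanced review once two banks concur

_scores: dict[str, int] = defaultdict(int)
_reporters: dict[str, set[str]] = defaultdict(set)

def submit_indicator(entity_id: str, institution_id: str) -> None:
    """Record one partial suspicion indicator, counted once per institution
    so a single bank cannot inflate the score alone. Submitter identities
    stay inside the aggregator and are never exposed to consumers."""
    if institution_id not in _reporters[entity_id]:
        _reporters[entity_id].add(institution_id)
        _scores[entity_id] += INDICATOR_WEIGHT

def risk_score(entity_id: str) -> int:
    """Consumers see only the aggregate score, not who reported."""
    return _scores[entity_id]

# Bank A and Bank B independently flag the same entity.
submit_indicator("entity-123", "bank-a")
submit_indicator("entity-123", "bank-b")
if risk_score("entity-123") <= REVIEW_THRESHOLD:   # score is now -2
    print("entity-123 flagged for enhanced review")
```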

Regulatory and technical hurdles

Which regulatory or technical hurdles must federated intelligence overcome? For Salv, the most vital place to start is with the legal basis – not the technology.

“You can share data through encrypted messaging, multi-party computation, or even paper and envelopes — it doesn’t matter. If the underlying sharing isn’t lawful, the tech can’t make it lawful,” said the firm.

For Salv, three areas matter. The first is purpose: why is the information being shared, what is the business value, and does it help reduce crime? The second is security: is the data protected against external threats? The third is privacy: does the sharing respect data protection laws and bank secrecy?

“If you have all three in place, federated intelligence becomes not just feasible — but essential,” Salv concluded.
