AI is changing the world – from how we shop and bank to how we manage compliance and mitigate risk. But as businesses race to harness the potential of AI, one uncomfortable truth is becoming harder to ignore – not all technology can be trusted, claims Phil Cotter, CEO of SmartSearch.
The recent scandal surrounding Builder.ai – once a high-flying UK-based startup valued at $1.5 billion – is a cautionary tale for anyone working in a regulated industry. The company’s much-hyped AI platform promised to build apps at the push of a button, guided by a virtual assistant called Natasha. In reality, according to whistleblowers and investigative reporting, the ‘AI-built’ software was mostly created by 700 human engineers in India, with little actual automation involved.
This is what’s now being referred to as AI-washing – when companies promote their services as being driven by artificial intelligence, while delivering little to no genuine AI functionality. In some cases, this is done to attract investment or press coverage; in others, to secure contracts with clients looking to appear innovative. Regardless of the motivation, the result is the same: eroded trust and increased risk for businesses that rely on false promises.
If this sounds familiar, it’s because we’ve seen it before. The crypto boom – and subsequent bust – was riddled with exaggerated claims, opportunistic founders, and a lack of regulatory clarity. AI is now following a similar trajectory: rapid growth, minimal oversight, and enormous investor appetite. But when regulation lags behind innovation, businesses – especially those in regulated sectors like property, legal services, or finance – are left exposed.
The issue isn’t AI itself. When used properly, AI is a powerful force for good – particularly in fields like anti-money laundering (AML) and compliance, where automation, pattern recognition, and data analysis can significantly reduce risk and improve efficiency. The problem is that without clear standards, anyone can claim to be using AI – and many do, with little evidence to back it up.
In regulated industries, that’s more than just a marketing problem. It’s a compliance issue. If you’re using AI to verify identities, screen for risk, or conduct due diligence, then the technology behind it needs to be explainable, auditable, and aligned with your regulatory obligations. If it’s not – and if you can’t prove how it works – then it’s not just your reputation on the line; it could be your licence.
For businesses operating in sectors like estate agency, legal services, or financial advice, embracing AI is no longer optional. Regulatory pressures are increasing, customer expectations are rising, and traditional, manual methods simply can’t keep pace with the demands of modern compliance. But the decision to adopt AI shouldn’t be driven by marketing hype or glossy promises. It should be driven by trust – in the technology, in the provider, and in the evidence.
That means asking tough questions: Is the AI explainable and auditable? Can it demonstrate compliance with existing and future regulations? Is the provider experienced in regulated markets?
At SmartSearch, we believe that’s the only responsible path forward for AI. We’ve been innovating in digital compliance for over two decades. Our latest advancement, the enhanced SmartDoc identity validation technology—developed in partnership with Daon—uses biometric verification, facial matching, and advanced AI fraud detection to deliver a secure, transparent, and regulator-trusted AML solution, with auditable checks and explainable results built in by design.