‘Set It and Forget It’ Won’t Cut It: Insurers Warned on AI Oversight

Much of the insurance sector, long hampered by legacy technology, is now undergoing rapid digital transformation. AI, automation and embedded insurance are just some of the technologies driving change in everything from underwriting and claims to customer engagement, leading many firms and industry leaders to rethink their approach.

Having already delved into whether AI-driven claims automation poses any risks, we now turn our attention to insurers' broader use of AI. While the emerging technology is clearly having a dramatic impact on all aspects of the insurance industry, are some firms being tempted to lean too far into it?

Could overuse of or overreliance on AI pose significant challenges, or are most firms already wary enough to avoid falling victim? To find out, we reached out to industry participants to get their take.

Common sense must prevail

Martyn Mathews, MD at broker software house SSP Broker, explains the importance of always ensuring human oversight, even when embracing AI technology.


“When an insurance firm considers the opportunities behind AI, common sense must prevail. There is no doubt we must embrace AI, but we must do so with a careful and considered approach, ensuring consumer trust, regulatory compliance and that human skills in actuarial, underwriting, and claims functions are maintained.

“There is a possibility that insurance industry regulation could, at least initially, be at odds with AI and the regulator’s expectations around pricing might not be met when AI is used to support pricing decisions. Insurance providers have an obligation to make pricing explainable and clear to the regulator. Many firms using AI embed AI explainability frameworks to meet the FCA’s General Insurance Pricing Practices and Consumer Duty requirements. This may become more of a challenge as AI starts to influence more and more aspects of insurance, and this puts consumer trust at risk.

“Overreliance on AI is dangerous in any area of insurance, but this risk is amplified when it comes to the bottom line of pricing and cost. Regulators will simply not accept an explanation that puts the blame on AI. Insurance providers must therefore use human oversight to understand and manage machine learning algorithms.

“The way we see it, no matter how advanced current formats of AI become, it is essential to maintain human oversight. AI should not become the only answer to tricky questions in insurance. It should be used as another string in the bow of a well-equipped insurance provider or broker.”
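The human-oversight step Mathews describes can be made concrete. The sketch below is purely illustrative (not SSP Broker's actual process): AI-suggested premiums that deviate too far from a transparent reference price are queued for an underwriter to review. The 15 per cent tolerance and the field layout are hypothetical.

```python
# Illustrative sketch: route AI-priced quotes to a human underwriter
# when they deviate too far from a transparent reference price.
# The deviation threshold is hypothetical.
def review_queue(quotes, max_deviation=0.15):
    """quotes: list of (quote_id, ai_price, reference_price) tuples.

    Returns the IDs of quotes flagged for human review.
    """
    flagged = []
    for quote_id, ai_price, ref_price in quotes:
        if abs(ai_price - ref_price) / ref_price > max_deviation:
            flagged.append(quote_id)
    return flagged
```

A quote priced within tolerance passes straight through; one priced 30 per cent above the reference would land in the underwriter's queue, keeping a human between the algorithm and the customer.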

Approach with caution

Sam Knott, business development director at insurance software provider Fadata, also appears to agree, explaining why a careful and balanced approach to AI is best.


“The insurance industry is naturally risk-averse. So, although there is a real possibility that AI could be over-relied upon, it is highly unlikely that insurers and insurance tech providers wouldn’t approach AI with extreme caution.

“AI is an excellent process workhorse, reducing mundane workloads, with the potential to enable insurance to be significantly more automated and, if used correctly, in a positive way. It speeds up service provision so that humans can focus on the tasks they are best suited to. However, AI cannot empathise, and although the digital market demands more digital services, losing human interaction completely diminishes customer trust. Insurers that approach AI with both awe and concern are already making good decisions.

“The growing application of AI for insurance also helps to create a wealth of more interesting job roles within an insurance company, attracting, for example, critical IT talent, which is particularly relevant as insurers increasingly look to better utilise internal IT resources for digital transformation.

“The central point to any AI strategy is having a very clear, definable picture of what to achieve. A blanket approach of AI functionality without a clear vision of the purpose will create more issues than it seeks to solve. Understanding this will allow insurers to identify the best areas and use cases of AI. Looking at the core principle of insurance, AI should be used to create operational efficiencies that allow complex risk understanding and human intervention to occur. For example, empower underwriters with tools that allow them to gain a true vision of the risk and remove all the administrative burden.”

AI blind spots

“Yes, especially if AI models aren’t properly trained, governed, or aligned with compliance strategies,” answers Steve Marshall, director of advisory services at FinScan, which provides AML and KYC compliance solutions to financial institutions, insurers, and fintechs.


“While AI may help insurers detect suspicious patterns and streamline onboarding and claims processing, poor quality data or inadequate model tuning can lead to serious blind spots. For example, without historical examples of questionable behaviour, models may miss signs of trade-based money laundering, high-risk third parties, or complex ownership structures. That’s why insurers must pair AI with model risk management – monitoring for drift outside of risk tolerances, validating detected risks, and ensuring explainability.

“A ‘set it and forget it’ approach to AI may put insurers at risk of missing key compliance triggers and falling short of regulatory requirements.”
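To make Marshall's "monitoring for drift outside of risk tolerances" concrete, here is a minimal illustrative sketch, not FinScan's actual tooling. It compares a model's current score distribution against a baseline using the Population Stability Index; the bin count and the 0.2 tolerance are hypothetical conventions.

```python
# Illustrative sketch: flag when a model's score distribution drifts
# outside a risk tolerance, using the Population Stability Index (PSI).
# Assumes scores fall in [0, 1); bin count and tolerance are hypothetical.
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of model scores."""
    edges = [i / bins for i in range(bins + 1)]

    def frac(sample, lo, hi):
        n = sum(1 for s in sample if lo <= s < hi) or 1  # avoid log(0)
        return n / len(sample)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        b, c = frac(baseline, lo, hi), frac(current, lo, hi)
        total += (c - b) * math.log(c / b)
    return total

def needs_review(baseline, current, tolerance=0.2):
    """True when drift exceeds the (hypothetical) risk tolerance."""
    return psi(baseline, current) > tolerance
```

In a "set it and forget it" setup no such check runs; here, a scheduled job comparing this week's scores to the validation baseline would surface drift before it becomes a missed compliance trigger.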

Are insurance firms already being cautious enough? 

However, for Alastair Mitton, partner at law firm RPC, there is little risk of overreliance on AI in the insurance industry. He explains: “At this stage, there is little evidence to suggest that insurance firms are over-relying on AI.

“For instance, the BoE and FCA AI survey found over 60 per cent of reported AI use cases are low risk, with only 16 per cent considered high risk. Industry discussions reflect a cautious stance, with many firms waiting for clearer guidance from regulators before using AI in more sensitive areas.

“A key concern is supply chain risk, particularly around accountability when third-party AI tools fall short. This is a common issue we’re seeing across multiple insurers: the high regulatory standards they must meet are often not reflected in the terms offered by AI vendors. While some insurers are exploring emerging insurance products to help manage these risks, the market is still in its early stages.”

AI FOMO

“There’s definitely a risk, but it’s not the risk most people think,” warns Andrew Harrington, CIO at insurance fintech Ripe. “The real danger isn’t that AI will make mistakes – it’s that companies will implement AI without proper strategy, governance, or understanding of what they’re trying to achieve.


“Too many insurers suffer from ‘AI FOMO’, rushing to implement artificial intelligence because competitors are doing it, rather than because of a clear business case. This can lead to bolted-on solutions that create more problems than they solve.

“At Ripe, we follow a ‘build backwards’ methodology, starting with the end result we want to achieve, and working backwards to determine if AI is the right tool. Sometimes, a simple automation or process improvement can deliver better results than complex AI implementation. Governance is crucial when it comes to AI. Strict guardrails are key, including limitations on what data AI systems can access and clear protocols for human oversight. Even so, regular auditing of AI outputs is essential.

“The companies that will succeed are those that view AI as one tool in a broader technology stack, not as a silver bullet. Smart implementation requires a deep understanding of your data, clear objectives and sustained human oversight. The goal should be augmenting human capabilities, taking on manual, repetitive tasks, not replacing human judgment.

“Done right, AI enhances customer experience and operational efficiency. Done wrong, it creates customer frustration and can lead to regulatory headaches.”
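One of the guardrails Harrington mentions, limiting what data AI systems can access and escalating to human oversight, can be sketched in a few lines. This is an illustration only, not Ripe's implementation; the field names, confidence cut-off and amount threshold are all hypothetical.

```python
# Illustrative sketch of two guardrails: an allowlist restricting which
# record fields ever reach an AI system, and a simple escalation rule.
# Field names and thresholds are hypothetical.
ALLOWED_FIELDS = {"claim_type", "incident_date", "claim_amount"}

def redact_for_ai(record: dict) -> dict:
    """Return only the fields the AI tool is permitted to see."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def requires_human_review(ai_decision: dict, amount_threshold: float = 10_000) -> bool:
    """Escalation protocol: low-confidence or high-value decisions go to a person."""
    return (ai_decision.get("confidence", 0.0) < 0.8
            or ai_decision.get("amount", 0.0) > amount_threshold)
```

The point of the sketch is that the guardrail sits outside the model: whatever the AI does, it never sees disallowed fields, and its riskier outputs are routed to a human by construction.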

Implementing the necessary safeguards

“There’s a temptation to assume GenAI will solve every operational problem,” says Daniel Huddart, CTO at home insurance specialist Homeprotect. “It won’t.


“It can be a powerful productivity tool, but you can’t successfully apply AI to messy or unstructured processes. Before we introduced AI into operations, we spent time building out detailed process maps and documentation, so we knew exactly what the tech was improving or automating. If a task only exists in someone’s head, you can’t train or supervise an AI to do it properly.

“There’s also the issue of trust. If an AI tool gets things right 99 per cent of the time, people might stop questioning that one per cent where it makes a mistake. For complex claims, that’s a real risk. That’s why we’ve separated our AI development into two tracks – one focused on innovation, and one focused on governance and control.

“We’ve also learned that GenAI creates some new challenges. For instance, it can give slightly different results each time, even when fed the same data. This unpredictability makes it harder to test, compare or audit than more traditional models. And the pace of change is rapid. We recently tested a generative AI tool for fraud detection, only for the provider to deprecate it halfway through – meaning it was replaced with a different model without warning. The new one didn’t perform as well, forcing us to start over. It’s a good example of how complex managing GenAI can be behind the scenes.

“Ultimately, generative AI holds huge potential, but it’s still early days in terms of scaling it across insurance operations. To do it properly takes careful planning, the right safeguards, and a lot of investment in both people and tools.”
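Huddart's testing problem, a GenAI system returning slightly different results on identical inputs, admits a common workaround worth illustrating: regression-test outputs against a stored baseline with a similarity tolerance rather than exact string equality. This is a generic sketch under assumed thresholds, not Homeprotect's actual test harness.

```python
# Illustrative sketch: compare nondeterministic model outputs against a
# stored baseline with a tolerance, instead of exact string equality.
# The baseline text and the 0.85 ratio are hypothetical.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough textual similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def output_regressed(baseline: str, new_output: str, min_ratio: float = 0.85) -> bool:
    """Flag a model or version change whose output drifts too far from baseline."""
    return similarity(baseline, new_output) < min_ratio
```

A check like this tolerates the minor wording variation Huddart describes, yet would have caught the silent provider-side model swap he recounts, because the replacement model's weaker outputs fall below the similarity floor.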

The post ‘Set It and Forget It’ Won’t Cut It: Insurers Warned on AI Oversight appeared first on The Fintech Times.
