Many parts of the insurance sector, long hampered by legacy technology, are now undergoing rapid digital transformation. AI, automation, and embedded insurance are just some of the technologies driving change in everything from underwriting and claims to customer engagement, leading many firms and industry leaders to rethink their approach.
When we explored some of the biggest emerging trends in the insurtech industry, one recurring theme was AI. But while its benefits are clear, what risks come with using the technology in the insurance sector? We reached out to the industry to find out.
Bias in automated decision-making
Phillip McGriskin, CEO and founder of Vitesse, a global treasury and payments provider for the insurance industry, highlighted how some of the most important elements of the business-consumer relationship, trust and transparency, can be called into question if poor oversight allows biases to creep into decision-making.
“AI-driven claims automation is clearly reshaping the insurance industry, mainly in enabling faster decisions, reduced costs, and a more responsive customer experience. However, while the upside is considerable, there are still important risks insurers must actively manage.
“Chief among these is the potential for bias in automated decision-making. AI systems trained on historical data can inadvertently replicate or even exacerbate existing inequalities, especially if oversight is lacking. Another key concern is transparency. Many AI models operate as ‘black boxes,’ making it difficult for insurers to explain decisions to regulators or customers, which erodes trust at a time when transparency is paramount.
“Our recent State of Claims Finance report found that AI-powered portals and chatbots are being embraced by 40 per cent of insurers as a key enabler of better service delivery. Notably, just 25 per cent cite productivity gains as their primary goal, which suggests insurers are focused more on elevating customer experience than replacing human expertise.
“The lesson is clear: AI should augment, not replace, judgment, empathy, and accountability. Used thoughtfully, automation can enhance claims without undermining fairness or service. But it must be deployed alongside rigorous oversight, explainability, and continuous evaluation to ensure it truly serves both the business and the customer.”
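The bias risk McGriskin describes can be screened for with simple outcome audits on a model's decisions. A minimal sketch in Python, using synthetic approval data and the common "four-fifths" screening ratio (the groups, figures, and threshold here are illustrative assumptions, not Vitesse's methodology):

```python
# Minimal sketch: auditing automated claims decisions for outcome bias.
# Group labels and decisions are synthetic, illustrative data.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's approval rate to the reference group's.
    A common screening heuristic flags ratios below 0.8 (the 'four-fifths rule')."""
    rates = approval_rates(decisions)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# 80% approvals for group A vs 60% for group B -> B's ratio is 0.75, below 0.8.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 60 + [("B", False)] * 40)
print(disparate_impact(sample, "A"))
```

Audits like this are only a first screen; they catch skewed outcomes after the fact, which is why the quote pairs them with ongoing oversight rather than treating them as a fix.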
Overreliance is a recipe for failure

In addition to risks surrounding trust and transparency, Justin Hwang, COO and head of AI project at RNA Analytics, a global actuarial and risk management consulting firm, noted that data quality can also suffer if AI is relied on too heavily and its training data is not kept up to date.
“AI-powered claims automation offers significant advantages in speed, efficiency, and cost reduction, but it also introduces notable risks. One of the primary concerns is bias: AI systems trained on historical data may unintentionally replicate discriminatory patterns, leading to unfair treatment of certain policyholders.
“Additionally, many AI models lack transparency, making it difficult for insurers to explain or justify automated claim decisions. This lack of explainability can conflict with regulatory requirements, especially in jurisdictions that demand fairness and accountability in automated processes.
“Other key risks include data quality issues, which can skew AI outputs if training data is outdated or incomplete, and over-reliance on automation, which can result in large-scale errors if systems fail without human oversight. There’s also the risk of missing new fraud patterns or eroding customer trust due to impersonal, unexplained decisions.
“To mitigate these challenges, insurers must implement strong AI governance, ensure ongoing human oversight, and maintain robust auditing and monitoring systems to uphold fairness, transparency, and regulatory compliance.”
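The "robust auditing and monitoring" Hwang recommends often begins with checking that live claim data still resembles the data the model was trained on. A minimal sketch of one such check, a crude mean-shift drift alert on a single numeric input (all values and the threshold are illustrative assumptions):

```python
# Minimal sketch: flagging drift between training data and live claim data,
# one small input to the monitoring regime the quote calls for.
from statistics import mean, stdev

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Flag if the live mean drifts more than z_threshold standard errors
    from the training mean (a crude mean-shift check on one feature)."""
    mu, sigma = mean(train_values), stdev(train_values)
    se = sigma / (len(live_values) ** 0.5)
    z = abs(mean(live_values) - mu) / se
    return z > z_threshold, round(z, 2)

# Illustrative claim amounts: live data near the training distribution
# passes; a sharp shift in amounts trips the alert.
train = [1000, 1200, 900, 1100, 1050, 980, 1150, 1020]
live_ok = [1010, 1100, 950, 1080]
live_shifted = [2500, 2700, 2600, 2400]
print(drift_alert(train, live_ok))       # not flagged
print(drift_alert(train, live_shifted))  # flagged
```

In practice insurers would monitor many features with more robust statistics (e.g. population stability indices), but the principle is the same: stale or shifted data should trigger review before the model's outputs are trusted.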
Need for an emotional element
No matter how good AI is in its current state, one thing it cannot do is account for emotions. Rajeev Gupta, co-founder and chief product officer at Cowbell, the cyber insurance firm, highlights that in some instances a human touch is needed in the insurance sector, and in those moments AI’s automated response will harm the consumer’s interaction more than help it.
“Yes, I believe overreliance on any tool brings risk, and AI-driven claims automation is no exception. The biggest risk is that pure automation can’t grasp the nuance of a real crisis – where there’s always a human, emotional element at play. That’s why we believe AI’s role should be that of a ‘co-pilot’, where it augments our expert claims handlers. In complex claims like cyber, AI should support efficiency by handling routine tasks, but the response itself must be managed by trained, in-house cyber claims experts who can provide the context-sensitive guidance a business needs to recover.”
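The "co-pilot" pattern Gupta describes is often implemented as simple routing rules in front of the model: automation clears routine claims, and anything complex or low-confidence is escalated to a human handler. A minimal sketch (the fields, thresholds, and categories are illustrative assumptions, not Cowbell's system):

```python
# Minimal sketch: routing claims between automation and human experts.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float            # claimed amount
    category: str            # e.g. "glass", "cyber"
    model_confidence: float  # 0..1 score from the automated assessor

def route(claim, auto_limit=5000.0, min_confidence=0.9,
          human_categories=("cyber",)):
    """Return 'auto' for routine claims; 'human' when the category,
    amount, or model confidence calls for expert review."""
    if claim.category in human_categories:
        return "human"  # complex lines always get a trained handler
    if claim.amount > auto_limit or claim.model_confidence < min_confidence:
        return "human"  # large or uncertain claims escalate
    return "auto"

print(route(Claim(800.0, "glass", 0.97)))   # routine -> "auto"
print(route(Claim(1200.0, "cyber", 0.99)))  # cyber always escalated -> "human"
```

The design choice is that automation handles the routine path by default, while the escalation rules encode exactly the judgment calls the quote says must stay with people.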

Echoing similar thoughts, Charles Clarke, group vice president at Guidewire, a P&C insurance software and technology provider trusted by more than 570 insurers in 42 countries, added: “AI and automation deliver efficiencies and improve speed, but this should not come at the expense of customer satisfaction or accurate decision making.
“That is the risk, which is real and present. It’s critical that insurers build human interaction into the process for when customers want it, and leave the customer on the automated sunny-day path when they don’t. That is a hard balance to strike. Insurers also need to implement a thorough data governance program to guarantee that any decisions made are truly unbiased and transparent.”
Integration with legacy infrastructure

Manoj Pant, senior director, strategy and business development, at Pegasystems, the AI-powered decisioning and workflow automation platform, notes the damage irresponsible AI usage can do to the sector. He also points out that integration with legacy systems is another hurdle to overcome in a traditionally slow-moving industry.
“Yes, AI claims automation still poses significant risks that insurers must carefully manage, as the AI is only as good as the data it is trained with.
“One major concern is algorithmic bias, where AI models may perpetuate or even amplify existing biases, leading to unfair claim assessments or discriminatory outcomes. Maintaining model accuracy will require continuous monitoring, training with new data and updating of AI models to ensure consistency in decisions and outcomes.
“There’s also the risk of overreliance, as heavy dependence on AI across large operational areas could create systemic vulnerabilities if the technology fails or makes incorrect decisions. With increasing scrutiny from regulators, it is essential for insurers to ensure that their AI systems are transparent and explainable.
“Integration challenges also persist, particularly when connecting AI systems with legacy infrastructure, which can introduce technical vulnerabilities and operational disruptions.”