A couple of years after its initial boom, artificial intelligence (AI) remains a huge buzzword in the fintech industry, as every firm looks for new ways to integrate the technology into its infrastructure and gain a competitive edge. Exploring how they are going about doing this in 2025, The Fintech Times is spotlighting some of the biggest themes in AI this February.
Regulations are a big talking point in the AI world, with different countries taking different approaches to policing the technology. Yet even a fully compliant company with the best intentions can experience an AI failure. From a financial decision-making perspective, what impact would such a failure have? Are firms so reliant on the tech that they would find themselves lost after a failure? We hear from industry experts to find out.
Monitoring AI to ensure failures can be addressed early
Maya Mikhailov, CEO at SAVVI AI, the firm helping organisations deploy AI, notes the different ways in which AI can fail a company in the decision-making process. She explains that simply implementing AI isn’t enough to keep the technology performing at its best – it must be continuously monitored.
“There are several types of failure when it comes to machine learning in financial decision-making – bias due to quality issues in the underlying data sets, data drift due to a lack of model retraining, and outlier scenarios such as ‘black swan’ events.
“The most basic failure is if the model is trained on a bad historic data set that has encoded biases in it – these aren’t necessarily social biases, they can also be poor decision-making by people that becomes encoded in the data and then reflected in the model.
“As well, sometimes models fail due to data drift – when the historic patterns they are trained on no longer apply or change. For example, if a model is built to predict loan delinquency and interest rates start rising or falling, the historic pattern no longer reflects reality. The model may start seeing increasing errors in its ability to accurately predict delinquency if it is not retrained on these new changing conditions.
“Finally, models struggle with things they’ve never seen before – think Covid. Black swan events often cause failures because there was no data to train on.
“In a well-built AI system, back-testing, guardrails and continuous retraining are key to preventing failure or correcting errors. Of all the types of AI, ML is the most established and commonly used in financial decision-making, so firms are better equipped to manage ML outcomes and failures.”
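To make Mikhailov’s points about data drift and continuous retraining concrete, the minimal Python sketch below shows how a firm might monitor a delinquency model’s score distribution and trigger a refit when it drifts. It assumes a scikit-learn-style model whose scores are probabilities and access to recent labelled outcomes; the population stability index check and the 0.25 threshold are illustrative choices, not a description of SAVVI AI’s system.

```python
# Illustrative sketch only: monitor score drift on a delinquency model and retrain.
# The model interface, data and 0.25 threshold are assumptions, not SAVVI AI's method.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare training-time scores with recent scores (both are probabilities)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)   # avoid log(0) on empty bins
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def check_and_retrain(model, X_train, y_train, X_recent, y_recent, threshold=0.25):
    """Refit the model when today's scores drift away from training-time scores."""
    train_scores = model.predict_proba(X_train)[:, 1]
    recent_scores = model.predict_proba(X_recent)[:, 1]
    psi = population_stability_index(train_scores, recent_scores)
    if psi > threshold:
        # Historic patterns no longer reflect reality - refit on fresh outcomes too.
        model.fit(np.vstack([X_train, X_recent]), np.concatenate([y_train, y_recent]))
    return psi
```

In practice, a check like this would sit alongside the back-testing and guardrails Mikhailov mentions, so retraining is triggered by evidence of drift rather than left to a fixed schedule.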
Over-reliance can be costly

According to James Francis, CEO at Paradigm Asset Management, the asset management firm, one of the biggest impacts an AI failure can have on a company is draining its resources. Exploring how this can be avoided, he says: “Sometimes even the cleverest AI can err – sort of like when your computer freezes in a game.
“Erroneous financial decisions made by artificial intelligence can be costly and cause great stress. I have seen businesses become too dependent on AI, forgetting that people need to keep an eye on things. This is why at Paradigm we combine smart people with intelligent technology. It is like running a superhero squad where every member has special skills. We see to it that artificial intelligence aids us but does not take over.
“Exciting though it may be to apply AI in finance, we always remain careful to balance technology with wise, old-fashioned human judgment. In the end, even robots desire a friend.”
Excluding honest customers

AI has the potential to make the customer experience incredibly simple and enjoyable. From a lending perspective, however, if AI is misused, those deserving of a loan may be denied one. Yaacov Martin, co-founder and CEO at Jifiti, the embedded lending platform, explains how humans must oversee the technology to make sure customers never miss out on offers they are entitled to.
“When AI fails in financial decision-making for consumer and business lending, the consequences can be significant, impacting all stakeholders. While AI-powered lending has the potential to accelerate credit assessments, improve risk management and personalise loan offerings, these benefits can come with risks if not overseen correctly and if over-relied upon by banks and lenders.
“Although AI applies much wider data parameters, fast-tracks processes, is more advanced than traditional algorithms, and ‘teaches’ itself based on past performance patterns, it runs the risk of operating as a ‘black box’, which makes decisions difficult to scrutinise and can lead to decision-making failures.
“Its reliance on historical data patterns and lack of subjective ‘human’ oversight can reinforce biases, potentially denying credit to deserving individuals. Lenders placing too much trust in AI without proper oversight and regulation risk exposing borrowers to privacy concerns and unfair lending outcomes.
“Regulation is crucial to safeguard transparency, fairness and data security, and to provide checks and balances. Additionally, to ensure that the provision of credit is indeed in line with the lender’s principles and to avoid colossal failures, there is a definite need for periodic sampling by a human.
“As AI becomes more prevalent in lending, financial institutions must avoid complacency and prioritise ethical implementation.”
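Martin’s call for ‘periodic sampling by a human’ can be pictured as a simple review queue that pulls a slice of automated lending decisions back in front of a person. The Python sketch below is a hypothetical illustration, not Jifiti’s implementation: the five per cent sample rate, the borderline-score rule and the data fields are all assumptions.

```python
# Illustrative sketch only: route a random sample of automated credit decisions,
# plus any borderline scores, into a human review queue. All details are assumed.
import random
from dataclasses import dataclass, field

@dataclass
class CreditDecision:
    application_id: str
    approved: bool
    score: float             # model's probability that the applicant is creditworthy

@dataclass
class ReviewQueue:
    sample_rate: float = 0.05                    # review roughly 5% of decisions
    pending: list = field(default_factory=list)

    def record(self, decision: CreditDecision) -> None:
        # Borderline scores always get a human look; the rest are sampled at random.
        if abs(decision.score - 0.5) < 0.05 or random.random() < self.sample_rate:
            self.pending.append(decision)

queue = ReviewQueue()
queue.record(CreditDecision("app-001", approved=False, score=0.48))
print(len(queue.pending))   # 1 - the borderline rejection is queued for human review
```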
Undertaking a journey with AI doesn’t need to be done alone

Vikas Sharma, senior vice president and practice lead for banking and capital markets at EXL, the digital acceleration partner, highlights a key point that firms need to understand before even applying AI: becoming an expert in the tech doesn’t happen overnight, so to make sure failures are avoided, companies should look to partner with experts.
“The risks associated with AI failure in financial decision-making are far too grave to not account for safeguards and governing controls. These risks include but are not limited to customer funding impact, regulatory risk, reputational damage and operational challenges. Without reliable controls and a scalable framework, smaller failures may cascade to cause systemic instability and significant financial losses.
“As the financial industry races to incorporate AI into its processes and products, fintechs are at the forefront of this change. Fintechs are constantly experimenting to close the data gap they have with their big banking peers – and the advent of AI promises to be the solution.
“Our experience at EXL suggests that most fintechs should kick off their AI initiatives with a partner firm that specialises in assessing, designing and implementing scalable AI roadmaps. The first step to implementing these roadmaps is to set up clear guardrails and define an AI framework with humans in the loop. Integrating human oversight into every critical decision point increases accountability and mitigates possible failures.
“After all, firms realise that they are using this innovative technology to grow their member base and improve customer satisfaction – both of which will be impacted if strong governance controls are missing.”
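One way to picture the ‘guardrails’ and ‘humans in the loop’ framework Sharma describes is a decision gate that only lets an automated recommendation complete when it clears explicit limits, and escalates everything else for human sign-off. The sketch below is purely illustrative; the amount limit, confidence threshold and field names are assumptions, not EXL’s framework.

```python
# Illustrative sketch only: a guardrail that escalates large or low-confidence
# automated lending decisions to a human. Thresholds and names are assumptions.
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    AUTO_APPROVE = "auto_approve"
    ESCALATE_TO_HUMAN = "escalate_to_human"

@dataclass
class LoanRequest:
    amount: float
    model_confidence: float   # model's confidence in its own recommendation

def decision_gate(request: LoanRequest,
                  max_auto_amount: float = 25_000,
                  min_confidence: float = 0.9) -> Outcome:
    """Guardrail: large or uncertain decisions never complete automatically."""
    if request.amount > max_auto_amount or request.model_confidence < min_confidence:
        return Outcome.ESCALATE_TO_HUMAN
    return Outcome.AUTO_APPROVE

print(decision_gate(LoanRequest(amount=40_000, model_confidence=0.95)))
# Outcome.ESCALATE_TO_HUMAN - the amount exceeds the automatic limit
```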
Robust frameworks

Mark Dearman, director of industry banking solutions at FintechOS, the firm offering a low-code approach to help others digitise, also notes the different types of failure that can occur when fintechs rely on AI too much, and shares his solution. He explains: “The likely consequences of AI failures in decision-making raise significant concerns about overreliance on these technologies. For example, there is a worrying possibility that some companies may become dependent on AI systems without maintaining robust human oversight.
“Some financial institutions have reduced their human risk management teams, creating potential gaps in the monitoring of AI systems and dangerous single points of failure.
“Automation bias is also a risk in financial decision-making: it leads humans to trust computer-generated decisions even when those decisions contradict their own judgement, potentially allowing obvious errors to go unchallenged simply because they come from AI-based or traditional internal systems.
“In response to these increased risks, financial institutions must develop more robust frameworks to manage AI deployments, including better testing protocols and clearer accountability structures. Regulatory bodies are increasingly focusing on AI governance in financial institutions, recognising the systemic risks of overreliance on these technologies, which may lead to new requirements for transparency and human oversight in AI-driven financial decisions.
“Ultimately, the key is finding the right balance between leveraging AI’s expanding capabilities and maintaining sufficient human oversight to prevent potential failures. Financial institutions should view AI as a tool to improve human decision-making, not replace it entirely.”