A couple of years after its initial boom, artificial intelligence (AI) remains a huge buzzword in the fintech industry, as firms look for new ways to integrate the technology into their infrastructure and gain a competitive edge. Exploring how they are going about doing this in 2025, The Fintech Times is spotlighting some of the biggest themes in AI this February.
There is a huge price to be paid by organisations when AI and machine learning (ML) fail in the financial decision-making process. Having explored some ways in which firms can avoid these errors, we now hear from more experts about the potential pitfalls when AI fails.
Anyone and everyone can be impacted
Mohamed Elgendy, co-founder and CEO of venture-backed AI testing firm Kolena, notes that the solution to avoiding AI errors doesn’t lie in using the technology less, but in testing it more rigorously. He says: “AI failures in financial decision-making can have cascading consequences – from incorrect loan approvals that affect individual lives to algorithmic trading errors that can impact entire markets. The real danger isn’t just AI making mistakes, but our potential blind spots in detecting these failures before they affect customers.
“At Kolena, we’ve observed that many companies implement AI without robust testing frameworks to validate their models across diverse scenarios. This creates a false sense of security. While AI can process vast amounts of financial data and identify patterns that humans might miss, treating it as infallible is dangerous.
“The solution isn’t to use AI less, but rather to test it more rigorously. Companies need systematic approaches to evaluate AI performance across different market conditions, customer segments, and edge cases. This means going beyond simple accuracy metrics to understand how models perform in real-world scenarios.
“True AI reliability comes from continuous validation, comprehensive testing, and maintaining human oversight. Financial institutions should embrace AI’s capabilities while building robust testing infrastructure to catch potential failures before they impact customers. Complacency isn’t an option when dealing with people’s financial futures.”
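To make Elgendy’s point about going beyond aggregate accuracy concrete, the sketch below shows one generic form of segment-level evaluation. It is an illustration of the technique, not Kolena’s platform; the column names (‘y_true’, ‘y_score’, ‘customer_segment’) and the 0.5 approval cutoff are assumptions.

```python
# Illustrative sketch of segment-level model evaluation (not Kolena's tooling).
# Assumes a scored test set with columns 'y_true' (actual default flag) and
# 'y_score' (model default probability) -- both names are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_by_segment(df: pd.DataFrame, segment_col: str) -> pd.DataFrame:
    """Report AUC and approval rate per segment, not just one overall number."""
    rows = []
    for segment, group in df.groupby(segment_col):
        if group["y_true"].nunique() < 2:
            continue  # AUC is undefined for a single-class segment
        rows.append({
            "segment": segment,
            "n": len(group),
            "auc": roc_auc_score(group["y_true"], group["y_score"]),
            # Illustrative policy: approve when predicted default risk < 0.5
            "approval_rate": float((group["y_score"] < 0.5).mean()),
        })
    return pd.DataFrame(rows)

# A model with strong aggregate metrics can still perform badly on one
# segment -- exactly the blind spot segment-level testing is meant to catch.
# Example: print(evaluate_by_segment(test_df, "customer_segment"))
```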
Risks of automation dependence
Sharing a similar view, Adam Ennamli, chief risk and security officer at General Bank of Canada, added: “Failures can have existential consequences, from significant monetary losses to complete loss of market trust and regulatory penalties. Examples include flash crashes in algorithmic trading, biased lending decisions affecting vulnerable populations, and incorrect risk assessments that could destabilise a financial institution.
“When AI tells you what you want to hear, you tend to ‘forget’, or at least minimise, the risks that come with automation dependence, whether they relate to cybersecurity or to the compounding transfer of errors across interconnected systems.
“For instance, Robotic Process Automation (RPA) and machine learning algorithms enable enhanced data analysis and improved fraud detection, but over-reliance on these systems without proper risk controls and human oversight creates systemic vulnerabilities by design, meaning you are hardcoding these risks into the fabric of your organisation.
“Some ways to attenuate that include robust testing protocols led by human experts with reliable, stress-tested backup options; continuous monitoring systems that are regularly maintained; and, simply, critically challenging the outputs rather than treating AI as infallible. The sweet spot lies in balancing technology’s benefits with human judgment, particularly in complex scenarios requiring qualities that AI cannot yet reproduce, such as emotion and reasoning.
“As a financial decision-maker, my advice is to maintain, for the time being, flexibility in your automated systems while ensuring adequate, capable human oversight and intervention mechanisms.”
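One common pattern for the kind of intervention mechanism Ennamli describes is a confidence gate: high-confidence model outputs execute automatically, while everything else is queued for a human. The sketch below is a generic illustration under assumed names and an assumed 0.95 threshold, not General Bank of Canada’s setup.

```python
# Hypothetical human-in-the-loop gate; the 0.95 threshold and field names
# are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Decision:
    approve: bool      # the model's recommended action
    confidence: float  # the model's confidence in that recommendation

def route(decision: Decision, auto_threshold: float = 0.95) -> str:
    """Execute only high-confidence outputs automatically; send the rest
    to a human reviewer rather than treating the model as infallible."""
    if decision.confidence >= auto_threshold:
        return "auto"
    return "human_review"

print(route(Decision(approve=True, confidence=0.97)))   # -> auto
print(route(Decision(approve=False, confidence=0.62)))  # -> human_review
```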
Responsible AI development requires a new approach

Although AI only gained mainstream momentum in late 2022 and early 2023, it has been used for decades. Satayan Mahajan, CEO at Datalign Advisory, the firm that matches consumers with financial advisors, explains that while the technology has huge potential, the approach to implementing it successfully must match that potential: in other words, a lot of potential demands a lot of preparation.
“Failures in the financial industry are expensive and erode consumer trust. AI failures in financial decisions can similarly have a significant impact on markets, industries and consumers. While AI is a hot topic today, it has been used in financial services for several decades.
“For example, in the 2010 Flash Crash, a poorly designed algorithmic trading system executed a large sell order without considering price or timing, prompting other algorithmic trading systems to withdraw from the market and wiping roughly $1trillion off market value. More recently, in 2019, the algorithm behind Apple’s credit card was accused of gender bias in the credit limits it assigned.
“Today’s AI systems have unprecedented power and innovation potential, but this technological leap requires an equal leap in our approach to compliance, risk management and institutional investment in responsible AI development.”
Monitoring processes

Michael Gilfix, chief product and engineering officer at KX, the real-time analytics database firm, notes that to avoid failures, firms must ensure the correct processes are in place. He says: “Successful application of AI in financial decision-making means putting in place the right controls and processes to ensure that AI is performing robust decision-making.
“All successful firms implement robust monitoring to ensure that algorithms are functioning correctly; this monitoring detects algorithm drift and manages bias, which in turn require retraining or recalibration of model weights to maintain AI performance. Guard rails can be put in place to ensure that business impact is minimised should AI behave badly or unpredictably.
“Firms can choose how they want to integrate AI output into their business processes: whether they want, or can tolerate, automated decision-making, or whether they want the AI to act as an advisor providing recommendations to a person who makes the final judgement. This allows the firm to select the right approach based on risk tolerance, AI effectiveness, business risk and value. AI is thus a critical tool within a larger toolkit, enabling enterprises to achieve better business performance.”
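For the drift detection Gilfix mentions, one widely used measure in credit-risk practice is the Population Stability Index (PSI), which compares the score distribution a model was trained on with what it sees in production. The sketch below is a generic illustration, not KX’s product; the bin count, thresholds and synthetic data are assumptions.

```python
# Illustrative drift monitor using the Population Stability Index (PSI);
# the bin count and thresholds are conventional rules of thumb, not KX's API.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution and live scores.
    Common reading: < 0.1 stable, 0.1-0.25 moderate, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.beta(2, 5, size=10_000)  # scores at training time
    live = rng.beta(2, 3, size=10_000)      # shifted production scores
    psi = population_stability_index(baseline, live)
    # A guard rail might disable automated decisions and trigger the
    # retraining or recalibration Gilfix describes when PSI exceeds 0.25.
    print(f"PSI = {psi:.3f}")
```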

On the topic of monitoring processes and spotting mistakes, Jay Zigmont, PhD, CFP, founder of investment advisory firm Childfree Wealth, added: “In finance, we respond when things go wrong and the errors are caught. The fact is that people who give advice make mistakes every day. Hopefully, these mistakes will not be too big and will not negatively impact our clients. AI is only as good as its programming, training and quality assurance.
“We may catch AI failures, but I wonder, if we put humans through the same quality assurance process, whether they would fail at a higher rate.”