A couple of years after its initial boom, artificial intelligence (AI) remains a huge buzzword in the fintech industry, as firms look for new ways to integrate the tech into their infrastructure to gain a competitive edge. Exploring how they are going about doing this in 2025, The Fintech Times is spotlighting some of the biggest themes in AI this February.
Having explored the different ways in which AI can impact the customer service sector, from the importance of the ‘human touch’ to the role of AI agents in banking, we now turn our attention to machine learning (ML) in financial decision-making. Regulations will shape not only how AI is used from a customer-facing standpoint but also how it informs back-office decision-making. In light of this, we hear from industry experts on how AI regulations are impacting machine learning tools and processes in finance.
The Quality Control Standards for AVMs
For Kenon Chen, EVP of strategy and growth at Clear Capital, a national real estate valuation technology company, one of the regulations with the biggest knock-on effect on machine learning does not take effect until October: the Quality Control Standards for Automated Valuation Models (AVMs) rule.
“While it doesn’t deal with machine learning directly, it is well known that most modern AVMs utilise machine learning as a method for accurately predicting the market value of residential property. The rule’s handling of AVMs sets a standard for other machine learning models used in financial decision-making, and provides some impetus for industry-wide standardisation.”
“The final rule was jointly filed by the collective government finance agencies in 2024 after years of effort, and provides additional clarity post-Dodd-Frank Act on how AVMs should ensure confidence in the results, protect against the manipulation of data, seek to avoid conflicts of interest, require random sample testing, and comply with nondiscrimination laws.”
“The rule does a good job of defining expectations around model data input and model results, rather than trying to micromanage complex AI calculations, which would greatly constrain innovation. While some parties feel that the rule was not specific enough, it makes a healthy progression in what has been limited additional guidance since the Dodd-Frank Act passed in the wake of the housing finance crisis.”
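To make the connection Chen draws more concrete, the Python sketch below shows roughly what the machine learning core of an AVM looks like: a regression model trained on property features to predict market value. The features, synthetic data, and model choice here are assumptions for illustration only, not a description of Clear Capital’s models.

```python
# Hypothetical sketch of an AVM-style valuation model.
# Features, data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic property records: square footage, bedrooms, year built, lot size.
n = 5000
X = np.column_stack([
    rng.uniform(600, 4500, n),      # square footage
    rng.integers(1, 6, n),          # bedrooms
    rng.integers(1920, 2024, n),    # year built
    rng.uniform(0.05, 2.0, n),      # lot size (acres)
])
# Synthetic "market value" with noise, standing in for recorded sale prices.
y = (120 * X[:, 0] + 15000 * X[:, 1]
     + 800 * (X[:, 2] - 1900) + 50000 * X[:, 3])
y += rng.normal(0, 25000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"Hold-out R^2: {model.score(X_test, y_test):.3f}")
```

The hold-out evaluation loosely mirrors the rule’s random sample testing requirement: a slice of records the model never saw during training is used to check the quality of its valuations.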
The Equal Credit Opportunity Act

Historically, AI has been accused of learning patterns that disadvantage consumers. Organisations therefore have a responsibility to ensure their AI does not develop harmful biases. Helen Hastings, CEO and co-founder of Quanta, the AI-powered accounting service, looks to the Equal Credit Opportunity Act as a means of avoiding discriminatory behaviour.
“AI and machine learning are, at their core, pattern-matching systems. They ‘train’ on past data. This is incredibly problematic when we know that historical decision-making was highly discriminatory, particularly when it comes to the financial industry, which has a history of discriminating against underrepresented groups.
“The most noteworthy to me is the ECOA (Equal Credit Opportunity Act). When a financial institution declines a consumer’s access to credit, it is law that you must understand why you are declining and inform the user why. You simply can’t say ‘the AI said so’. Relying on black boxes is dangerous.
“ECOA makes ‘disparate impact’ illegal. This means you must serve protected classes equally, even if each of your policies does not sound discriminatory in theory. If your AI chooses to favour certain classes of people because it has learned from past history, then you are breaking the law. There will be more regulation soon to ensure that AI does not discriminate, which I believe is a large concern of AI’s pattern-matching based on the past. Access to the financial industry is just too important.”
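One common first screen for the disparate impact Hastings describes is the four-fifths rule, a heuristic borrowed from employment discrimination guidance and sometimes applied in fair lending analysis: if a group’s approval rate falls below 80 per cent of the most-approved group’s rate, the model warrants scrutiny. The Python sketch below is a minimal, hypothetical version of that check; the group labels, decisions, and threshold are illustrative assumptions, not legal guidance.

```python
# Hypothetical four-fifths-rule screen for disparate impact.
# Group labels, decisions, and the 0.8 threshold are illustrative only.
from collections import defaultdict

def adverse_impact_ratios(decisions, threshold=0.8):
    """decisions: iterable of (group, approved) pairs.
    Returns each group's approval rate relative to the highest-rate
    group, flagging ratios that fall below the threshold."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)
for group, (ratio, flagged) in adverse_impact_ratios(sample).items():
    print(f"group {group}: impact ratio {ratio:.2f}"
          + (" <- review" if flagged else ""))
```

Here group B’s approval rate (55 per cent) is roughly 0.69 of group A’s (80 per cent), so it falls below the 0.8 threshold and is flagged for review.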
The Fair Housing Act
Caleb Mabe, global head of privacy and data responsibility at banking services provider nCino, also looked to ECOA as one major regulation that will impact AI’s use in financial decision-making. He also noted the importance of other regulations, such as the Fair Housing Act, in shaping how ML is used in financial decision-making.
“Fair lending regulations like the Fair Housing Act (FHA) and ECOA are going to be top of mind for financial institutions (FIs) using ML in financial decision-making. We’ve already begun to see questions of fairness in the use of ML in cases like Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, LLC. Navigating these regulations will be important from a deployer and developer perspective as banks balance efficient decision-making with demonstrable fairness.
“Additionally, the Gramm-Leach-Bliley Act (GLBA) has been a long-standing concern for FIs and will continue to be for FIs using NPI (nonpublic personal information) to develop and train models. Institutions should continue to be mindful of their notice and consent obligations as they expand internal data science and ML efforts.
“Banks will be best served by ML when using reputable providers of intelligent solutions who are well aware of bank regs and dedicated to serving the financial space.”
Explainability of AI decision-making

There is a lot going on in the US in regard to regulations, as Joseph Ahn, co-founder and CSO at AI risk management firm Delfi, notes. As a result, no single regulation will necessarily define how the industry uses AI. Rather, Ahn explains that, over time, compliance standards will become integrated into AI processes as new innovations launch across the globe.
“The acting Federal Deposit Insurance Corporation (FDIC) chair, Travis Hill, issued a statement on 20 January 2025 describing the focus for the FDIC moving forward, including an ‘open-minded approach to innovation and technology adoption’.
“President Trump also issued an Executive Order on 23 January 2025 for the removal of ‘barriers to American leadership in artificial intelligence’. This approach counterbalances the guidance issued until this point, which has generally emphasised AI safety and transparency, particularly urging caution towards black-box AIs.
“Generally, the current regulatory environment is very positive towards AI innovation and adoption. However, in the long run, transparency, explainability of AI decision-making, and human monitoring for fairness and compliance standards will likely become integrated into AI processes. This effect is compounded in financial decision-making, where transparency and the ability to reproduce analyses and conclusions will be of significant regulatory interest.”
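As a rough illustration of the transparency and reproducibility Ahn describes, the hypothetical Python sketch below scores an applicant with a fully transparent linear model and records the model version, inputs, and per-feature contributions, so the same inputs always reproduce the same decision and its reasons. The weights, features, and cut-off are assumptions made up for illustration.

```python
# Hypothetical transparent credit-scoring sketch: every decision is
# reproducible from the logged model version, inputs, and contributions.
# Weights, features, and cut-off are illustrative assumptions.
MODEL_VERSION = "linear-v1"
WEIGHTS = {"income": 0.4, "debt_ratio": -0.5, "years_employed": 0.2}
CUTOFF = 0.3

def score(applicant: dict) -> dict:
    # Per-feature contributions double as human-readable reason codes.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= CUTOFF else "decline"
    # The most negative contributions explain a decline -- no black box.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return {
        "model_version": MODEL_VERSION,
        "inputs": applicant,
        "contributions": contributions,
        "score": round(total, 4),
        "decision": decision,
        "top_reasons": reasons if decision == "decline" else [],
    }

print(score({"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.3}))
```

Production models are rarely this simple, but the logging pattern is the point: a regulator, or the declined consumer, can be shown exactly which inputs drove the outcome.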
Gradual regulatory rollout

According to Ryan Christiansen, executive director of the Stena Fintech Center at the University of Utah, there are not yet any financial services regulations that target AI and ML specifically. However, he explains that the use of machine learning is already governed by fair lending and anti-discrimination laws.
“If ML models are being used, they must be implemented in a way that does not cause disparate impact or other outcomes that could result in violations. ML models must also comply with Federal Reserve guidance on model validation, documentation and monitoring.
“It is likely that financial institutions will begin to adopt ML tools for capital planning; this will require robust assessments and ongoing validation of the risks in the ML tools. As the ML tools begin to be adopted, it will be important for FIs to document how they are implementing the tools against existing regulations.
“Given the lack of specific ML regulations, I expect ML models to be implemented first in lower-regulatory-risk use cases over the next 12-24 months, so that FIs can cycle through regulatory reviews prior to widespread adoption.”
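The ongoing validation and monitoring Christiansen describes often includes drift checks. The hypothetical Python sketch below computes a population stability index (PSI), a metric commonly used in model risk management, to flag when live inputs have shifted away from the data a model was validated on. The bin count and the 0.25 alert threshold are common conventions, used here as illustrative assumptions.

```python
# Hypothetical drift monitor using the population stability index (PSI).
# Bin count and the 0.25 alert threshold are illustrative assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the live (actual) distribution of a feature with its
    validation-time (expected) distribution; larger PSI means more drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf   # catch out-of-range values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)    # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(1)
baseline = rng.normal(0, 1, 10_000)    # validation-time feature values
live = rng.normal(0.5, 1.2, 10_000)    # drifted production values

value = psi(baseline, live)
print(f"PSI = {value:.3f}"
      + (" -> investigate before relying on the model" if value > 0.25 else ""))
```

Checks like this, run on a schedule and documented, are one way an FI can show a reviewer that a deployed ML tool is still operating on the population it was validated against.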