Building a business case for investment in AI integration requires measurability and observability. For organisations that invest in digital assets, the SDLC (software development lifecycle) is a strong candidate for starting to build that business case by demonstrating measurable ROI.
However, simply encouraging the adoption of AI tools is not enough. To meet measurability and observability criteria, tools need the wider context of a defined framework that adds governance: when, where and how they are applied. Only then can you build the kind of business and implementation case around demonstrable ROI that will accelerate wider AI transformation across your organisation.
Governance in the AI-assisted SDLC
Governance acts as a rulebook for AI adoption and is the first step towards making impacts measurable, observable and compliant.
It does two things: it aligns tool use with regulation, internal privacy policy, security compliance and stakeholder expectations; and it gives teams a framework for implementing AI that makes its impact visible.
A governance framework is established by speaking with stakeholders and thoroughly reviewing existing infrastructure, processes and compliance requirements.
As a first step, leadership should examine regulatory frameworks and internal policies and expectations, and teams can agree on priority tasks.
In the UK, for example, two external requirements most fintechs must comply with are the UK GDPR (General Data Protection Regulation) and the Data Protection Act 2018, which regulate the use of sensitive data, such as end-customer data, in code tests and in the training of third-party models like ChatGPT.
When we consider that 55% of businesses in the financial sector use subscription versions of major AI platforms, such as CustomGPT.ai, it makes sense that a high-priority task for most teams is to add data encryption.
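As a minimal sketch of that priority task, the snippet below uses the widely available Python `cryptography` package (Fernet symmetric encryption) to protect customer records before they enter any AI-assisted workflow. The record fields and the flow are illustrative assumptions, not a prescribed setup.

```python
# Illustrative sketch: encrypting customer records so raw sensitive data
# never reaches an AI tool or test fixture. Field names are hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a secrets manager
fernet = Fernet(key)

customer_record = b'{"name": "Jane Doe", "account": "GB29NWBK60161331926819"}'

# Encrypt at rest; only ciphertext is stored or passed around
token = fernet.encrypt(customer_record)

# Decrypt only inside controlled, audited steps
original = fernet.decrypt(token)
```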
Once leadership accounts for external regulation, a comprehensive audit of the current SDLC gives insight into where AI can be most impactful. From there, leadership agrees on clear objectives for each stage of the SDLC and outlines rules for tool use, bringing us to the other side of AI governance:
Observability. If a tool’s impact is invisible, it’s impossible to validate outcomes and optimise use.
A company-wide AI governance rulebook explains the “what, how and when” of AI: what tools are used, and how and when in the SDLC. Leadership must also define expectations and methods for measuring the accuracy of each tool’s outputs before those tools are added to workflows.
As teams familiarise themselves with tools, they slowly validate each output in ‘baby steps’. Once tools are validated, they are added to workflows with a series of rules that regulate their use.
For example, leadership finds an AI-driven code review tool that complies with industry and internal regulation. To verify its accuracy before it’s added to the SDLC, the development team tests and validates its outputs.
Once the tool proves accurate, rules on when, where and how to use it are established. Instructions might include that the tool should be used for all code reviews, and that developers must submit PRs and incorporate AI feedback during approvals.
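As a rough, hypothetical sketch, such rules could even be captured in machine-readable form so they can be versioned and audited alongside the code. The tool name, keys and values below are all illustrative, not a prescribed format.

```python
# Hypothetical sketch: encoding the code-review tool's usage rules as a
# machine-readable policy. All keys and values are illustrative.
CODE_REVIEW_POLICY = {
    "tool": "ai-code-reviewer",          # hypothetical tool name
    "applies_to": "all pull requests",
    "when": "before human approval",
    "requirements": [
        "developer opens a PR as usual",
        "AI review runs automatically on the PR diff",
        "developer addresses or explicitly dismisses each AI finding",
        "human reviewer gives final approval",
    ],
    "accuracy_check": "sampled findings re-validated by senior devs monthly",
    "logging": "all AI findings and dismissals stored for audit",
}
```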
To encourage accountability, use RACI (Responsible, Accountable, Consulted and Informed) matrices. Defining the four roles below (sketched in code after the list) improves clarity on human vs AI responsibilities and helps spot any gaps in governance:
- Responsible – Which team member, or AI tool, is responsible for completing the work?
- Accountable – Who is held accountable for the ultimate success or failure of that step or task?
- Consulted – Who will provide information before and during the project?
- Informed – Exactly who will be informed of updates and progress?
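A minimal sketch of what such a matrix might look like in code, assuming the hypothetical AI code review tool from earlier and illustrative role assignments:

```python
# Hypothetical RACI matrix for the AI-assisted code review step.
# Roles and the AI tool name are illustrative.
RACI_CODE_REVIEW = {
    "responsible": ["ai-code-reviewer", "author developer"],  # who does the work
    "accountable": "engineering lead",   # owns success or failure of the step
    "consulted":   ["security team"],    # provide input before and during
    "informed":    ["product manager"],  # kept up to date on progress
}

def find_gaps(matrix: dict) -> list[str]:
    """Flag any RACI role left unassigned: an unowned step is a governance gap."""
    return [role for role, who in matrix.items() if not who]
```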
Once rules are defined and tools implemented, businesses can also use GenAI to spot gaps in governance and adoption.
For example, a business can train GenAI on its internal AI policy and compare it to industry standards and regulatory changes. Or it can analyse implementation, giving leadership a second opinion on potential divergences from policy, such as insufficient logging of activities or a lack of human-in-the-loop review.
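One hedged sketch of what that audit step could look like, where `ask_llm` is a stand-in for whichever approved model API your organisation uses, not a real library call:

```python
# Illustrative sketch: asking a GenAI model to compare internal AI policy
# against actual tool-usage logs. `ask_llm` is a placeholder to be wired
# to an approved model provider.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your approved model provider")

def audit_governance(policy_text: str, usage_logs: str) -> str:
    prompt = (
        "You are auditing AI governance compliance.\n"
        f"Internal policy:\n{policy_text}\n\n"
        f"Observed tool usage logs:\n{usage_logs}\n\n"
        "List any divergences from policy, e.g. missing activity logging "
        "or steps lacking a human in the loop."
    )
    return ask_llm(prompt)
```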
Measurability
Without clear outcomes and established ‘before’ measurements, AI-driven productivity lacks quantifiability and is nearly impossible to track.
To make progress observable, leadership must first define and measure their team’s current SDLC metrics. A metrics discovery phase clarifies at which stages, and to what extent, a business spends the majority of its time, R&D effort and resources.
Imagine you measured a full sprint and found it takes 4 months to complete, of which documentation ate up 20% of your developers’ time. Leadership sets a modest goal to cut time spent on documentation to just 10% with AI-assisted code documentation.
When developers validate the selected code documentation tool’s outputs, the tool is added to workflows and priority metrics are monitored—in this case, time spent on documentation and cycle time—over a full sprint.
After completion, the original benchmark (20% of time spent) is referenced to check they’ve met their goal (cutting that time to 10%) and measure the value that tool provided.
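Using only the figures from this scenario, a minimal sketch of the before-and-after arithmetic; the constants come from the example above, and the function itself is illustrative:

```python
# Worked example of the benchmark above; figures come from the scenario
# in the text, the function is illustrative.
SPRINT_MONTHS = 4            # measured length of a full delivery cycle
BASELINE_DOC_SHARE = 0.20    # documentation: 20% of developer time before AI
TARGET_DOC_SHARE = 0.10      # goal: 10% with AI-assisted documentation

def months_saved(sprint_months: float, before: float, after: float) -> float:
    """Developer time freed per cycle if the documentation goal is met."""
    return sprint_months * (before - after)

print(months_saved(SPRINT_MONTHS, BASELINE_DOC_SHARE, TARGET_DOC_SHARE))
# => 0.4 months (roughly 8 working days) of developer time per cycle
```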
Tools are updated and new versions are released frequently, so it’s important to continue testing and measuring the impact of updated and new versions.
Consistent adoption
AI is another new tool. Like any new tool, it presents an opportunity to learn and improve current processes.
In unfamiliar territory, teams are naturally hesitant and at times overly cautious.
Businesses can encourage adoption and build confidence by providing teams with resources like prompt libraries, practical recipes and workshops.
Given learning materials and transparent guidelines for how tools should be used, technical teams adopt new tech and processes more easily and comfortably, building the confidence necessary for long-term use and ownership of outcomes.
Confidence and ownership amplify a business’s culture of innovation, pushing teams to be more ambitious in their application of AI tools.
Why that matters
As I mentioned in Part 1, with a framework that prioritises observability and testing outcomes, the average SDLC for a large, complex project can realistically be accelerated by 30%+ within 6 months, without adding headcount.
Even after modest changes to just a handful of stages, we’ve seen a minimum 20% acceleration across all projects. Efficiencies do vary by stage and context, differences which I’ll briefly highlight below.
Impact on ROI: Coding
We use AI-driven code completion, documentation and explanation tools to accelerate coding by a minimum of 20%, freeing up about one full day per week that our developers can use to work on other projects.
Impact on ROI: Internal communication
GenAI makes complex codebases easily understandable—meaning team members can move to work on unfamiliar projects more flexibly and diminish time spent on internal comms by about 25% on highly complex projects.
Impact on ROI: Testing
Results are especially persuasive at the testing stage—AI tools have slashed our QA teams’ manual workload by 60%, and increased test coverage to over 70%.
But AI’s impact doesn’t end at the SDLC. Looking past software development, a business case for broader AI integration across an organisation comes into view:
- Implementing a GenAI model trained on company policy and HR materials to reduce manual onboarding efforts.
- Adding an AI notetaker to both internal and client-facing meetings to consolidate comms and minimise repetitiveness.
- Checking outputs from teams like marketing and sales to unify their messaging, even acting as a single shared knowledge base for both teams while minimising manual review.
These are all relatively low-complexity projects. In fintech, automation around compliance areas like KYC and anti-money laundering is quickly becoming standard. Almost every business will be able to identify parts of its processes where the application of AI, from LLMs to computer vision, could deliver improvements or efficiencies significant enough to build a business case around.
A strong framework for the SDLC can be expanded to guide the integration of AI for these use cases as well.
In summary
Outcomes like the ones I’ve mentioned here make a case for swiftly implementing AI into the SDLC and beyond.
But the extent to which AI actually impacts productivity, KPIs and ROI is not dependent on the tools themselves, but rather on the framework leadership builds and executes to guide:
- Governance, including regulations, privacy policy and rules for observability.
- Measurability, including tracking when and where tools are used, and ‘before and after AI’ metrics that make impacts quantifiable.
- Adoption, including providing instructions, resources and clear expectations to help teams be well-informed and organised.
As things stand today, even a handful of tools grounded in a strong framework can already accelerate software development projects by a minimum of 20%; on average, that acceleration is closer to 30%+.
It’s worth imagining what AI will make possible in the near future, and encouraging teams to act now and invest the time and effort necessary to build the solid foundations needed to leverage that potential efficiently and without unnecessary delays.