Earlier this week, Technology Secretary Liz Kendall MP announced
plans for an AI Growth Lab – an ambitious shift in how the UK approaches AI regulation. The initiative would create regulatory sandboxes where firms can test new AI products and services under real-world conditions, with certain rules temporarily relaxed under
strict supervision. It’s a smart idea, and a logical next step after the success of regulatory sandboxes in financial services.
When the Financial Conduct Authority (FCA) launched its first sandbox in 2016 – the first initiative of its kind globally – it allowed firms to test new products live with real consumers under modified regulatory conditions. An
independent study has since demonstrated the impact: FCA sandbox firms were 50% more likely to raise funding than their peers, and on average secured 15% more investment.
It’s promising that the government now wants to extend this model to AI – especially as the next wave of AI adoption unfolds. A recent global
survey found that 78% of organisations use AI in at least one business function. Yet “using AI” can mean very different things, from simply bolting off-the-shelf tools onto existing workflows to something more fundamental. Tools like ChatGPT have already
created efficiencies, but the real value will come when AI is integrated across every layer of business systems, not just individual tasks.
That level of innovation will inevitably push at the boundaries of regulations conceived long before the advent of AI. Most financial rules were built for static, rule-based systems. They assume models can be fully documented, decisions traced
to identifiable humans, and performance validated before deployment. But AI systems evolve, adapt and interact in ways traditional frameworks were never built to govern, leaving firms and regulators struggling to apply the rules.
What’s novel about the government’s AI Growth Lab proposal is its cross-economy scope. That matters for general-purpose technologies like AI, which often engage more than one regulatory framework. For instance, in finance, data protection and model-risk
rules can pull in opposite directions: GDPR demands data minimisation, while the Prudential Regulation Authority (PRA) demands explainability, which in practice often means retaining detailed records of the data and logic behind each decision. An AI sandbox could help reconcile the two, allowing firms to prove that privacy-preserving models can still meet supervisory
standards.
This is a bold policy that will need legislation to enact – and if implemented, it must be done responsibly. Lifting regulations comes with risks, so it’s vital that robust safeguards are in place and that the relevant regulators provide appropriate supervision. It also means taking certain protections, such as fundamental rights, off the table entirely, so that they can never be modified in sandbox pilots.
But the UK is right to be thinking big to capture the benefits of AI. Just as the FCA’s regulatory sandbox helped make the UK the go-to place for fintech experimentation, the AI Growth Lab could do the same for the UK’s AI ecosystem – while offering
a route to wider regulatory reform. It’s difficult to prove that regulations can be adapted responsibly without first building the evidence base, and an AI Growth Lab would let the UK do just that – flex rules and experiment safely while shaping the standards
for responsible AI governance.