Simulations are important. They have been since the dawn of computing, and they matter even more in today’s AI-infused, complex business environments. Simulations are core to any organization’s toolkit for making optimal decisions.
Combining simulations with domain models (ontologies) rooted in an organization’s platform, model and data architecture allows businesses to explore and evaluate “what if” scenarios – like changes in market conditions, competitive moves, or operational disruptions –
without real-world consequences. Simulations serve many use cases across many industries:
Banking
- Credit & Market Risk: Stress-test portfolios under adverse economic and risk scenarios to anticipate defaults and downturns.
- Operational Risk: Model internal processes to identify vulnerabilities and improve internal controls.
- Threat Intelligence: Simulate emerging risks (e.g., cyber fraud) to enhance proactive risk management.
Insurance
- Catastrophic Events: Simulate geopolitical crises, natural disasters, and other extreme events to forecast potential claims distributions.
- Underwriting Optimization: Use simulation to help price premiums and design reinsurance opportunities.
Government
- Cybersecurity: Test how cyber-attacks can impact critical infrastructure and public safety, e.g. Heathrow.
- Policy Impact: Simulate the long-term economic and social effects of policy, supporting central bank policy making.
- Regulatory Oversight: Inform regulation design to bolster national resilience and public welfare.
Corporations
- Supply Chain Resilience: Model disruptions from geopolitical events and external shocks to identify bottlenecks and optimize logistics.
- Geopolitical Event Modelling: Anticipate impacts from global events to support strategic planning.
- Threat Intelligence Simulation: Rehearse responses to emerging threats to improve organizational resilience.
Simulation has transformed decision-making in a multitude of ways, and continues to do so. In this blog, we’ll explore simulation’s productive and occasionally problematic past in financial services, and how it is learning from the mistakes of history to future-proof tomorrow’s innovation, evolving the stochastic into the contextual.
Simulation and the History of Computing: 5 Key Moments
1. The Birth of Computing Through Simulation Needs
The origins of computing are coupled with the need for simulation. Early computers, such as ENIAC (1940s), built for the US Army’s Ballistic Research Laboratory, performed ballistic trajectory simulations for military applications. In the UK, Alan Turing’s codebreaking work at Bletchley Park involved computational simulations to test cryptographic hypotheses.
2. Monte Carlo Simulation
As computing power grew, so too did its ability to model complex systems. The Monte Carlo method (1940s–50s), developed at Los Alamos for nuclear research, was one of the first large-scale applications of computers to probabilistic simulation. Over time, weather forecasting, computational finance, engineering, and particle physics leveraged these capabilities, demanding ever more powerful hardware and software.
[Figure: risk-neutral Monte Carlo price simulation for natural gas]
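To make the figure concrete, here is a minimal sketch of a risk-neutral Monte Carlo price simulation, assuming geometric Brownian motion and purely illustrative (not calibrated) natural gas parameters:

```python
import numpy as np

def simulate_risk_neutral_paths(s0, r, sigma, horizon_years, n_steps, n_paths, seed=42):
    """Simulate risk-neutral price paths under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = horizon_years / n_steps
    # Standard normal shocks for every step of every path
    shocks = rng.standard_normal((n_paths, n_steps))
    # Log-price increments under the risk-neutral measure
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
    log_paths = np.cumsum(increments, axis=1)
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

# Illustrative (not calibrated) parameters: spot price, risk-free rate, volatility
paths = simulate_risk_neutral_paths(s0=3.0, r=0.03, sigma=0.45,
                                    horizon_years=1.0, n_steps=252, n_paths=10_000)
print("Mean simulated price at horizon:", paths[:, -1].mean())
```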
3. The Expansion of Simulation in the Late 20th Century
From the 1960s, NASA performed extensive computer simulations to model spacecraft behavior under different conditions. This era also saw the rise of Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), and, more recently, Model-Based
Design (MBD) approaches, allowing engineers to simulate structures, aerodynamics, and control systems before physical prototyping.
4. Simulation and Synthetic Data in AI and Machine Learning
AI and machine learning rely on simulated environments to train models. Reinforcement learning, for example, uses simulated worlds (e.g., OpenAI Gym) to learn without real-world risks. Meanwhile, the autonomous vehicle and robotics industries rely heavily on synthetic data and digital twin simulations for development.
5. Quantum Computing and AI (the Future)
Quantum computing will revolutionize simulation, particularly in chemistry, materials science, and cryptography, by modeling molecular interactions at unprecedented levels of accuracy, and will inform future AI-driven simulations that enhance everything from financial risk modeling to climate change forecasting.
Simulation Caused a Financial Crash, Then Helped Fix It
In financial services, simulation is everywhere. It underpins risk management, derivatives pricing, insurance liability projection, macroeconomic modeling, capital markets trade simulation, and investment strategy optimization. Monte Carlo simulations are particularly popular for modeling asset price movements, estimating portfolio risk, and valuing complex financial derivatives by generating thousands of statistically plausible future market scenarios.
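As a hedged illustration of the portfolio-risk use case, the sketch below estimates one-day Value-at-Risk by simulating correlated asset returns; the weights, mean returns and covariance matrix are illustrative assumptions, not real market data:

```python
import numpy as np

def monte_carlo_var(weights, mu, cov, portfolio_value, n_scenarios=100_000,
                    confidence=0.99, seed=7):
    """Estimate one-day Value-at-Risk by simulating correlated asset returns."""
    rng = np.random.default_rng(seed)
    # Draw correlated daily returns for each scenario
    returns = rng.multivariate_normal(mu, cov, size=n_scenarios)
    # Portfolio profit-and-loss in each scenario
    pnl = portfolio_value * returns @ weights
    # VaR is the loss at the chosen confidence level
    return -np.percentile(pnl, 100 * (1 - confidence))

# Illustrative three-asset portfolio (daily mean returns and covariance)
weights = np.array([0.5, 0.3, 0.2])
mu = np.array([0.0002, 0.0001, 0.0003])
cov = np.array([[0.00010, 0.00004, 0.00002],
                [0.00004, 0.00008, 0.00003],
                [0.00002, 0.00003, 0.00012]])
print("99% one-day VaR:", round(monte_carlo_var(weights, mu, cov, 1_000_000), 2))
```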
Simulation was particularly prominent in the 2008 global financial crisis, through the misuse of David X. Li’s Gaussian copula function. This mathematical formula was widely adopted
by financial institutions to model the correlation between defaults in complex financial products like collateralized debt obligations (CDOs) and mortgage-backed securities (MBS).
The Gaussian copula assumed, in this case, that defaults across different assets followed a normal distribution and were correlated in predictable ways. Banks and rating agencies would thus simulate based on this formula to estimate default probabilities, structuring tranches of CDOs with supposedly low risk. The approach, however, rested on flawed assumptions that remained opaque to the practitioners and users for whom copula stochastics was a dull technical detail. Mathematically, it underestimated the likelihood of extreme, systemic events and failed to account for real-world dependencies between mortgage defaults, such as the nationwide collapse of the U.S. housing market.
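To make the mechanics concrete, here is a minimal sketch of how correlated defaults can be sampled from a one-factor Gaussian copula; the pool size, default probability and correlation value are illustrative assumptions, not a reconstruction of any institution’s model:

```python
import numpy as np
from scipy.stats import norm

def simulate_copula_defaults(default_probs, correlation, n_scenarios=100_000, seed=1):
    """Sample joint defaults from a one-factor Gaussian copula."""
    rng = np.random.default_rng(seed)
    n_assets = len(default_probs)
    # Equicorrelated latent variables via a single common factor
    common = rng.standard_normal((n_scenarios, 1))
    idiosyncratic = rng.standard_normal((n_scenarios, n_assets))
    latent = np.sqrt(correlation) * common + np.sqrt(1 - correlation) * idiosyncratic
    # An asset defaults when its latent variable falls below its threshold
    thresholds = norm.ppf(default_probs)
    return latent < thresholds  # boolean matrix: scenarios x assets

# Illustrative pool: 100 mortgages, each with a 5% default probability
defaults = simulate_copula_defaults(np.full(100, 0.05), correlation=0.2)
portfolio_default_rate = defaults.mean(axis=1)
print("P(more than 20% of the pool defaults):",
      (portfolio_default_rate > 0.20).mean())
```

The single correlation number in this sketch is exactly the kind of blunt assumption that could not capture the shared, real-world dependencies – geography, lenders, a nationwide housing downturn – that drove defaults to cluster in 2008.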
As a result, financial institutions underpriced risk and overleveraged themselves. House prices fell, defaults soared, and as the Gaussian copula’s assumptions broke down, there followed widespread collapses in CDO valuations, bank failures, and a global credit freeze. The crisis highlighted the dangers of blind reliance on quantitative models without accounting for economic fundamentals, systemic risks, and tail events.
More positively, after the Global Financial Crisis, regulators encouraged greater use of simulation in risk regulation and systemic risk assessment. Stress testing, mandated by financial regulators in both banking and insurance, relies on simulations to assess resilience under extreme economic scenarios, spanning regulatory risk and its constituents: operational risk, credit risk, counterparty risk and market risk. Systemic risk assessment, meanwhile, began incorporating network analysis, what is now called graph data science, to model relationships between economic participants. In this, we see the emergence of current simulation trends.
AI Needs Reasonable Transparency and Governance
Indeed, with the recent rise of agent-based modeling and AI-driven financial simulations, firms are deploying increasingly sophisticated models of market behavior, incorporating human decision-making and adaptive strategies, all of which demand context and understanding. Irrespective of geopolitics – the laissez-faire(-ish, noting the tariff hypocrisies) stance of Trumponomics, or the regulation-centered governance of the EU AI Act – the lessons of the past have largely been learned. Responsible executives know that governance, validation and transparency are critically important if a firm is to avoid becoming the next Lehman Brothers, Northern Rock or Bear Stearns.
With the quantitative methodologies of the past combining with the AI innovations of the present and future, the demand for transparency and good governance continues to increase. Simulation helps meet that demand in controlled environments, for example:
- Bias and Risk Assessment – Simulating AI outputs under different conditions can identify biases, unintended consequences, and compliance risks.
- Regulatory Stress Testing – Running scenarios to align AI-based decisions, whether automated or augmented, with evolving legal frameworks.
- Explainability and Trust – Running simulations on synthetic or real-world-inspired data can improve model transparency.
- Robustness and Security – Simulated adversarial testing to expose vulnerabilities and build resilience.
[Figure: the interconnectedness of entities – relationships revealed in a more contextual way provide powerful structures to seed and incorporate into scenarios and simulations]
Here is the good news. Graph technologies, seen on the right in the image above, are helping simulation become more contextual, relationship-oriented and grounded in the real world, improving on the statistical assumptions of models such as those that caused the Global Financial Crisis.
What if, in the early 2000s, we had been able to connect and quantify the relationships on the “real” side of mortgage and credit derivatives, exploring the homeowners, their locations, housing characteristics, and their income and mortgage commitments? On the financial services side, what if we had had graph insights into the diffusion of sensitive products among leveraged financial organizations? What if we had been able to simulate all of this in advance? With the marriage of compute and knowledge, in the form of knowledge graphs (traditional tables don’t cut it so well), we no longer need to constrain ourselves to blunt (though still useful) statistical approximations; we can provide – and cite – context, bringing transparency about real relationships along the way. In this way, simulation can better incorporate:
- Contextual Data for Better Models – Knowledge graphs organize relationships between entities (e.g., people, organizations, events), enabling simulations to mirror real-world complexity more effectively (see the sketch after this list).
- Dynamic Scenario Generation – By linking diverse data sources, knowledge graphs help create realistic, evolving simulation environments for predictive modeling, risk assessment, and AI training.
- Causal Inference & Explainability – Unlike black-box models, simulations powered by knowledge graphs can explain why specific outcomes occur by tracing relationships and dependencies.
- Adaptive & Real-Time Simulations – Graphs support dynamic updates, allowing simulations to evolve as new information becomes available—essential for areas like financial modeling, cybersecurity, and supply chain resilience.
- Enhancing AI & Digital Twins – Knowledge graphs provide structured inputs for AI-driven simulations and digital twins, improving decision intelligence and scenario planning.
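As a minimal sketch of how a knowledge graph can seed a simulation (the entities, relationship weights and shock size below are illustrative assumptions), consider propagating a housing-market shock along weighted exposure relationships:

```python
import networkx as nx

# Illustrative knowledge graph: a regional housing market, a mortgage pool, and two lenders
kg = nx.DiGraph()
kg.add_edge("HousingMarket:Midwest", "MortgagePool:A", relation="underpins", weight=0.8)
kg.add_edge("MortgagePool:A", "Bank:Alpha", relation="exposure", weight=0.6)
kg.add_edge("MortgagePool:A", "Bank:Beta", relation="exposure", weight=0.4)
kg.add_edge("Bank:Alpha", "Bank:Beta", relation="interbank_lending", weight=0.3)

def propagate_shock(graph, source, initial_shock, damping=0.9):
    """Propagate a stress shock along weighted relationships in the graph."""
    stress = {node: 0.0 for node in graph.nodes}
    stress[source] = initial_shock
    # Breadth-first propagation: each edge transmits a damped, weighted share of stress
    for node in nx.bfs_tree(graph, source):
        for _, neighbour, data in graph.out_edges(node, data=True):
            stress[neighbour] = max(stress[neighbour],
                                    stress[node] * data["weight"] * damping)
    return stress

# Simulate a 30% regional housing downturn and inspect the knock-on stress
for entity, level in propagate_shock(kg, "HousingMarket:Midwest", 0.30).items():
    print(f"{entity}: {level:.2%}")
```

A real implementation would draw the entities and relationships from an enterprise knowledge graph rather than hard-coding them, but the principle is the same: the structure of the graph, not a single correlation statistic, shapes how stress spreads.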
By combining the public knowledge in LLMs, your organization’s enterprise knowledge (including its wider unstructured data sources) captured in graphs, an expert user’s domain expertise, and algorithms based on Monte Carlo methods, such as Monte Carlo Tree Search, we can massively expand the breadth and depth of explored futures, leading to better risk understanding, management, and early warning systems, especially for macroeconomic, systemic and geopolitical risks.
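To close, here is a minimal, hedged sketch of Monte Carlo Tree Search applied to scenario exploration; the scenario events, depth and risk scoring are illustrative assumptions rather than a production risk engine. The idea is that the search spends more of its simulation budget on the branches of the “what if” tree that look most consequential:

```python
import math
import random

class ScenarioNode:
    """A node in the tree of explored 'what if' futures."""
    def __init__(self, state, parent=None):
        self.state = state          # tuple of scenario events chosen so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_risk = 0.0       # accumulated risk score from rollouts

# Illustrative scenario model: each step picks one of a few macro events
EVENTS = ["rate_hike", "housing_downturn", "credit_freeze", "no_change"]
MAX_DEPTH = 4

def risk_score(state):
    """Toy scoring: longer chains of adverse events score as riskier."""
    return sum(0.0 if e == "no_change" else 1.0 for e in state) / MAX_DEPTH

def rollout(state):
    """Random playout to a terminal scenario, returning its risk score."""
    while len(state) < MAX_DEPTH:
        state = state + (random.choice(EVENTS),)
    return risk_score(state)

def select_child(node, c=1.4):
    """UCT rule: balance exploiting high-risk branches and exploring rarely visited ones."""
    return max(node.children,
               key=lambda ch: ch.total_risk / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(n_iterations=2_000):
    root = ScenarioNode(state=())
    for _ in range(n_iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCT
        while node.children and len(node.children) == len(EVENTS):
            node = select_child(node)
        # 2. Expansion: add one unexplored child scenario
        if len(node.state) < MAX_DEPTH:
            tried = {ch.state[-1] for ch in node.children}
            event = random.choice([e for e in EVENTS if e not in tried])
            node = ScenarioNode(node.state + (event,), parent=node)
            node.parent.children.append(node)
        # 3. Simulation: random playout from the new node
        score = rollout(node.state)
        # 4. Backpropagation: update statistics up to the root
        while node is not None:
            node.visits += 1
            node.total_risk += score
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits)

best = mcts()
print("Most-explored first-step scenario:", best.state[-1],
      "mean risk:", round(best.total_risk / best.visits, 3))
```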