This means the synthetic data a digital twin generates for training AI models can be more extensive and richer than what traditional synthetic data techniques produce.
Another potential use case for digital twins that might become more relevant this year is helping organizations understand and scale agentic AI systems. Agentic AI lets companies automate complex business processes, such as solving customer problems, creating proposals, or designing, building, and testing software. An agentic AI system can be composed of multiple data sources, tools, and AI agents, all interacting in non-deterministic ways. That can be extremely powerful, but also extremely dangerous. A digital twin can monitor the behavior of an agentic system to make sure it doesn't go off the rails, and can simulate how the system will react to novel situations before they arise.
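To make that monitoring idea concrete, here is a minimal, hypothetical Python sketch of a digital twin acting as a guardrail for an agentic system: the twin mirrors a simplified order-fulfillment process and replays each action an agent proposes against its own state before the real system executes it. The `OrderTwin` class, the guardrails, and the action format are all invented for illustration; they are not drawn from McKinsey's report or from any particular product.

```python
from dataclasses import dataclass, field


@dataclass
class OrderTwin:
    """Toy digital twin of an order-fulfillment process (hypothetical example).

    It mirrors the real system's state and replays a proposed agent action
    against that mirrored state to estimate the outcome before anything
    runs in production.
    """
    inventory: dict = field(default_factory=lambda: {"widget": 100})
    refund_budget: float = 5000.0

    def simulate(self, action: dict) -> dict:
        """Apply an action to a copy of the twin's state and report the result."""
        inventory = dict(self.inventory)
        budget = self.refund_budget
        if action["type"] == "ship":
            inventory[action["sku"]] = inventory.get(action["sku"], 0) - action["qty"]
        elif action["type"] == "refund":
            budget -= action["amount"]
        return {
            "stock_ok": all(qty >= 0 for qty in inventory.values()),
            "budget_ok": budget >= 0,
        }


def guarded_execute(twin: OrderTwin, action: dict) -> bool:
    """Run the agent's proposed action through the twin first; block it
    if the simulated outcome violates a guardrail."""
    outcome = twin.simulate(action)
    if not (outcome["stock_ok"] and outcome["budget_ok"]):
        print(f"BLOCKED: {action} would violate a constraint: {outcome}")
        return False
    print(f"APPROVED: {action}")
    # ...hand the approved action to the real system here...
    return True


if __name__ == "__main__":
    twin = OrderTwin()
    # Actions an AI agent might propose while handling customer problems.
    proposed = [
        {"type": "ship", "sku": "widget", "qty": 10},
        {"type": "refund", "amount": 7500.0},  # exceeds the refund budget
    ]
    for action in proposed:
        guarded_execute(twin, action)
```

In a real deployment the twin would mirror far more state and run richer simulations, but the pattern is the same: the agent proposes, the twin rehearses, and only vetted actions reach production.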
Today, 75% of large enterprises actively invest in digital twins to scale AI solutions, McKinsey said in an April report. “In the next decade, successful Fortune 1000s will run their businesses and test their boldest strategies using digital twins,” says McKinsey partner Alex Cosmas.
These digital twins will be replicas of their operations and simulations of their full value chains. As Cosmas says, they’ll be living, breathing, expanding assets that solve a growing number of problems.