In the race to harness the transformative power of generative AI, companies are betting big – but are they flying blind? As billions pour into gen AI initiatives, a stark reality emerges: enthusiasm outpaces understanding. A recent KPMG survey reveals that a staggering 78% of C-suite leaders are confident in gen AI's ROI. Confidence alone, however, is hardly an investment thesis. Most companies are still struggling to understand what gen AI can even do, much less quantify its value.

"There's a profound disconnect between gen AI's potential and our ability to measure it," warns Matt Wallace, CTO of Kamiwaza, a startup building generative AI platforms for enterprises. "We're seeing companies achieve incredible results, but struggling to quantify them. It's like we've invented teleportation, but we're still measuring its value in miles per gallon."

This disconnect is not merely an academic concern. It's a critical challenge for leaders tasked with justifying large gen AI investments to their boards. Yet the unique nature of this technology often defies conventional measurement approaches.

Why measuring gen AI's impact is so challenging

Unlike traditional IT investments with predictable returns, gen AI's impact often unfolds over months or years. This delayed realization of benefits can make it difficult to justify AI investments in the short term, even when the long-term potential is significant.

At the heart of the problem lies a glaring absence of standardization. "It's like we're trying to measure distance in a world where everyone uses different units," explains Wallace. "One company's 'productivity boost' might be another's 'cost savings.'" This lack of universally accepted metrics for measuring AI ROI makes it difficult to benchmark performance or draw meaningful comparisons across industries, or even within organizations.
Compounding this issue is the complexity of attribution. In today's interconnected business environments, isolating the impact of AI from other factors – market fluctuations, concurrent tech upgrades, or even changes in workforce dynamics – is akin to untangling a Gordian knot. "When you implement gen AI, you're not just adding a tool, you're often transforming entire processes," explains Wallace.

Further, some of the most significant benefits of gen AI resist traditional quantification. Improved decision-making, enhanced customer experiences and accelerated innovation don't always translate neatly into dollars and cents. These indirect and intangible benefits, while potentially transformative, are notoriously difficult to capture in conventional ROI calculations.

Meanwhile, the pressure to demonstrate ROI on gen AI investments continues to mount. As Wallace puts it, "We're not just measuring returns anymore. We're redefining what 'return' means in the age of AI." This shift is forcing technical leaders to rethink not just how they measure AI's impact, but how they conceptualize value creation in the digital age.

The question, then, is not just how to measure ROI, but how to develop a new framework for understanding and quantifying the multifaceted impact of AI on business operations, innovation and competitive positioning. The answer may well redefine not just how we value AI, but how we understand business value itself in the age of artificial intelligence.

Summary table: Challenges in measuring gen AI ROI

| Challenge | Description | Impact on measurement |
|---|---|---|
| Lack of standardized metrics | No universally accepted metrics exist for measuring gen AI ROI, making comparisons across industries and organizations difficult. | Limits cross-industry benchmarking and internal consistency. |
| Complexity of attribution | Difficult to isolate gen AI's contribution from other influencing factors, such as market conditions or other technological changes. | Introduces ambiguity in identifying gen AI's true impact. |
| Indirect and intangible benefits | Many gen AI benefits, like improved decision-making or enhanced customer experience, are hard to quantify directly in financial terms. | Complicates the creation of financial justifications for gen AI. |
| Time lag in realizing benefits | Full benefits of gen AI may take time to materialize, requiring long-term evaluation periods. | Delays meaningful ROI assessments. |
| Data quality and availability issues | Accurate ROI analysis requires comprehensive, high-quality data, which many organizations struggle to gather and maintain. | Undermines reliability of ROI measurements. |
| Rapidly evolving technology | Gen AI advances rapidly, making benchmarks and measurement approaches outdated quickly. | Increases the need for continuous recalibration. |
| Varying implementation scales | ROI can differ significantly between pilot tests and full implementations, making it difficult to extrapolate results. | Creates inconsistencies when projecting future returns. |
| Integration complexities | Gen AI implementations often require significant changes to processes and systems, making it challenging to isolate gen AI's specific impact. | Obscures direct cause-and-effect analysis. |

Key performance indicators for gen AI ROI

To navigate these challenges, organizations need a blend of quantitative and qualitative metrics that reflect both the direct and indirect impact of gen AI initiatives. "Traditional KPIs won't cut it," says Wallace. "You have to look beyond the obvious numbers."

Among the essential KPIs for gen AI are productivity gains, cost savings and time reductions – metrics that provide tangible evidence to satisfy boardrooms. Yet focusing only on these metrics can obscure the real value gen AI creates. For example, reduced error rates may not show immediate financial returns, but they prevent future losses, while higher customer satisfaction signals long-term brand loyalty.
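To make the direct KPIs concrete, here is a minimal sketch of how quantified gains might roll up into a simple ROI figure. All the figures and the formula below are illustrative assumptions, not from the article; real programs would need far more careful attribution, as the challenges above make clear.

```python
def gen_ai_roi(productivity_gain, cost_savings, time_savings_value, investment):
    """Naive ROI estimate: (total quantified benefit - investment) / investment.

    All inputs are annualized dollar values; attribution of each benefit
    to the gen AI initiative is assumed, which is the hard part in practice.
    """
    total_benefit = productivity_gain + cost_savings + time_savings_value
    return (total_benefit - investment) / investment

# Hypothetical example figures (not from any real deployment):
roi = gen_ai_roi(
    productivity_gain=250_000,   # e.g., value of hours saved by staff
    cost_savings=120_000,        # e.g., reduced tooling or support spend
    time_savings_value=80_000,   # e.g., faster time-to-market, monetized
    investment=300_000,          # total cost of the gen AI initiative
)
print(f"Estimated ROI: {roi:.0%}")  # prints "Estimated ROI: 50%"
```

Note what this deliberately leaves out: error-rate reductions, customer satisfaction and other intangibles have no line item here, which is exactly why such a formula understates gen AI's value.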
The true value of gen AI goes beyond the numbers, and companies must balance financial metrics with qualitative assessments. Improved decision-making, accelerated innovation and enhanced customer experiences often play a crucial role in determining the success of gen AI initiatives, yet these benefits don't easily fit into traditional ROI models.

Some companies are also tracking a more nuanced metric: return on data. This measures how effectively gen AI converts existing data into actionable insights. "Companies sit on massive amounts of data," Wallace notes. "The ability to turn that data into value is often where gen AI makes the biggest impact."

A balanced scorecard approach helps address this gap by giving equal weight to financial and non-financial metrics. In cases where direct measurement isn't possible, companies can develop proxy metrics – for instance, using employee engagement as an indicator of improved processes. The key is alignment: every metric, whether