Forrester

Factories, Foundries, or Refineries? Seeking The Right Metaphor For Agentic AI

Enterprise architecture (EA) professionals have mastered deterministic systems: predictable roadmaps, proven adoption patterns, and engineered outcomes. But AI obliterates this paradigm. Probabilistic computing demands fundamentally different architectural approaches that legacy EA frameworks can’t deliver. EA pros must architect for uncertainty, not predictability. They must design expertise-amplification systems, not data-processing pipelines. Organizations that architect AI around human knowledge will dominate competitors trapped in deterministic thinking.

Big Tech Foundries Are Crucial But Not Differentiators

Big tech platforms such as AWS, Google, and Microsoft offer foundries for AI development: scalable, standardized infrastructure, tools, and models. They provide enterprises a base to build upon, granting access to cloud computing, ML models, and data processing capabilities. But here’s the catch: Foundries don’t offer differentiation. The real value lies in how these tools are used, and that’s where your tacit knowledge — the intangible expertise that only your people have — comes into play.

Factories Are Antipatterns Driving Standardization Without Innovation

In the rush to deploy AI, enterprises may default to factorylike processes: repetitive, efficient, and standardized. The goal is often speed: moving quickly from prototype to production. While this mentality supports operational scale, it often deprioritizes the contextual nuance needed for innovation. The danger lies in overextending factory thinking into areas where differentiation matters, such as customer insight, decision support, or adaptive workflows.

Internal Refineries Turn Raw Data Into Strategic Assets

Like refineries that transform crude oil into high-value products, successful AI implementations require systematic refinement of raw inputs (data, models, and algorithms) through the application of tacit knowledge to deliver strategic business outcomes.
Tacit knowledge is more than just facts or tactical knowledge — it’s the deep, experience-driven expertise your team possesses. It’s the ontologies and the working language they use to solve problems, which evolve in ways AI will never be able to fully keep up with. And while tactical knowledge can be written down, tacit knowledge is lived and embodied in the human experience — it can’t be fully codified. But it can be surfaced and embedded through structured collaboration. Practices such as cross-functional annotation, domain-specific ontologies, and human-in-the-loop refinement pipelines help translate lived expertise into AI behavior.

Take customer behavior predictions: An AI model can process large datasets to predict churn rates, but it’s the tacit knowledge of your team that brings the nuanced insights needed to refine the model. Your people are alert to the emergence of new interests, cultural shifts, and ways of thinking among your customers. This is where AI becomes contextually valuable: It’s not just the facts that it processes — it’s the human expertise embedded in it.

Build Your Own Future AI Refineries

As AI evolves, your ability to build and scale internal refineries will be pivotal. Refinement is the knowledge capacity, the culture, and the practice. Beyond processing data, it’s about creating a knowledge ecosystem that allows you to continuously evolve AI solutions in alignment with your strategic goals. As you embark on your own journey of building AI refineries, it’s crucial to bring your people along for the ride. While technology enables AI innovation, the human aspect — ensuring that employees feel involved and empowered — is what makes the system sustainable. People need to be taken on the journey, encouraged to bring their tacit knowledge to the process, and shown that their insights are valued and integrated into the AI systems.
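The churn example above can be sketched as a minimal human-in-the-loop refinement pipeline. Everything here is invented for illustration (the `RefineryPipeline` class, the scoring formula, and the competitor-mention rule are not from the post or any real product); the point is simply that expert-authored rules can adjust a model’s output with signals the raw data misses.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Customer:
    months_active: int
    support_tickets: int
    # A signal a domain expert knows to watch for, e.g. the customer
    # naming a competitor in support chats -- hard to learn from usage data.
    mentioned_competitor: bool = False

def baseline_churn_score(c: Customer) -> float:
    """A stand-in for a trained model: short tenure + many tickets = risk."""
    score = 0.5 - 0.02 * c.months_active + 0.05 * c.support_tickets
    return min(max(score, 0.0), 1.0)

@dataclass
class RefineryPipeline:
    """Combines a model score with expert-authored adjustment rules."""
    rules: list[Callable[[Customer, float], float]] = field(default_factory=list)

    def add_expert_rule(self, rule: Callable[[Customer, float], float]) -> None:
        self.rules.append(rule)

    def score(self, c: Customer) -> float:
        s = baseline_churn_score(c)
        for rule in self.rules:  # tacit knowledge applied on top of the model
            s = rule(c, s)
        return min(max(s, 0.0), 1.0)

# Tacit knowledge surfaced by the retention team: customers who name a
# competitor churn far more often than usage metrics alone suggest.
def competitor_mention_rule(c: Customer, score: float) -> float:
    return score + 0.3 if c.mentioned_competitor else score

pipeline = RefineryPipeline()
pipeline.add_expert_rule(competitor_mention_rule)

quiet_veteran = Customer(months_active=24, support_tickets=1)
at_risk = Customer(months_active=24, support_tickets=1, mentioned_competitor=True)
print(pipeline.score(quiet_veteran))  # model score alone
print(pipeline.score(at_risk))        # model score plus expert adjustment
```

The design choice mirrors the post’s argument: the baseline model stays generic, while the differentiating signal lives in rules the team can add, review, and retire as their understanding of customers evolves.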
Your People Are The Critical Rare-Earth Minerals For AI Success

In the age of AI, your people and the tacit knowledge they hold are your most valuable asset. While public data and big tech’s foundries provide essential tools and infrastructure, it’s your people’s insights, experience, and judgment that turn AI from a generic model into a strategic asset. As an EA professional, your role will evolve to ensure that AI isn’t only built but refined: constantly updated and adapted by the ever-growing body of knowledge your team brings to the table. That’s where the true competitive advantage lies.

(This blog post was written in collaboration with Chiara Bragato, senior research associate.)


What To Expect At Viva Technology 2025

Paris will again become the center of innovation and technology next week, as the 2025 edition of Viva Technology takes place from June 11–14 at Paris Expo Porte de Versailles. VivaTech has become the largest technology and innovation event in Europe, with 160,000 delegates — including many C-level executives — over 13,000 startups, and 3,000-plus investors attending from all over the world. The lineup of speakers this year is again quite impressive, featuring Jensen Huang (NVIDIA’s CEO), Joe Tsai (cofounder and chairman of Alibaba), Arthur Mensch (CEO of Mistral AI), Yann LeCun (VP and chief AI scientist at Meta), Mike Krieger (Anthropic’s chief product officer), and Thomas Dohmke (CEO of GitHub), as well as the CEOs of large firms such as LVMH, L’Oréal, Sanofi, Instacart, Verizon, and many more!

I have been attending VivaTech since the beginning, and it is fascinating to see how the event has evolved since the first edition in 2016. If you read my previous blogs from 2018, 2022, or 2024, you really get a sense of the technology evolution. Only a few months after Paris hosted the AI Action Summit, there is no doubt that AI will continue to be front and center. Hopefully, the discussion will focus on the strategic value of the technology and how business leaders can help their organizations make the most of the AI opportunity. Business leadership of AI strategy is a weak spot for most organizations. According to Forrester’s State Of AI Survey, 2024:

- Fewer than a quarter of CEOs are directly responsible for the AI business strategy (just 21% in North America versus 16% in Europe).
- In more than two-thirds of firms, the technology function (CIOs/CTOs and their teams) — not the business function — is leading AI efforts across the organization (63% in North America and 71% in Europe).

Beyond leadership, there is still a huge cultural and skills gap for organizations to make the most of the latest AI developments.
I don’t expect key announcements but mostly debates on the economic, political, societal, and environmental impact of technology innovation. VivaTech is more often than not an opportunity to showcase how large groups innovate with startups and to get a glimpse of the latest technology innovations, especially in deep tech (including cybersecurity and defense tech) and climate tech. VivaTech recently released a list of the top 100 rising European startups.

Personally, I will pay closer attention to some of the climate tech players, such as 1Komma5° (electrification of buildings), UrbanChain (decentralized energy networks), and Treefera (decarbonization solutions). Interestingly, among the French Tech Next40/120 Class of 2025 that the French government just announced, 29 players are categorized in the green tech/agritech sector, including Chargemap, Electra, ElicitPlant, Energy Pool, FAIRMAT, GravitHy, Metron, TSE, and Voltalis.

Many of the startups in attendance will hope to attract new funding and reach unicorn status. Unfortunately, there is still too much hype and fascination around unicorns. Don’t get me wrong: Financing innovation matters, but it is not a panacea. While funding the next generation of green tech will help, technology innovation alone will certainly not be enough to live within the nine planetary boundaries. Low-tech solutions, frugal innovation, regulation, new business models, evolving consumer behaviors, and mindset change among C-level leaders are all even more critical.

I’ll be at the event and look forward to meeting many of you there! If you’re a Forrester client, please feel free to contact your account team to set up a meeting or arrange a conversation with me.


Sports Sponsorships Surge Despite Fuzzy ROI

Following its most-watched season in 2024, the WNBA entered the 2025 season last month with a record-setting roster of 45 sponsors. Five brands, including Ally, Booking.com, and Coach, are newcomers to the WNBA. Thanks in part to emergent leagues like the WNBA, sports sponsorships are more popular than ever. According to Forrester’s Q4 B2C Marketing CMO Pulse Survey, 2024, 39% of respondents plan to increase their investment in large-scale sports sponsorships this year, while another 28% plan to enter the space for the very first time. But just over three-quarters (76%) of US B2C marketing executives who invested in sports sponsorships in 2024 agree or strongly agree that they struggle to calculate the ROI of their sports sponsorships.

Maximizing The Value Of Sports Sponsorships Is Resource-Intensive

CMOs that make the most of their big-ticket sports sponsorships start first with strong brand, audience, and market alignment. Next, they build a flexible and collaborative partnership with the sports property from the get-go. Then, it all comes down to the quality and amplification potential of the sponsorship’s activation(s). For example, take what happened at last month’s Formula 1 drivers’ parade: F1 drivers got behind the wheel of life-sized, drivable LEGO cars — each built from nearly 400,000 LEGO bricks — to promote LEGO’s Formula 1 product line. This activation at the Miami Grand Prix kicked off a global tour across F1 races.

Introducing Forrester’s Sports Sponsorships Research Collection

We just published three interconnected sports sponsorship reports chock-full of best practices, data, examples, and frameworks/templates:

- Maximize The Value Of Your Sports Sponsorships. This report lays out 12 best practices to help make the most of your sports sponsorships — from selecting them to renewing them and everything in between.
- Case Study: AARP Builds Brand Relevance By Sponsoring The APP Tour. This case study describes how AARP’s sports sponsorship helped shift the organization’s brand perception with a limited budget.
- The Sports Sponsorship Activation Framework. This downloadable template helps identify tactics to activate before, during, and after a sponsored sports event to extend its reach using expanded storytelling.

Learn More About AARP’s Sports Sponsorship

Attending CX Summit North America? Join my sports sponsorship panel, featuring AARP and the Association of Pickleball Players (APP), on Wednesday, June 25 at 2:25 p.m. CDT (in the Marketplace). We’ll go behind the scenes of that particular sports sponsorship, diving deeper into the case study report. Speaking of pickleball … Be sure to stick around right after the panel, as we’ll head a few steps over to the onsite pickleball court to teach you how to play!

Forrester clients: Let’s chat more about your sports sponsorship strategy via a Forrester guidance session or on the pickleball court at CX Summit North America.


President Trump Amends Previous Cybersecurity Executive Orders: Here Is What You Need To Know

On Friday, June 6, President Trump issued an executive order (EO) on national cybersecurity. The order amended and struck several provisions in Executive Orders 13694 and 14144, which were issued by President Obama in 2015 and by President Biden in early 2025, respectively. The biggest changes were in the areas of software security, post-quantum cryptography, digital identity, fraud management, and AI. In some cases, Trump’s EO dropped technology specifics for certain guidelines. Back in January, Forrester detailed the key topics and technology areas in EO 14144. The Trump administration’s new EO does not revoke EO 14144 entirely, but there are changes to several provisions. Here’s what security leaders need to know.

Software Supply Chain Guidance Moves Away From Machine Attestation

The latest EO strikes sections 2(a) and 2(b) of EO 14144, whose purpose was to operationalize transparency and security in third-party software applications. These sections recommended federal acquisition contractual language requiring that software providers provide: “(A) machine-readable secure software development attestations; (B) high-level artifacts to validate those attestations; and (C) a list of the providers’ Federal Civilian Executive Branch (FCEB) agency software customers.” The sections also mandated a process for CISA to validate the attestations and artifacts and to refer companies with failed attestations to the DOJ. It’s worth noting, however, that:

The new EO does not remove all software supply chain requirements. It does not specifically repeal EO 14028 or the OMB M-23-16 update to M-22-18, “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices.” Therefore, federal agencies are presumably still on the hook to obtain a self-attestation from software suppliers and, at their discretion, require evidence in the form of an SBOM artifact.
Clarification on this point from CISA, GSA, or OMB is anticipated and necessary.

Secure software development framework (SSDF) updates are coming. The new EO retains and sets deadlines for NIST to establish an industry consortium that will provide guidance on how software providers can demonstrate implementation of the SSDF. A preliminary update to the SSDF, with practices, procedures, controls, and implementation examples for the secure and reliable development and delivery of software as well as the security of the software itself, is preserved, with a due date of December 1, 2025. In addition, NIST will update Special Publication 800-53 to add guidance on “how to securely and reliably deploy patches and updates.”

Post-Quantum Cryptography (PQC) Migration Remains A Priority, Though Some Changes Could Slow Collaboration And Adoption

While the new EO strikes subsection 4(f) from EO 14144, its amended replacement continues to recognize the threat posed by a cryptanalytically relevant quantum computer (CRQC) and upholds the transition to PQC. The amendment also introduces a fixed date of December 1, 2025, both for the release of a regularly updated CISA list of product categories that support PQC and for NSA (for NSS) and OMB (for non-NSS) to issue requirements for agencies to support TLS 1.3 or a successor version no later than January 2, 2030. Two other notable changes raise some issues, however:

PQC support requirements are no longer mandated in product solicitations. The new EO removes certain requirements, including PQC support in product solicitations and adopting PQC or hybrid KEM as soon as practicable. From a procurement and implementation perspective, removing these sections leaves much to the discretion of individual agencies and their risk appetite. This could introduce delays in governmentwide migration to PQC.

International collaboration language has been removed.
The amendment notably removes the section calling for engagement with foreign governments and industry groups in key countries to encourage transition to NIST’s standardized PQC algorithms. NIST has been a leader in developing new PQC standards, and strong international collaboration has helped accelerate that work and led many countries to adopt the NIST standards themselves. If standardized PQC algorithms are found vulnerable or broken in the future (whether by a CRQC or because of discovered flaws in the algorithms), new standards will take time to develop, and less international collaboration could slow their development and make interoperability more difficult.

Other Changes Address Protocols And Emerging Technologies

The new EO removes a lot of technology-specific language, which may allow for more flexibility in implementation. For example, EO 14144 originally mandated that the federal government “adopt proven security practices from industry” in the IAM realm and pilot deployment of the WebAuthn standard. The new EO removes those sections. It also removes the original references to BGP and its potential vulnerabilities in the internet routing section. But these technology specifics could reappear in some of the department-level guidance that the EO requires. In addition to those examples, be aware that:

Fraud and digital identity provisions have been removed. The new EO completely removes Section 5 of EO 14144, titled “Solutions to Combat Cybercrime and Fraud.” Section 5’s removal marks an intent to reduce mandates of specific security technologies that federal agencies should use for managing fraud and digital identities. The new EO also removes initiatives to use digital ID document verification for citizens accessing US federal government services.

Space system cybersecurity is still in orbit, but its trajectory is less clear.
While the latest EO preserves most cybersecurity requirements for space systems, it notably scales back mandates for space national security systems (NSSes). These systems remain critical to national infrastructure and security, yet the EO no longer requires the Committee on National Security Systems to identify specific requirements for intrusion detection, secure booting via hardware roots of trust, and patch management. Instead, it tasks the committee with identifying requirements for cyber defenses broadly. Space cybersecurity is an evolving domain in which defense and civilian operators alike are actively seeking government-backed standards to make it easier to cost-effectively maintain space assets. Removing this language may offer more leeway to address broader requirements, but space NSS operators and government agencies will still need to account for the removed components in their existing procurement and system-lifecycle requirements.

AI provisions include a stronger focus on AI


Graphic IT Management Requires Clear Language

Every week, I talk with IT leaders grappling with an all-too-familiar challenge: managing an increasingly complex IT portfolio while avoiding the dead ends of outdated conceptual models. One debate that refuses to die? The endless squabble over applications vs. services.

Let’s be blunt: History has passed by the ITIL-era advocacy of “service portfolios.” At the time, ITIL consultants were fond of coming into IT shops and saying things like, “The business doesn’t want an application! It wants a service!” Now, some in the agile and product management communities similarly argue that the application portfolio is an outdated construct that should be replaced with the product portfolio. But these arguments overlook just how entrenched the concept of “application” is (and no, it doesn’t just equate to a vendor-supplied technology). In practice, application portfolio management is here to stay, because “application” is a well-understood term that anchors IT investment, governance, operational practices, and executive-level decision-making. Instead of fighting over labels, organizations should establish a structured ontology: a shared understanding of how applications, platforms, technologies, and assets interact.

Portfolio Rationalization: Beyond Simple Inventories

IT portfolio rationalization isn’t new, but the stakes are higher than ever, and the enabling technologies are making a quantum leap. Organizations aren’t just trying to streamline costs or reduce redundancy; they need an integrated model that maps relationships across applications, platforms, and supporting technologies. Our latest research outlines four key domains that organizations must inventory and manage:

- Applications: the running and operational software systems that directly support business capabilities, often also mapped to enterprise processes.
- Platforms: internal technology environments that enable the creation or delivery of applications or other platforms (see A Simple Definition Of “Platform”).
- Technologies: the software and hardware components, frameworks, and vendor products used to build applications and platforms, tracked at a product category (SKU) or type level and often controlled via technology lifecycle management.
- Assets: the individual instances of hardware, virtual machines, and licensed software deployed within the IT environment.

This structure provides clarity without forcing unnatural terminology shifts. Trying to rebrand “application portfolio” as “product portfolio” is an uphill battle when senior executives already think in terms of applications. Similarly, the ITIL-era service terminology never translated well into practical management.

Moving Past The Application Vs. Service Debate

The application vs. service debate is more about legacy frameworks than actual IT needs. ITIL positioned services as the fundamental unit of IT management, but in practice, most organizations never fully operationalized service portfolios beyond a catalog of requestable things. (And such catalogs are surprisingly decoupled from the operational portfolio in real-world practice, due to their internal marketing aspects. They evolve in response to usage patterns — e.g., how people search for and find the services they need — and increasingly are accessed via chatbots, not portals.) Meanwhile, “service” has taken on new meanings, especially in the context of cloud computing, where microservices, APIs, and SaaS offerings dominate.

At the same time, applications have persisted as the primary management construct. The term “application” has broad industry use in large IT shops. I get calls every week centering on the concept. This is why enterprise architecture, strategic planning, IT finance and FinOps, and even ITSM products still organize portfolios around applications. The word product may make sense in an agile context, but outside of software companies, “product portfolio management” just isn’t how enterprises talk about large-scale IT investments today.
The solution isn’t to keep relitigating terminology. It’s to adopt a clear, structured model that integrates applications, platforms, and supporting technologies into a coherent IT graph.

Graph-Based IT Management: The Next Evolution

We’re seeing a fundamental shift toward graph-based IT management, where relationships — not just lists — define how organizations understand and optimize their technology landscapes. Major vendors such as ServiceNow and Atlassian are embedding graph models into their platforms, reflecting the reality that IT organizations need dynamic, interconnected knowledge bases rather than static inventories. The goal isn’t just to track applications or services — it’s to understand dependencies, impact, and lifecycle interactions across the entire IT ecosystem. Furthermore, we need to be able to easily integrate such graphs and transcend the limitations of shipping RDBMS tables around via ETL. For IT leaders, this means shifting focus from static portfolio reports to ontology-driven IT management. Organizations should define and maintain a unified graph of IT knowledge, mapping how applications, platforms, and technologies interact to support business capabilities.

The Road Forward: IT Leaders Must Take Control

To prepare for this shift, IT organizations must:

- Define an enterprisewide IT ontology. Establish clear relationships between applications, platforms, technologies, and assets.
- Avoid unnecessary terminology debates. Use labels that align with executive and operational understanding. How are people actually talking?
- Adopt graph-based portfolio management. Move beyond traditional application performance management approaches by mapping dependencies and lifecycle impacts. The shift toward AI-powered, knowledge-graph-driven IT management is already happening.
- Govern IT portfolios as interconnected systems. IT isn’t just a collection of independent applications. Dependencies between software, infrastructure, and business processes must be explicitly modeled and continuously updated.

Conclusion: IT Management Must Evolve

To quote Shakespeare, “A rose by any other name would smell as sweet.” If you have broad consensus on and day-to-day operational usage of a structured set of terms (whether centered on services, applications, or products), great! The real opportunity is in ontology-driven IT portfolio management, supported by graph-based approaches that integrate applications, platforms, and supporting technologies (by whatever labels) into a real-time, AI-powered IT knowledge model. It’s time to stop arguing about applications versus services and start building the structured, graph-enabled IT portfolios that will define the future of IT management. Forrester clients can read the full report to build a stronger foundation for IT portfolio management.

Personal note: I identified the four-lifecycle model in 2012 in the second edition of my book, “Architecture & Patterns for IT.” Hundreds of conversations with IT leaders, enterprise architects, tools vendors, and portfolio managers since then have validated that most large IT shops tend to converge to it, with minor differences in nomenclature. It may seem
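The four-domain ontology and dependency traversal described in this post can be sketched as a toy graph. The node names and edge set below are invented for illustration (no vendor tool or schema is implied); the point is that once application-to-platform-to-technology-to-asset relationships are modeled as edges rather than rows in separate inventories, impact analysis becomes a simple traversal.

```python
from collections import defaultdict

# Toy IT knowledge graph over the four portfolio domains.
# Edges point from a thing to what it depends on:
#   application -> platform -> technology -> asset
EDGES = [
    ("app:claims-portal", "platform:internal-paas"),
    ("app:billing", "platform:internal-paas"),
    ("platform:internal-paas", "tech:kubernetes"),
    ("tech:kubernetes", "asset:vm-cluster-07"),
]

depends_on = defaultdict(set)   # forward edges: what a node needs
required_by = defaultdict(set)  # reverse edges: what needs a node
for src, dst in EDGES:
    depends_on[src].add(dst)
    required_by[dst].add(src)

def impact_of(node: str) -> set[str]:
    """Everything upstream that is (transitively) affected if `node` fails."""
    impacted, stack = set(), [node]
    while stack:
        for parent in required_by[stack.pop()]:
            if parent not in impacted:
                impacted.add(parent)
                stack.append(parent)
    return impacted

# Retiring the VM cluster impacts the technology, the platform,
# and both applications built on them.
print(sorted(impact_of("asset:vm-cluster-07")))
```

This is exactly the kind of query that is awkward against disconnected RDBMS inventories but trivial once the portfolio is held as one graph.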


FinOps X Recap — AI, Scopes, And FOCUS 1.2

We just spent an amazing two days at FinOps X, the annual conference hosted by the FinOps Foundation. This year’s conference was the largest ever at 2,000 attendees, a 20% increase from 2024. Some of the biggest enterprises were in attendance — American Express, Electronic Arts, John Deere, Koch Industries, Lockheed Martin, Meta, MGM Resorts, PepsiCo, and Starbucks — along with the major cloud providers: AWS, Google Cloud, and Microsoft Azure. This underscored the fact that FinOps is not just a math exercise to zero; it’s a business-critical process for achieving organizational growth and transformation. New themes were introduced this year, including FinOps for AI, scopes and the new FinOps framework, integration with IT asset management (ITAM), and on-premises FinOps.

FinOps For AI Is Nascent But Growing

AI cost management was a dominant theme at FinOps X, with nearly a quarter of the 73 keynotes, breakouts, and chalk talks focused on the topic. This is unsurprising given the surge of interest in standing up generative AI (genAI) use cases and the significant compute, storage, and database resources required to support them. Managing genAI costs also introduces new levers such as model selection, training, inferencing, token usage, and caching, while adding layers of complexity due to AI’s probabilistic nature. Today, average AI spend remains low, but this will change. In the next two to three years, spend will skyrocket as production-ready use cases scale. Without a dedicated AI cost practice, FinOps teams will get hit by a freight train. Most FinOps practices are still building the necessary skills and instead rely on close collaboration with data teams or business units that are managing costs independently.

On-Prem Management Is Here

The scope of FinOps has expanded, and this is reflected in the new FinOps Foundation framework, which added “cloud and technology” to its definition.
This change acknowledges the reality that FinOps practices now have a broader mandate to cover on-prem and SaaS costs. Since late 2024, most of my client conversations have centered on this very theme: how to extend FinOps practices to on-prem, SaaS, and even (surprisingly) labor costs! Traditionally, on-prem was seen as outside the FinOps scope due to its sunk capex nature and infrequent refresh cycles. But that view is changing. More frequent data gathering and insight cycles (monthly or quarterly versus annually) are informing smarter refresh planning. And there is a very real-time component: power and heat. Organizations are scheduling workload runtimes during lower-priced periods, like off-peak hours. Workload placement, and even rack placement and size, affect cooling decisions.

But why the sudden change? The ROI of FinOps is now proven. When done well, organizations can see significant savings and cost avoidance. FinOps teams can tie engineering decisions to cloud spend and maximize business value. Now, executives are seeking similar returns across all areas of IT spend. Subsequently, FinOps and IT financial management teams are increasingly collaborating — often reporting to the same leader — and raising new questions. If FinOps works for cloud, why not on-prem? And why couldn’t FinOps handle all of IT cost reporting? In many ways, FinOps has become a victim of its own success.

FOCUS 1.2 Is Released With SaaS And PaaS Support

The latest release introduces support for SaaS and platform as a service (PaaS), a more unified view of “Cloud+” spend, and enhanced allocation capabilities. SaaS and PaaS billing data can now be folded into the same schema as cloud spend, enabling more centralized cost visibility and management. Key improvements in allocation and currency normalization are also included. FOCUS 1.2 adds a new invoice ID column, directly linking each row to a vendor invoice, which enables easier cost allocation.
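As a rough sketch of why a unified schema matters for allocation: once cloud, SaaS, and vendor-unit charges share one row format carrying an invoice ID, cost can be normalized to cash and rolled up by any allocation key. All field names, rates, and rows below are invented for illustration; they are modeled loosely on the FOCUS idea but are not the FOCUS 1.2 column definitions.

```python
# Assumed contract conversion rates for vendor-defined units (illustrative).
VIRTUAL_UNIT_RATES_USD = {"token": 0.000002, "DBU": 0.55}

# One schema for cloud, SaaS, and virtual-currency charges; every row
# carries an invoice ID so allocations can be traced back to an invoice.
rows = [
    {"invoice_id": "INV-001", "team": "payments", "billed_cost_usd": 1200.0, "unit": None, "quantity": None},
    {"invoice_id": "INV-002", "team": "payments", "billed_cost_usd": None, "unit": "DBU", "quantity": 200},
    {"invoice_id": "INV-003", "team": "data-sci", "billed_cost_usd": None, "unit": "token", "quantity": 50_000_000},
]

def cash_cost_usd(row: dict) -> float:
    """Normalize a row to cash: use billed cost if present, else convert units."""
    if row["billed_cost_usd"] is not None:
        return row["billed_cost_usd"]
    return row["quantity"] * VIRTUAL_UNIT_RATES_USD[row["unit"]]

def allocate_by_team(rows: list[dict]) -> dict[str, float]:
    """Roll normalized cash cost up by an allocation key (here, team)."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["team"]] = totals.get(row["team"], 0.0) + cash_cost_usd(row)
    return totals

print(allocate_by_team(rows))
```

The same roll-up works for any allocation key (cost center, product, environment) precisely because every charge type has been normalized into one cash-valued schema first.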
Virtual currency support — including vendor-defined units of cost such as “token,” “credit,” or “DBU” (Databricks unit) — has been introduced. This is a critical advancement: The ability to normalize charge units into a common cash value brings organizations significantly closer to achieving unified cost management and allocation across all IT spend categories.

Final Thoughts

FinOps teams have a new mandate: managing and optimizing the total cost of IT. This expansion was reflected in the major themes at X: from the expanded FinOps Foundation framework that encompasses technologies beyond public cloud, to the latest FOCUS enhancements supporting SaaS and PaaS, to public endorsement of ITAM integration, and finally to the rising urgency around AI cost management. The original mission to manage and maximize the value of cloud spend has been proven and, in many organizations, successfully delivered. But the road ahead will demand stricter adherence to key FinOps tenets: cross-functional collaboration, individual accountability, and timely, accessible financial data. As the scope of FinOps broadens, so too does its strategic importance.


FinOps In Government: Why It’s A Different Ballgame

The US government is no stranger to the cloud. In fact, the federal government was among the first adopters of public cloud, going back to the first federal CIO, Vivek Kundra, who mandated a cloud-first approach to IT as part of the 2012 budget process. Although his push to adopt cloud was instrumental in transforming government IT, Kundra mistakenly assumed that it would save billions of dollars. The early promise and hyperscaler marketing hype of cloud as a cost-savings mechanism has been disproven. Despite this, the cloud is a transformation and innovation accelerator and a necessary part of modern IT strategies. The rush to the cloud, particularly following the pandemic years, led to an alarming spike in costs that required attention. Enter FinOps. The practice has existed for almost a decade but has only recently been adopted at scale — especially in the US government.

Aligning Cultural Practice With Political Pressure

Contrary to popular belief, government leaders have always been mindful of cost optimization. The Federal Acquisition Regulation is designed to prevent malfeasance, abuse, and waste, but there are gaps. In 2019, the US Government Accountability Office discovered that of the 16 agencies reviewed, roughly one-third had inconsistent reporting on cloud investments, even though most agencies had seen a 10-point or greater increase in spending. These agencies had saved $291 million but had also identified issues in reporting and tracking cloud spend and savings, due in part to a lack of consistent processes. Today’s cost-cutting pressure and implementation of mass-scale efficiency, particularly with the Department of Government Efficiency (DOGE), is not new to IT leaders. FinOps practices and investment value maximization were already in play; DOGE just accelerated that mission. What has changed is the increased scrutiny of cloud spending decisions.
Cloud procurement has traditionally been a safe haven, as public cloud contracts envelop not just the acquisition of cloud services but also the SaaS solutions that can be purchased through provider marketplaces. That shelter has eroded as DOGE increased scrutiny of SaaS spend. Still, FinOps practices are rising to this pressure. Until recently, FinOps practice scopes did not encompass SaaS spend or any other costs outside public cloud infrastructure and services; this has changed. The FinOps Foundation's updated framework now speaks not just of "cloud" but of "technology" spend. Increased scrutiny of all tech spend will place a greater onus on FinOps teams to deliver cost optimization outside of the public cloud.

Overcoming The Structural Challenges Of Budgeting And Procurement

Federal acquisition lifecycles are unlike traditional enterprise processes. In the public space, government entities typically drive procurement. Flexibility in spend and methods is highly limited, as budgets are generally based on earmarked money. In other words, public procurement budgets are more likely to be allocated in advance and are therefore harder to redistribute. In the federal space, the Antideficiency Act (ADA) presents a huge obstacle to cloud purchases. Agencies are required to obtain appropriations and declare their entire projected cloud consumption in advance. The ADA also prohibits obligating or spending money before it is appropriated by Congress and does not allow redirecting funds to a purpose other than the one declared. This leaves little flexibility, if any, for managing spend anomalies. It also means that overcommitment is a common occurrence.

Securing Savings Through Commitment And Consolidation

To combat this, government FinOps teams employ tactics such as commitment-based discounts (e.g., reserved instances, reservations, savings plans, and committed-use discounts).
Cloud cost management tools are also in play, though the common burn-down tactic of charging penalties on overages is not allowed under these contracts, which limits tooling options. Some agencies struggle to benefit from the cost levers available to commercial-side teams. While the federal government is a big IT spender, its buying is typically federated, and individual program management offices (PMOs) rarely have the negotiating power to move the needle in big tech negotiations. Without the requisite purchase volume or procurement flexibility, substantial committed-use discounts are a pipe dream. One proven model that works (for agencies with the executive buy-in to pull it off) is consolidated procurement. The DoD's Joint Warfighting Cloud Capability vehicle is a leading example of government cloud. By aggregating requirements across multiple branches of the military, the DoD was able to negotiate discounts and favorable terms that would likely not be available to individual PMO buyers. And like other verticals, government teams should also implement the key tenets of FinOps: individual accountability (though showback is more common, since chargeback is easily resisted), cross-functional collaboration (though standard procedures must be followed versus organic watercooler conversations), timely and accurate decision-making, and cost optimization. For more detail on implementation and execution, use the Forrester Solution Blueprint, Optimize Your Cloud Costs With FinOps.
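The commitment-based discounts mentioned above come down to simple break-even arithmetic: A commitment pays off only above a certain utilization level. Here is a minimal sketch of that calculation; the hourly rates and the function name are illustrative assumptions, not real provider pricing.

```python
# Break-even analysis for commitment-based discounts.
# The rates below are illustrative only; actual pricing varies by
# provider, region, instance type, and contract terms.

def breakeven_utilization(on_demand_hourly: float, committed_hourly: float) -> float:
    """Fraction of hours a resource must run for a commitment to pay off.

    A commitment bills you the committed rate for every hour, used or not,
    so it beats on-demand only when utilization exceeds this ratio.
    """
    return committed_hourly / on_demand_hourly

# Hypothetical instance: $0.40/hr on demand vs. $0.25/hr with a one-year commitment.
util = breakeven_utilization(0.40, 0.25)
print(f"Commitment pays off above {util:.1%} utilization")
# prints: Commitment pays off above 62.5% utilization
```

For agencies that must declare projected consumption in advance under the ADA, this kind of break-even check is one way to size commitments conservatively and reduce the risk of overcommitment.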


Your Emerging Technology Questions Answered

Back in May, we held a webinar unveiling our top 10 emerging technologies for 2025. We had a very strong turnout for the interactive event and received a lot of insightful questions from attendees. In fact, we received so many questions that I couldn't answer them all during the live event. So I took some time to draft responses to those we didn't get to during the webinar and have posted them here for everyone to read. And by the way, if you'd still like to watch the webinar, it's not too late: The recorded version is available on demand. Now, on to the questions.

Would you consider a humanoid robot a sophisticated "physical" application of generative AI models?

That's an interesting thought. Humanoid robots are definitely starting to use language models, but I would not call them an application of language models. Interaction in natural language has been a long-standing requirement and a difficult one for earlier designs to meet. Humanoid robots are using more of the emerging reasoning capabilities in advanced foundational models to help them physically respond to external stimuli, but this advancement is new. Check out Gemini Robotics.

As more processes are automated by AI, do you think that there will be too many humans for the number of jobs available in five years?

Forrester tracks these trends closely, and my colleagues Michael O'Grady and J. P. Gownder will have an updated forecast on this very soon to reflect the latest changes. In general, we think that there will be some net job loss due to AI, but we also think that AI will create many new jobs, roles, and even entirely new industries. There are many implications for companies and workforces based on these trends, but perhaps the biggest one is that every worker and every company needs to improve their AI quotient (AIQ), which measures how ready they are for AI.

What do you see as the most common pitfalls of synthetic data for digital health?
Overall, the most common pitfall is expecting synthetic data to do something it was not designed to do, and that applies to digital health as much as to any other use case. Producing the right synthetic data set for each of the use cases I covered in the webinar is as much art as science, often requiring many iterations to ensure that the data set can accomplish the expected goals. Synthetic data used to de-bias a predictive model's training data, for example, will be quite different from the knowledge-distillation data used to train smaller versions of generative models.

How does the projected timeline for quantum security to reach its benefit horizon compare to how fast quantum is progressing?

We expect quantum computers to have roughly a 33% chance of breaking today's PKI encryption by 2035, based on the latest consensus of quantum computing researchers. By implementing all the elements of quantum security over the next two to five years in a phased rollout, enterprises will have a high degree of protection, provided no engineering breakthrough accelerates the quantum timeline. As a side benefit, the cryptographic agility you gain from quantum security will also improve your entire security posture, because you will be equipped to replace crypto that is vulnerable to good old-fashioned hacking.

What are the biggest pain points and benefits of agentic AI use by small and medium-sized enterprises?

In business overall, agentic AI is most used for employee support use cases due to limited trust; looking up data from multiple sources and synthesizing it into a useful answer is the most common task. TuringBots are also using more agentic AI to automate the software development lifecycle.
In consumer businesses, startups such as Genspark are offering agentic systems that can plan travel, book restaurants, and even build websites, but these are early and mostly "toys." Marketers are experimenting with agents that can offer personalized shopping recommendations or build rich content messages. The common thread across these use cases is that they are low-risk but also of moderate to low benefit. It is going to take time to develop the trust and security infrastructure needed for critical business-process steps and decisions or high-impact customer interactions.

When it comes to agentic AI, trust is key. How can we sufficiently test agentic AI to trust it in production?

The simplest answer is that there is no known way to align today's foundational models to ensure that they always do what we want them to. This is called the alignment problem, and it is a grand challenge in all of AI. The key question for most enterprises is: How good is good enough? If you must be right 100% of the time, an agentic system won't get you there. But you probably aren't 100% correct and reliable today with any software system you use, and human employees are certainly prone to errors as well. Since you already know how to deal with errors created by software and human workers, the challenge is selecting use cases, defining error thresholds, and testing agentic AI systems against those thresholds so that you can live with (and trust) the agents in specific contexts.

How do you see agentic AI and the agentic web emerging in the future?

The "agentic web" is a term being floated to describe a world where humans interacting with businesses (and each other) on the web are replaced by AI agents interacting with each other on behalf of those humans and businesses to carry out tasks such as searching for information and completing transactions. Today, this is more concept than reality.
As envisioned, the agentic web will be more open and fragmented, likely disrupting the entire search and e-commerce industries. But to realize this vision, we need to develop and implement the right standards and do more foundational work. Some of this has started with protocols such as Agent2Agent (A2A) and the Model Context Protocol (MCP), but much work remains.
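The error-threshold approach to trusting agents can be made concrete with a small acceptance-testing harness. This is a hedged sketch: `toy_agent`, the test cases, and the 5% threshold are illustrative assumptions, not a Forrester-prescribed standard, and real agent evaluation would compare outputs more loosely than exact string matching.

```python
# Sketch of threshold-based acceptance testing for an agentic system.
# The agent, cases, and threshold below are hypothetical illustrations.
from typing import Callable

def acceptance_rate(agent: Callable[[str], str],
                    cases: list[tuple[str, str]]) -> float:
    """Fraction of test cases where the agent's output matches the expectation."""
    passed = sum(1 for prompt, expected in cases if agent(prompt) == expected)
    return passed / len(cases)

def meets_threshold(agent: Callable[[str], str],
                    cases: list[tuple[str, str]],
                    max_error_rate: float = 0.05) -> bool:
    """Gate deployment: approve only if the observed error rate stays in bounds."""
    return (1 - acceptance_rate(agent, cases)) <= max_error_rate

# Toy "agent" that uppercases its input; the third case is designed to fail.
toy_agent = lambda prompt: prompt.upper()
cases = [("ok", "OK"), ("go", "GO"), ("no", "no")]

print(meets_threshold(toy_agent, cases))  # error rate 1/3 exceeds 5%: prints False
```

The point of the sketch is the shape of the decision, not the scoring function: You pick a use case, define how much error you can live with in that context, and measure the agent against that bar before trusting it in production.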


Low Survey Response Rates: Are You Asking the Wrong Question?

I recently binged season four of the HBO series "True Detective." In it, Jodie Foster plays detective Liz Danvers, who investigates the disappearance of eight men from an Alaskan research facility. Over the course of the investigation, she mentors a junior officer as they work through the evidence. One element that stuck with me is her repeatedly telling him, "You're asking the wrong question." For example (spoiler alert), instead of asking "Who killed them?" ask "Why were they outside in the middle of a December night?" This redirect opened them up to new perspectives for exploring the crime. This story comes to mind when clients ask how to improve survey response rates. It's a valid question: When too few customers respond, the validity, credibility, and actionability of your research are compromised. But it's not the right question to start the discussion. We should begin by determining whether your study even requires a higher response rate.

Not All Surveys Require Generalizability

When your goal is to make inferences about your entire customer base, such as overall satisfaction or the impact of a digital transformation, generalizability is essential. On the other hand, projects focused on exploratory research, service recovery efforts, or studies of small, homogeneous groups may not require statistically representative samples. Results will not be generalized to your entire customer base or a key segment, so response rates matter less. To determine whether your survey project is at risk of validity issues, start by asking key questions about your research goals. Will results be:

Part of an executive scorecard and tracked regularly?
Used for benchmarking?
Part of advanced (predictive) analytics work?

If the answer to any of these is yes, you'll likely need a representative sample and a solid plan to achieve those goals.
If the results will be used for service recovery, pilot testing, experimentation, or exploration, then representativeness, and higher response rates, are less critical.

Overcome Response Rate Challenges

Low response rates often stem from two main respondent issues: a lack of perceived value and a poor survey experience. To address these, implement three key strategies:

Build relevance. Build a communication plan for both internal and external audiences to explain the purpose of the survey, how it is relevant, and how the results will be used.

Improve the survey experience. Stick to your stated project goals, hone your survey structure and content, and ensure that it aligns with your brand promise. After all, your survey is an important touchpoint, as my colleague Maxie Schmidt points out in the Forrester CX Cast podcast episode, Feedback Is A Touchpoint, Too.

Optimize survey administration. Pair your survey with an effective distribution plan. Consider aspects such as timing, channel, audience alignment, and personalization. Monitor performance and adjust accordingly.

By taking a strategic, data-driven approach to survey design and execution, you can improve response rates, enhance the credibility of your insights, and ultimately drive more meaningful action from your customer feedback programs.
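When a study does require a representative sample, "how many responses do I need?" has a standard answer. This is a minimal sketch using the classic sample-size formula for a proportion with a finite-population correction; the 95% confidence level (z = 1.96) and worst-case p = 0.5 are conventional assumptions, and the customer-base size in the example is made up.

```python
import math

def required_sample_size(population: int, margin: float = 0.05,
                         z: float = 1.96, p: float = 0.5) -> int:
    """Completed responses needed for a proportion estimate within +/- margin.

    Uses Cochran's formula, then applies a finite-population correction.
    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative (largest-sample) assumption.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# Example: a hypothetical customer base of 10,000, +/-5% margin at 95% confidence.
print(required_sample_size(10_000))  # -> 370
```

Working backward from that number through your expected response rate tells you how many invitations to send, which is a more useful planning question than chasing a higher response rate for its own sake.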


Step Up To Deliver The Total Experience

We’ve just closed the curtain on another successful CX Summit EMEA. Each year, I come away feeling energized, inspired, and profoundly grateful. I learn so much from our attendees and the many conversations throughout the event, and I’m perpetually awed by the dedication and coordination it takes to pull this off. So to everyone involved: Thank you! This year, we previewed a new Forrester concept that we’ve named the total experience. Total experience encompasses both brand and customer experience (CX), and it shapes the perception that prospects and customers form based on their cumulative interactions with a brand over time. Because brand equity and CX have a compounding effect, companies that improve both together see a significant revenue uplift. We’ll soon be releasing new research on the total experience, and we’ll unveil our new Total Experience Score at CX Summit North America in Nashville later this month. Brand and CX are two sides of the same coin. Delivering on the total experience promise requires CX, marketing, and digital teams to align tightly, something that we at Forrester have been talking about for some time.

The Time To Transform Experiences Is Now

You might think that it’s the wrong time to challenge organizations to unite and reimagine their brand and customer experience, given the deeply uncertain business climate. But the total experience concept is intentionally bold. CX quality is down across regions and sectors, and too many brands have been fixated on their CX scores rather than the actual experiences those scores are supposed to reflect. Experience needs a reset, and it’s time to go big. Laying the foundation for the total experience takes a collective mindset shift and new forms of collaboration.
In a keynote earlier today, a few of my colleagues from across Forrester’s CX, marketing, and digital research teams described the work required to replace silos between prospect-focused and customer-focused teams with a shared focus on the end-to-end experience. Building a unified total experience will also take sharing customer insights and adopting new ways of measuring success that support an integrated, continuous experience focus.

Humans + AI Must Work In Lockstep

The technology needed to power standout experiences is here, and it keeps getting better. As my colleague Aurélie L’Hostis recently wrote, advancing and emerging technologies are accelerating the transformation of digital experiences and reshaping how firms interact with consumers and deliver value. AI-powered interfaces such as chatbots and virtual agents will soon actively observe, learn, and communicate with consumers. Over time, experiences will become more intuitive and empowering, helping companies deliver on the total experience promise. But for this to go right, bolstering the human side of the equation is paramount. At CX Summit, we highlighted the importance of:

Sharpening AI skills. Success with AI depends on human capabilities. Leaders need to understand their teams’ AI readiness and foster a culture of learning around AI, providing training, encouraging skill sharing, and ensuring ethical use.

Gaining trust. The degree to which your customers trust your AI will make or break your AI initiatives, and the degree to which they trust your brand will determine how much data they’re willing to share for personalized experiences. Understand what drives consumer trust in AI and embrace trustworthy practices.

Shoring up CX fundamentals. As you plan and build innovative new experiences, you can’t lose sight of what customers are trying to accomplish with your company.
Journey mapping remains essential to understanding the steps your customers take and removing the points of friction they may be experiencing.

Where Will Your Total Experience Lead You?

If you’re a Forrester client, be sure to look out for our total experience research in the coming weeks. Whether or not you’re a client, we’d love to partner with you to build and deliver a total experience that’s authentic to your organization. Thank you again to all who made this year’s CX Summit EMEA a success. See you next year.
