Forrester

Salesforce Accelerates Its Path To Agentic AI

Salesforce, a leader in CRM applications, has announced its intent to acquire Informatica — a global leader in enterprise data management — for approximately $8 billion. Informatica fuels Salesforce’s hard pivot to reinvent itself as an AI-first company and move away from its CRM roots. Informatica also strengthens Salesforce’s data foundation, a critical requirement for scaling Agentforce, Salesforce’s agentic AI capabilities, which have experienced slower-than-expected market traction since their release.

Informatica + Salesforce = Faster AI Transition

Realizing the full promise of AI requires a tightly integrated approach to data and intelligence. Informatica enhances Salesforce’s strong architecture by adding deep expertise in data fabric, integration, data pipelines, governance, quality, and master data management to Salesforce’s Data Cloud, MuleSoft, and Tableau. Benefits include:

- A complete data management suite. Informatica fills a gap in Salesforce’s data management capabilities, enhancing Salesforce’s position across all core pillars of modern data management, including cloud-native data integration, ingestion, pipelines, master data management (MDM), metadata management, transformation, preparation, quality, and governance.
- Expanded customer data fabric for end-to-end AI and analytics. This acquisition significantly advances Salesforce’s vision of a unified customer data fabric. With real-time data integration from a vast array of sources — structured and unstructured, internal and external — Salesforce is now positioned to deliver a truly end-to-end customer data fabric for AI and analytics. This capability is essential to driving hyperpersonalized, high-impact customer engagement at scale.
- Support for real-time, cross-cloud data capabilities at scale. Informatica’s robust support for real-time, cross-cloud data access and analytics empowers businesses to derive insights from both Salesforce and non-Salesforce data sources across cloud, hybrid, and on-premises environments.
- A data foundation for agentic AI. Informatica’s advanced capabilities in data integration, metadata cataloging, governance, data quality, and MDM are essential to power intelligent agents that can autonomously interpret and act on enterprise data. By combining Informatica with Salesforce’s Data Cloud, MuleSoft, and Tableau, the unified platform becomes fully agent-ready.
- Enhanced AI governance for responsible innovation. Informatica is at the forefront of developing advanced AI governance frameworks, building on its strong foundation in MDM and data governance. Its capabilities extend beyond preparing data for AI — introducing tools to govern AI systems with a focus on ethical practices, transparency, and accountability. By leveraging Informatica’s expertise, Salesforce is positioned to deliver agentic AI solutions that customers of both companies can deploy confidently and responsibly at scale.

A Win-Win For All Customers

The acquisition of Informatica by Salesforce delivers transformative value for customers of both companies, accelerating the journey toward intelligent, data-driven enterprises. For Salesforce customers, the acquisition marks a game-changing leap forward. They will now be able to seamlessly access and unify customer data from across Salesforce and third-party systems — in real time. This breakthrough enables the creation of a truly unified customer data fabric, empowering businesses with actionable insights across every channel and touchpoint.
Most importantly, it supercharges Salesforce’s ability to deliver agentic AI: low-code, low-maintenance AI solutions that reduce complexity, accelerate deployment, and scale impact. With a fully integrated data management backbone, Salesforce customers can expect faster innovation cycles, deeper personalization, and smarter customer engagement at enterprise scale.

For Informatica customers, the opportunity is equally compelling, as they gain accelerated access to next-generation AI capabilities underpinned by the power and scale of Salesforce’s platform. As the industry moves toward agentic AI, intelligent agents will automate core data operations — from ingestion and integration to pipeline orchestration and governance. What once required days or weeks of manual effort will now be handled autonomously, enabling faster time to insight and greater business agility. With a unified data, AI, and analytics stack, Informatica customers will unlock faster innovation, stronger competitive advantage, and greater return on their data assets.

Despite the promise of this combination, effective execution remains an open question. Salesforce will have to work through overlapping capabilities with MuleSoft, integrate Informatica’s products and teams within Salesforce, and preserve relationships with Informatica customers who are not using Salesforce. For more insights, Forrester clients can book time with us via an inquiry or guidance session. source


What Sets High-Performing Consumer Insights Teams Apart

Consumer insights (CI), market research, consumer intelligence — all different names that typically represent the same thing: teams whose remit is to understand and represent their brand’s consumer. Sure, it’s simple to understand, but it’s difficult to execute. We find that while most B2C companies have some insights or research function (98%!), dissatisfaction with CI teams is high: 57% of B2C marketing decision-makers agree that the CI team takes too long to deliver the insights they need, and 54% agree that the insights the team delivers are not actionable.

Successful Consumer Insights Teams Are Strategic But Also Fast

How do successful CI teams behave? Speed and strategy are the mandates. High-performing CI teams are evolving from traditional research roles into strategic partners embedded in the business. In fact, of the CI teams we interviewed, many service not only marketing functions but also growth or strategy functions. These teams are expected to deliver insights that are:

- Fast. Quick-turn research is now the baseline.
- Outcome-oriented. Projects must tie directly to business decisions.
- Strategic. CI pros must act as both researchers and strategists.

There’s No “One Size Fits All” CI Team Structure

I often get asked about how CI teams should be structured. The answer? One size doesn’t fit all. But the common themes of what influences a CI team’s structure do, indeed, apply to all: the company’s portfolio, size, maturity, and data culture. In my new report, The State Of Consumer Insights Teams, 2025, I address what influences a CI team’s structure, their roles and responsibilities, and needed skill sets, as well as what’s important to their vendor partnerships. To complement the report, we’ve also published an RFP template and a role profile, with a maturity assessment upcoming. Forrester clients, if you have questions about consumer insights teams and/or what’s going on with consumer behaviors, feel free to schedule a guidance session with me. source


The European Sovereign Cloud Day Forecasts Stormy Weather For The Cloud Ecosystem

Forum Europe’s third annual European Sovereign Cloud Policy and Industry Day took place in Brussels on June 3, 2025, gathering representatives from cloud vendors, service providers, and industry experts, as well as members of the European Parliament. Keynote speeches and panel discussions cast some light on the new sovereignty challenges that cloud providers and buyers in Europe are facing, which we can summarize around three major themes: 1) an unavoidable need to focus on AI sovereignty; 2) an urgent call for standardization; and 3) a shift in perspective from data protection to resilience. Here are our key takeaways from these discussions:

- Sovereign AI calls for a new supporting ecosystem in Europe. While AI usage is becoming ubiquitous across industries all over Europe, recent geopolitical tensions are raising some urgent questions. Sovereign AI means that organizations can adopt large language models (LLMs) that nobody can take away from them, that they can feed these LLMs with data that should not leave the country when this is a requirement, and that the data can be hosted on cloud infrastructure that does not depend on foreign governments’ decisions. European operators will have to offer a viable sovereign alternative to the offerings of dominant US and Chinese vendors if they want to provide an appropriate ecosystem for sovereign AI in Europe.
- Cloud vendors and users need standards to sustain sovereign deployments. Fragmentation of the cloud is one of the key challenges that vendors are facing in Europe today. As Dave Michels, researcher at the Queen Mary University of London, pointed out, “The argument that we cannot beat the hyperscalers is not helping. We need interoperability and standards.” On top of that, Francesco Bonfiglio, CEO of Dynamo, affirmed that “Data is the fuel, and we need to build the pipe: This is the concept behind EuroStack.” On this point, the discussions also highlighted how far the European cloud ecosystem is from offering a true sovereign alternative to the global vendors.
- The sovereignty discussion moves from data protection to business resilience. In times of mounting geopolitical pressures, data protection is a comparatively smaller problem for businesses. European organizations are now concerned about their overreliance on, and overdependence on, US hyperscalers. In this context, a stronger focus on digital and cloud sovereignty allows organizations to be more resilient and avoid dependence on foreign governments’ influence.

The European Sovereign Cloud Day has clearly outlined how the digital sovereignty discussion has moved past the data protection layer and now drives European organizations’ fundamental choices regarding their cloud infrastructure and vendor ecosystem. Reach out to Forrester to schedule an inquiry or guidance session to help guide your sovereign cloud and sovereign AI initiatives or to dig into the broader concept of digital sovereignty as it relates to European dependency on big tech. source


MIT’s 2025 CIO Symposium: For AI Success, Follow Path Two

At the MIT Sloan CIO Symposium in Cambridge, Massachusetts, last week, MIT Nobel Prize-winning economist Daron Acemoglu painted it in black and white. He said that technology can take two paths:

Path one is using technology to automate tasks that people once did. We’ve been doing that for centuries. Silicon Valley loves it. Business managers love it. Take out cost by replacing people with machines — it’s inevitable. But no company ever saved its way into the history books. Automation: necessary but not interesting.

Path two is using technology to make people more successful. That’s what Henry Ford did. It’s what Apple did. It’s what Schwab did for investors. It’s what Google did, what Facebook did, and what Amazon did. Those companies are in the history books. Empowerment: setting the bar for innovation expectations.

AI has stared down these paths since its inception in the 1950s. Path one uses AI to automate tasks. The Turing test, created in 1950 by Alan Turing, is simple: Can the machine fool you into thinking that it’s a person? Artificial “intelligence”? Forsooth. Intelligence is a human trait. Daron called that path “automation technologies.” Artificial general intelligence is a natural goal for this camp. Who needs people when I have a genius in the cloud? (Sound familiar?)

Alternatively, some scientists in that era argued that AI should augment human expertise. Daron called it “human-complementary technology”: Use technology to make people more successful, and give them the expertise to move forward with confidence. He referenced a famous paper by J. C. R. Licklider called “Man-Computer Symbiosis.” Replace “man” with “human,” and I think it reads as well today as it did in 1960.

You can tell which one of these paths Daron prefers — me, too. Here’s how I’ve been thinking about the notion of harnessing AI for good, and for good business:

- Empower customers with the expertise they need on demand.
- Give people an expert assistant to do things for them.
- Help people feel more confident in making decisions and adept in taking action.
- Make the machine a symbiotic resource for people, not something that strips them of agency.

Follow path two if you want to be in the history books. (You’ll get to the same place as path one, maybe even faster, because you’ll have people on your side.) It is this approach that will change the world.

Much of the conversation at the event was about risk and security, but also about replacing people with machines and focusing on scaling adoption, which effectively means replacing people faster. Some speakers, such as CIOs Dimitris Bountolos (Ferrovial), Monica Caldas (Liberty Mutual Insurance), and Bill Pappas (MetLife), talked more about making people successful with AI than they did about replacing people with AI. We applaud the difference. source


This Year’s Upfronts Advanced TV’s Transformation

This year’s TV upfronts — the star-studded ritual full of sponsored content and fraught negotiations — were conditioned by macroeconomic uncertainty and TV’s digital transformation.

Volatility Leads To Flexibility And Consolidation

Publishers touted more flexible deals to address advertisers’ uncertainty about consumers’ ability to weather tariff-induced thunderstorms. Ryan Gould, president of ad sales and client partnerships at Warner Bros. Discovery (WBD), assured advertisers that their deals would be “fully flexible” to account for macroeconomic volatility, while Rita Ferro, president of Disney Advertising, affirmed her company’s commitment to “being that strategic partner you can count on for flexibility” in the face of fickle consumers and geopolitical chaos. Publishers also paraded their depth of offerings to help marketers derive more value from existing partnerships rather than stretch already-tight budgets. Consequently, NBCUniversal, Amazon, and Netflix cast themselves at the upfronts as one-stop shops for distinctive customer understanding, culturally resonant content, engaging ad formats, and robust adtech. NBCU, for example, showcased its wealth of live sports, including the Super Bowl and Winter Olympics, and new-fashioned viewing formats, including bingo cards that gamify NBA viewers’ experience. Amazon emphasized its original programming and more shoppable ad inventory. Netflix had the commissioner of the NFL, Roger Goodell, announce this season’s Christmas Day games and introduced the Netflix Ads Suite, bringing Netflix to parity with more mature ad platforms. Each positioned itself as a safe harbor for brands and agencies seeking low-risk return on ad spend.

TV’s Transformation Demands Cross-Screen Planning, Buying, And Measurement

TV’s digital transformation was on vivid display at the upfronts as publishers, advertisers, and measurement providers adapt to the continued convergence of linear and streaming TV. Viewers’ migration from linear to streaming TV has stabilized, affording both types of TV room to (re-)establish their value. Linear TV remains home to live, culturally salient tentpoles such as sports and award shows. Streaming TV is an increasingly effective way to replace lost reach on linear TV, engage viewers consuming TV on demand, and measure TV advertising’s impact in near real time. To help advertisers capitalize on both types of TV, WBD introduced NEO, offering buyers direct, consolidated access to linear and streaming TV inventory. Disney launched Compass to mitigate data’s fragmentation across types of TV. Nielsen adapted to this new normal by shifting from panel-only to panel-plus-big-data measurement. This helped the company regain its Media Rating Council accreditation and market credibility. Thanks to Nielsen’s vindication, alternative currencies from iSpot, VideoAmp, and Comscore were far from center stage.

Despite The Upfronts, Consumers Tune Out TV Advertising

Without consumers’ receptivity to TV ads, the upfronts’ pageantry is pointless. According to How Consumers Really Feel About Streaming TV Ads, our latest data overview on TV advertising, consumers spanning generations increasingly upgrade or cancel streaming TV subscriptions to avoid ads.
When asked about streaming TV ads, members of Forrester’s Market Research Online Community complained, “I will often get the same ad back to back,” “[Ads] feature too many overpaid celebrities,” and “[Brands] lose their authenticity when [they] use generative AI in TV commercials.” Nevertheless, upfront volume grew, driven by more streaming TV and sports. As advertisers buy more TV, they should prioritize cost-effective reach, embrace audience-based planning and buying, test TV creative, and measure TV’s halo effects on search. TV’s great power, which persists despite the medium’s digital transformation, is its massiveness and persuasiveness. Advertisers mustn’t lose hold of that as TV becomes more addressable and fragments across devices. Check out our data overview to learn how advertisers and publishers can reengage TV viewers, and always feel free to reach out. source


The Public Sector Must Navigate The “Everything Everywhere All At Once” Efficiency Conversation With Portfolio Rationalization

We’re in an era defined by constant disruption and rapid shifts — a true “everything everywhere all at once” scenario. Organizations must anticipate the need to streamline operations and reduce footprints and be ready to pivot when asked. For tech execs, line-of-business leaders, and architects in the public sector, this dynamic landscape presents immense challenges but also unparalleled opportunities to shift to improvements and best practices that everyone can get behind. Our message to you: Don’t panic — prepare! The ability to pivot, innovate, and thrive hinges on a critical, often overlooked, capability: being thoroughly prepared with robust portfolio consolidation and rationalization processes. Government agencies can benefit from the forced discipline that these best practices create.

In the upcoming webinar for Forrester clients, Portfolio Rationalization For Public Sector Efficiency, I plan to discuss these amplified challenges. As we are reminded of how the public sector has been specifically impacted, I will talk about portfolio consolidation — the act of merging or unifying similar systems, tools, and processes — as well as rationalization — the strategic evaluation of assets to identify, categorize, and prioritize them against business value. In many cases, this is the untapped superpower that empowers you to improve operational efficiency and performance, gain visibility and control of overspending, and drive strategic alignment. The questions you are being asked — Does this serve a clear business purpose? Is it providing optimal value? Is it aligned with our strategic goals? — require an evaluation of your entire landscape to position your organization for the next disruption.

During this webinar, I’ll commit to offering moments of optimism as we address the following key takeaways:

- Learn how to assess and rank projects based on strategic importance and resource availability to ensure optimal portfolio composition.
- Discover techniques to streamline operations by removing overlapping or unnecessary initiatives, thereby improving efficiency.
- Gain insights into effective resource management strategies that enhance operational effectiveness and drive better outcomes for government agencies.

Forrester clients, join me June 26 at 11 a.m. ET to be informed and leave encouraged. source


Meet The New Analyst Covering Zero Trust And Microsegmentation

The 25-plus years of my career so far can be divided into two acts. Act I was enterprise IT, beginning with desktop support and progressing to network and security architecture at organizations ranging from small business to the Global 10. Act II opened with a move into technical alliance and ecosystem roles at security vendors and closed with roles in product and technical marketing. The throughline of both acts has been clarifying problems, thinking about the combination of technologies that provide solutions to those problems, and articulating the rationale behind and value of those decisions. I expect that throughline to continue in Act III, now that I have joined Forrester as an analyst on the security and risk (S&R) team, focusing on Zero Trust and microsegmentation.

What Brought Me To Forrester

The cybersecurity field is more important than it has ever been because so much of what happens in the real world depends on or is influenced by what happens in the digital one. Helping to develop and implement strategy generally — realistic and practical security strategies, in particular — has always been important to me. One of the many enduring lessons from my time at a large automotive manufacturer is that the right process produces the right result. Forrester’s focus on rigorous, actionable research offers a great opportunity to stitch both these things together in my day-to-day work in a way that will hopefully have a positive impact for Forrester clients, as well as their customers and partners.

Finding this role was both fortuitous and circuitous. My Forrester journey actually started five years ago when I applied for a different role on the S&R team and made it through a big chunk of the recruiting process but ultimately decided to zig instead of zag and took a role with a security startup. I stayed in touch with some of the amazing people I met during the first go-around, however, and was fortunate that the stars aligned when this role was announced.

How I Think About Zero Trust

I started thinking about the principles of Zero Trust around 2016, well after Forrester coined the term but before it truly became part of the zeitgeist. At the time, I was focused heavily on devices, apps, and flows as authentication and authorization subjects — especially in mixed-ownership settings. As my thinking evolved, I considered Zero Trust to be primarily a systems integration problem. Even though definitions have been revised, the applicable scope has grown, and standards have emerged, I largely still think of it that way.

While it’s easy to be cynical about Zero Trust because of its overuse in marketing — rather than as a philosophy or an “architectural school” — I believe both that it represents one of the most potentially beneficial approaches to protecting digital infrastructure and that it is actually within reach for most organizations. With that said, implementing, extending, and refining Zero Trust remains challenging or controversial for many organizations. Even so, I’d venture to guess that every S&R pro — even those with the most Zero Trust skepticism — knows in their bones that the consistent application of the core principles of default-deny, least-privilege access, and comprehensive monitoring would markedly improve their organizations’ security posture and resilience.
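To make those principles slightly more concrete, here is a minimal, purely illustrative Python sketch of default-deny, least-privilege policy evaluation: nothing is permitted unless an explicit rule allows it, and every decision is logged for monitoring. The workload names and rules are hypothetical, and this is a teaching toy, not a description of how any Zero Trust or microsegmentation product actually works.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("policy")

@dataclass(frozen=True)
class Rule:
    source: str       # workload identity or network segment (hypothetical names)
    destination: str
    port: int

# Least privilege: only the flows a workload actually needs are enumerated.
ALLOW_RULES = {
    Rule("web-frontend", "orders-api", 443),
    Rule("orders-api", "orders-db", 5432),
}

def is_allowed(source: str, destination: str, port: int) -> bool:
    """Default deny: any flow without an explicit matching rule is refused."""
    allowed = Rule(source, destination, port) in ALLOW_RULES
    # Comprehensive monitoring: every decision, permit or deny, is recorded.
    log.info("%s -> %s:%d %s", source, destination, port, "ALLOW" if allowed else "DENY")
    return allowed

if __name__ == "__main__":
    is_allowed("web-frontend", "orders-api", 443)   # permitted by an explicit rule
    is_allowed("web-frontend", "orders-db", 5432)   # no rule, so the default deny applies
```

Real platforms express this with far richer identity, context, and enforcement points, but the default-deny posture is the same.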
The principles themselves are simple, but as the author Scott Berkun says, “Simple does not mean easy.” The example he uses to illustrate the point is that running a marathon is simple: You just run 26.2 miles — but even the most well-trained athletes wouldn’t describe the preparation or the event itself as “easy.” It’s the same with Zero Trust. Just like running a marathon, the right combination of planning and focus makes it possible.

What’s Next

I’m excited to leverage and expand the existing body of Forrester research to help our clients. Whether they’re taking the first steps on their journeys, restarting stalled initiatives, or improving their overall maturity, I’m looking forward to helping clients tackle the marathon that is Zero Trust. Forrester clients, please feel free to schedule a guidance or inquiry session to further explore my research topics and coverage areas. source


AI Cost As A First-Class Metric — Our Conversation With David Tepper, CEO And Cofounder Of Pay-i

As generative AI (genAI) moves from experimentation to enterprise-scale deployment, the conversation in most enterprises is shifting from “Can we use AI?” to “Are we using it wisely?” For AI leaders, managing cost is no longer a technical afterthought — it’s a strategic imperative. The economics of AI are uniquely volatile, shaped by dynamic usage patterns, evolving model architectures, and opaque pricing structures. Without a clear cost management strategy, organizations risk undermining the very ROI they seek to achieve. Some AI enthusiasts may forge ahead with AI but favor speed and innovation over cost accounting. They might argue that AI cost and even ROI remain hard to pin down. But the reality is that to unlock sustainable value from genAI investments, leaders must treat cost as a first-class metric — on par with performance, accuracy, and innovation. So I took the case to David Tepper, CEO and cofounder of Pay-i, a provider in the AI and FinOps space, to get his take on AI cost management and what enterprise AI leaders need to know.

Michele Goetz: AI cost is a hot topic as enterprises deploy and scale new AI applications. Can you help them understand the way AI cost is calculated?

David Tepper: I see you’re starting things off with a loaded question! The short answer: It’s complex. Counting input and output tokens works fine when AI utilization consists of making single request/response calls to a single model with fixed pricing. However, it quickly grows in complexity when you’re using multiple models, vendors, agents, models distributed in different geographies, different modalities, and prepurchased capacity, and when you account for enterprise discounts.

- GenAI use: GenAI applications often use a variety of tools, services, and supporting frameworks. They leverage multiple models from multiple providers, all with prices that are changing frequently. As soon as you start using genAI distributed globally, costs change independently by region and locale. Modalities other than text are usually priced completely separately. And the SDKs of major model providers typically don’t return enough information to calculate those prices correctly without engineering effort.
- Prepurchased capacity: Capacity prepurchased from a cloud hyperscaler (in Azure, a “Provisioned Throughput Unit” or, in AWS, a “Model Unit of Provisioned Throughput”) or from a model provider (in OpenAI, “Reserved Capacity” or “Scale Units”) introduces fixed costs for a certain number of tokens per minute and/or requests per minute. This can be the most cost-effective means of using genAI at scale. However, multiple applications may be leveraging the prepurchased capacity simultaneously for a single objective, all sending varied requests. Calculating the cost for one request requires enterprises to separate traffic to correctly calculate the amortized costs.
- Prepurchased compute: You are typically purchasing compute capacity independent of the models you’re using. In other words, you’re paying for X amount of compute time per minute, and you can host different models on top of it. Each of those models will use different amounts of that compute, even if the token counts are identical.
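To make that bookkeeping concrete, here is a minimal Python sketch of the kind of calculation Tepper describes: per-request cost from token counts against a per-model price table, plus a fixed prepurchased-capacity fee amortized across the requests that shared it. The providers, models, prices, and amortization rule are illustrative assumptions, not Pay-i's implementation or any vendor's actual rates.

```python
from dataclasses import dataclass

# Illustrative per-1K-token prices (USD). Real rates vary by vendor, model,
# region, modality, and negotiated discount; none of these numbers are real.
PRICE_TABLE = {
    ("provider_a", "model-x", "us"): {"input": 0.0025, "output": 0.0100},
    ("provider_b", "model-y", "eu"): {"input": 0.0040, "output": 0.0120},
}

@dataclass
class Request:
    provider: str
    model: str
    region: str
    input_tokens: int
    output_tokens: int

def on_demand_cost(req: Request) -> float:
    """Cost of a single request at pay-as-you-go, per-token rates."""
    price = PRICE_TABLE[(req.provider, req.model, req.region)]
    return (req.input_tokens / 1000) * price["input"] + (req.output_tokens / 1000) * price["output"]

def amortized_capacity_costs(requests: list[Request], capacity_fee: float) -> list[float]:
    """Spread a fixed prepurchased-capacity fee across the requests that shared it.

    Amortizing by each request's share of total tokens is one defensible rule;
    splitting by request count, team, or objective are others.
    """
    total_tokens = sum(r.input_tokens + r.output_tokens for r in requests) or 1
    return [capacity_fee * (r.input_tokens + r.output_tokens) / total_tokens for r in requests]

if __name__ == "__main__":
    traffic = [
        Request("provider_a", "model-x", "us", input_tokens=1_200, output_tokens=400),
        Request("provider_b", "model-y", "eu", input_tokens=6_000, output_tokens=900),
    ]
    print("On-demand cost per request:", [round(on_demand_cost(r), 4) for r in traffic])
    print("Amortized share of a $6.50 capacity hour:",
          [round(c, 4) for c in amortized_capacity_costs(traffic, capacity_fee=6.50)])
```

The arithmetic itself is trivial; the difficulty Tepper points to is that every element of this table (providers, regions, modalities, discounts, and who shares which capacity) changes independently and often.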
Michele Goetz: Pricing and packaging of AI models is transparent on foundation model vendor websites. Many even come with calculators. And AI platforms are even coming with cost, model cost comparison, and forecasting to show the AI spend by model. Is this enough for enterprises to plan out their AI spend?

David Tepper: Let’s imagine the following. You are part of an enterprise, and you went to one of these static pricing calculators on a model host’s website. Every API request in your organization was using exactly one model from exactly one provider, only using text, and only in a single locale. Ahead of time, you went to every engineer who would use genAI in the company and calculated every request using the mean number of input and output tokens, and the standard deviation from that mean. You’d probably get a pretty accurate cost estimation and forecast. But we don’t live in that world. Someone wants to use a new model from a different provider. Later, an engineer in some department makes a tweak to the prompts to improve the quality of the responses. A different engineer in a different department wants to call the model several more times as part of a larger workflow. Another adds error handling and retry logic. The model provider updates the model snapshot, and now the typical number of consumed tokens changes. And so on … GenAI and large language model (LLM) spend differs from its cloud predecessors not only because of variability at runtime but, more impactfully, because the models are extremely sensitive to change. Change a small part of an English-language sentence, and that update to the prompt can drastically change the unit economics of an entire product or feature offering.

Michele Goetz: New models coming into market, such as DeepSeek R1, promise cost reduction by using fewer resources and even running on CPU rather than GPU. Does that mean enterprises will see AI cost decrease?

David Tepper: There are a few things to tease out here. Pay-i has been tracking prices based on the parameter size of the models (not intelligence benchmarks) since 2022. The overall compute cost for inferencing LLMs of a fixed parameter size has been declining at roughly 6.67% compounded monthly. However, organizational spend on these models is rising at a far higher rate. Adoption is picking up, and solutions are being deployed at scale. And the appetite for what these models can do, and the desire to leverage them for increasingly ambitious tasks, is also a key factor. When ChatGPT was first released, GPT-3.5 had a maximum context of 4,096 tokens. The latest models are pushing context windows between 1 and 10 million tokens. So even if the price per token has gone down two orders of magnitude, many of today’s most compelling use cases are pushing larger and larger context, and thus the cost per request can even end up higher than it was a few years ago.
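That context-window effect is easy to see with rough numbers. The prices and token counts below are illustrative assumptions chosen to mirror the shape of Tepper's point (a per-token price roughly 100x cheaper alongside a jump from a 4,096-token request to a 1M-token request); they are not actual vendor rates.

```python
# Illustrative only: prices and token counts are assumptions, not vendor rates.
old_price_per_1k_tokens = 0.02     # early-ChatGPT-era style pricing, USD per 1K tokens
new_price_per_1k_tokens = 0.0002   # two orders of magnitude cheaper per token

old_request_tokens = 4_000         # request bounded by a 4K-token context window
new_request_tokens = 1_000_000     # long-context request (document sets, agent history)

old_cost = old_request_tokens / 1000 * old_price_per_1k_tokens   # $0.08
new_cost = new_request_tokens / 1000 * new_price_per_1k_tokens   # $0.20

print(f"Old-style request: ${old_cost:.2f}   Long-context request: ${new_cost:.2f}")
# The token is 100x cheaper, but the request consumes 250x more tokens,
# so the per-request cost ends up roughly 2.5x higher.
```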
Michele Goetz: How should companies think about measuring the value they receive for their genAI investments? How do