E-Discovery Quarterly: The Perils Of Digital Data Protocols

By Tom Paskowitz, Colleen Kenney and Matt Jackson (April 15, 2025, 10:43 AM EDT) — This article is part of a quarterly column analyzing the most notable e-discovery developments from the previous three months. This installment takes a closer look at recent disputes over stipulated protocols governing how parties in litigation preserve, collect and produce electronically stored information… source

E-Discovery Quarterly: The Perils Of Digital Data Protocols Read More »

Key Takeaways From The Forrester Wave™: Data Management For Analytics Platforms, Q2 2025

I’m excited to announce the release of The Forrester Wave™: Data Management For Analytics Platforms, Q2 2025. This edition evaluated the 11 most significant data management and analytics (DMA) platform vendors, providing a comprehensive view of a market undergoing rapid transformation. Over the past decade, we’ve consistently published the DMA Wave, tracking the evolution of the space and offering guidance to enterprises navigating their data strategy, but this year’s evaluation reflects a notable shift. Historically, most DMA solutions were optimized for structured data and near-real-time processing and were often tied to a single cloud with limited data sources. Today, the demands on DMA platforms are much broader and more complex.

What’s Going On With The DMA Platforms Market?

With the rise of multicloud and hybrid-cloud data strategies, diverse data types, and increasing expectations for improved scale and automation, we tailored our criteria to reflect these emerging requirements. This year’s Wave captures how vendors are adapting with new and advanced capabilities fueled by advanced automation, integrated data intelligence, and AI-driven data management. Generative AI is emerging as a transformative force, enhancing both automation and intelligence within DMA. As a result, selecting the right DMA platform provider to support immediate and long-term data strategies has become increasingly complex. There are two important takeaways from the research:

Generative AI is automating DMA functions. The modern DMA platform automates complex tasks such as data ingestion, cleansing, transformation, integration, governance, and security. Natural language allows users to interact with data, generate insights, and manage platforms without deep technical expertise, significantly reducing the need for specialized engineers and streamlining operations. GenAI also enhances DMA with advanced features such as anomaly detection and support for vector-based search. Leading vendors leverage agentic AI and natural language capabilities to deliver more intelligent and integrated data management.

Built-in data intelligence is elevating DMA to a new level. Built-in data intelligence streamlines semantic data tasks, dramatically improving efficiency and unlocking deeper insights. These capabilities can automatically detect patterns, relationships, and trends within datasets that normally take significant time and effort to uncover. Leading vendors deliver comprehensive, automated intelligence that enables rich data contextualization, accelerating a wide range of use cases. This empowers organizations to act proactively, whether predicting customer behavior, optimizing operations, or mitigating risks such as fraud.

New Wave Criteria Reflect Evolving DMA Requirements

With generative AI and data intelligence becoming foundational to modern DMA platforms, vendors increasingly embed these capabilities, although offerings range from basic to highly advanced, integrated solutions. The key differentiator is not simply the presence of genAI but how deeply and effectively it is integrated. To capture this evolution, we evaluated genAI both as a standalone criterion and as an embedded capability across core functions such as data ingestion, transformation, governance, security, and integration. We also emphasized natural language for data access and end-to-end platform management through conversational interaction. This holistic approach ensures that our evaluation reflects the rising demand for intelligent, intuitive, and highly automated data management solutions. If your organization still relies on traditional data management tools for analytical workloads, now is the time to shift to a modern DMA platform. Modern platforms powered by genAI, advanced automation, and intelligence enable the real-time delivery of consistent and trusted data.
This transformation accelerates high-impact use cases, fuels innovation and growth, and rapidly democratizes data access across teams. Upgrading your DMA platform isn’t just a technology refresh; it’s a strategic move toward becoming a data-driven, AI-enabled organization. Don’t wait: embrace genAI-powered DMA platforms to stay ahead of the curve. For more insights, book time with me via an inquiry or guidance session. source
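The automated anomaly detection mentioned above can be illustrated with a toy, platform-agnostic sketch: a robust median/MAD check of the sort a DMA platform might run over daily ingestion counts. The data and threshold here are invented for illustration; real platforms embed far more sophisticated, model-driven detectors.

```python
# Toy sketch of automated anomaly detection over ingestion metrics.
# Uses the robust median/MAD (median absolute deviation) rule; the 0.6745
# factor rescales MAD to be comparable to a standard deviation.
from statistics import median

def flag_anomalies(values, threshold=3.5):
    """Return the values whose modified z-score exceeds the threshold."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# Daily row counts for an ingestion pipeline; the last day looks broken.
daily_rows = [1000, 1020, 980, 1010, 995, 1005, 12000]
print(flag_anomalies(daily_rows))  # [12000]
```

The median-based rule is deliberate: with one wild outlier in a small sample, a mean/standard-deviation z-score gets dragged toward the outlier and can fail to flag it.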

Key Takeaways From The Forrester Wave™: Data Management For Analytics Platforms, Q2 2025 Read More »

How AI is Revolutionizing Data Center Power and Cooling

Vlad Galabov, Omdia’s research director for digital infrastructure, spoke during Data Center World 2025’s analyst day. Image: Courtesy of Data Center World

AI will drive more than 50% of global data center capacity and more than 70% of revenue opportunity, according to Omdia’s research director for digital infrastructure, Vlad Galabov, who said massive productivity gains across industries driven by AI will fuel this growth. Speaking during Data Center World 2025’s analyst day, Galabov made a number of other predictions about the industry:

- NVIDIA and hyperscalers’ 1 MW-per-rack ambitions probably won’t materialize for another couple of years, until engineering innovation catches up to power and cooling demands.
- By 2030, over 35 GW of data center power is expected to be self-generated, making off-grid and behind-the-meter solutions no longer optional for those looking to build new data centers, as many utilities struggle to deliver the necessary power.
- Data center annual capital expenditure (CAPEX) investments are expected to reach $1 trillion globally by 2030, up from less than $500 billion at the end of 2024.
- The strongest area for CAPEX is physical infrastructure, such as power and cooling, where spending is increasing at a rate of 18% per year.

“As compute densities and rack densities climb, the investment in physical infrastructure accelerates,” Galabov said. “We expect a consolidation of server count where a small number of scaled-up systems are preferred to a scaled-out server strategy. The cost per byte/compute cycle is also decreasing.”

Data center power capacity explodes

Galabov highlighted the explosion AI has caused in data center power needs. When the AI wave began in late 2023, the installed capacity of power in data centers worldwide was less than 150 GW. But with 120 kW rack designs on the immediate horizon, and 600 kW racks only about two years away, he forecasts nearly 400 GW of cumulative data center capacity by 2030.
With new data center capacity additions approaching 50 GW per year by the end of the decade, it won’t be long before half a terawatt becomes the norm. But not everyone will survive the wild west of the AI and data center market. Many startup data center campus developments and neoclouds will fail to build a long-term business model, as some lack the expertise and business savvy to survive. Don’t focus on a single provider, Galabov cautioned, as some are likely to fail.

AI drives liquid cooling innovation

Omdia’s principal analyst Shen Wang laid out the cooling repercussions of the AI wave. Air cooling hit its limit around 2022, he said. The consensus is that it can deliver up to 80 W/cm², with a few suppliers claiming they can take air cooling higher. Beyond that range, single-phase direct-to-chip (DtC) cooling — in which water or another fluid is piped to cold plates that sit directly on top of computer chips to remove heat — is needed. Single-phase DtC can go as high as 140 W/cm². “Single-phase DtC is the best way to cool chips right now,” Wang said. “By 2026, the threshold for single-phase DtC will be exceeded by the latest racks.” That’s when two-phase liquid cooling should begin to see a ramp-up in adoption. Two-phase cooling runs fluids at higher temperatures to the chip, causing them to turn to vapor as part of the cooling process, thereby increasing cooling efficiency. “Advanced chips in the 600 watt and above range are seeing the heaviest adoption of liquid cooling,” Wang said. “By 2028, 80% of chips in that category will utilize liquid cooling, up from 50% today.”
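The forecast figures above support some useful back-of-the-envelope arithmetic. The numbers come from the article; the compounding below is our own sketch.

```python
import math

capex_2024 = 500         # global data center CAPEX, ~$500B at end of 2024
infra_growth = 0.18      # physical infrastructure spend growth, per year

# At 18% per year, an infrastructure spend line doubles in about 4.2 years.
doubling_years = math.log(2) / math.log(1 + infra_growth)
print(f"doubling time at 18%/yr: {doubling_years:.1f} years")

# If *total* CAPEX grew at that same 18%/yr from 2024, it would overshoot
# the $1 trillion forecast; hitting $1T by 2030 implies blended growth
# closer to 12%/yr across all spending categories.
print(f"2030 CAPEX at 18%/yr: ${capex_2024 * (1 + infra_growth) ** 6:,.0f}B")
print(f"2030 CAPEX at 12%/yr: ${capex_2024 * 1.12 ** 6:,.0f}B")
```

In other words, the 18% rate applies to the fastest-growing slice (power and cooling), while the overall $1 trillion forecast is consistent with slower growth elsewhere in the stack.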

How AI is Revolutionizing Data Center Power and Cooling Read More »

Unpacking FTC's New Stance On Standard-Essential Patents

By Gail Levine and Carmen Longoria-Green (April 16, 2025, 5:48 PM EDT) — On Inauguration Day, Andrew Ferguson succeeded Lina Khan in the role of chair of the Federal Trade Commission.[1] Ferguson’s vision of the FTC may well be different from his predecessor’s in a host of ways, including in his understanding of the scope of the FTC’s power to challenge anticompetitive conduct under Section 5 of the FTC Act, which authorizes the FTC to challenge unfair methods of competition.[2]… source

Unpacking FTC's New Stance On Standard-Essential Patents Read More »

How Well Are You Protecting Existing Customer Revenue?

You already know why it’s important to build and maintain good relationships with customers. Forrester data shows that existing customers, through renewal and expansion, account for 61% of B2B revenue — higher for established companies and lower for companies still in new-account growth mode. That’s a big enough slice of the pie to warrant mindshare and resources to ensure that customers attain value from your offering and, as a result, stay, grow, and advocate for your company. You already know why. We’ve introduced expanded and upgraded versions of our customer engagement aligned approach and the foundational Forrester Customer Engagement Range Of Responsibilities Model to show you how.

Secure The Spotlight On Customer Value

The aligned approach (Forrester client access required) shows how three key functions — customer marketing, customer success, and customer advocacy — should partner in distinct yet complementary roles to optimize the postsale relationship between B2B companies and their customers. It puts customer value at the center, with each function contributing through its own lens. As with stage lighting, a single light does a specific job, but the effect grows stronger with multiple lights and coordinated angles. Companies that invest in all three areas and enable collaboration regardless of reporting lines see distinct advantages in retention, growth, and advocacy. Customer engagement functions: an aligned approach that intensifies value. The Forrester Customer Engagement Range Of Responsibilities Model (client access required) guides alignment of the customer-facing functions responsible for maximizing value for customers and the company. Our extensive update reflects an evolution in competencies and responsibilities and elevates customer advocacy to sit beside customer marketing and customer success as a distinct postsale function.
The model is a tool to ensure that these teams complement rather than conflict with each other and work seamlessly with account management, services, education, support, and customer experience.

Fix Your Gaze On Five Engagement Outcomes

The Customer Engagement Range Of Responsibilities Model is designed to align all functions in the postsale ecosystem around five key outcomes that reflect a focus on customer value:

- Value network engagement shapes the customer experience. A value network is a group of people and organizations that a customer works with to pursue the goal that drove their initial purchase. These networks play a significant role in customers’ expectations and their use of the offering, which drives the decision to renew or repurchase and buy more.
- Product adoption ensures that users maximize the offering. Product adoption is the process by which customers begin to use an offering and integrate it into their daily workflow. It’s the path toward customer value attainment: Companies don’t see value in a solution that they aren’t using.
- Customer outcomes validate business benefits. Customer outcomes are the long-term and ongoing business benefits that customers want from deploying a product or service, including increased revenue, cost savings, and efficiency. Customers who clearly understand the value they have attained are predisposed to stay, grow, and advocate.
- Advocacy and references enhance reputation, demand, and growth. Customer advocacy takes a cohesive approach to finding and activating customer storytellers and references, elevating beyond one-off requests. Successful companies create a beneficial experience for advocates who share stories that in turn enhance reputation, encourage renewal or repurchase, and support growth.
- Account expansion increases revenue. Account expansion includes cross-sell and upsell strategies and programs that increase revenue from existing customers. Cross-selling involves selling additional products or services to a buying center not previously engaged, while upselling involves engaging an existing buying center with additional offerings.

Reach out to your account manager for access to the new model and supporting guidance, or contact us to learn more about how we approach postsale customer engagement. source

How Well Are You Protecting Existing Customer Revenue? Read More »

Stanford’s 2025 AI Index Reveals an Industry at a Crossroads

Vanessa Parley, director of research at Stanford’s Institute for Human-Centered AI, speaks in a video about the 2025 AI Index. Image: Stanford HAI

The AI industry is undergoing a complex and transitional period, according to Stanford University’s 2025 AI Index, published by the Institute for Human-Centered AI. While AI continues to transform the tech sector, public sentiment remains mixed, underscoring the rapidly shifting nature of the field. Below are key takeaways from Stanford’s latest findings on the current state of artificial intelligence, both generative and non-generative.

Investment in AI increases

Investment in AI is growing. Private investors poured $109.1 billion into AI in the US. Globally, private investors contributed $33.9 billion to generative AI specifically. The share of businesses reporting that they use AI has grown from 55% in 2023 to 78% in 2024. Most notable AI models in 2024 were produced in the US; China and Europe follow. While China produced 15 notable models to the 40 produced in the US, China’s models nearly match America’s in quality. Plus, China produces more AI-related patents and publications. The Middle East, Latin America, and Southeast Asia have also produced notable AI launches.

The most advanced AI models are ‘reasoning’ models

Frontier models today typically use “complex reasoning,” an increasingly competitive part of the field. Stanford pointed out that reasoning is still a challenge: Frontier AI still struggles with complex reasoning benchmarks and logic tasks. Although companies often refer to human-level intelligence, pattern-recognition tasks that are simple for humans still elude the most advanced AI. Anthropic’s Claude 3.7 Sonnet and DeepSeek-R1, for example, don’t always accurately reveal how they arrived at an answer in their explanations of their reasoning.

AI benchmark scores improve

Stanford said benchmark scores are steadily improving, with tests like MMMU now considered standard and AI systems scoring high.
Video generation has improved, with AI videos now able to be longer, more realistic, and more consistent moment to moment.

FDA approvals for AI medical devices increase

In 2023, a growing number of medical devices that include AI were approved by the FDA: 223, compared to 15 in 2015 (these devices don’t necessarily include generative AI). Automated cars like Waymo’s growing fleet show AI is becoming more and more integrated with daily life.

Responsible AI risks need to be addressed more

Generally accepted definitions of how to use AI responsibly have been slow to emerge, Stanford pointed out. “Among companies, a gap persists between recognizing RAI [responsible AI] risks and taking meaningful action,” the researchers wrote. However, global organizations have released frameworks to address this.

Consumers worry about AI’s drawbacks compared to benefits

Consumer sentiment does not always match business sentiment. Significant proportions of respondents to the study in Canada (40%), the US (39%), and the Netherlands (36%) said that AI would prove more harmful than beneficial. Elsewhere, the public is more on board: see the share of people who believe AI has more benefits than drawbacks in China (83%), Indonesia (80%), and Thailand (77%). Confidence that AI companies will protect users’ data fell from 50% in 2023 to 47% in 2024 globally.

Barriers to AI decrease, though environmental impact is still a concern

As with any technology, people gradually learn how to produce it more quickly and with greater efficiency. Looking at Stanford’s data, costs to run the hardware declined by 30% annually, while energy efficiency improved by 40% per year. “Together, these trends are rapidly lowering the barriers to advanced AI,” the researchers wrote. Improved energy efficiency does not necessarily mean good energy use.
Power consumption has grown faster than efficiency improvements can offset, meaning carbon emissions from frontier models continue to rise. source
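The compounding Stanford describes can be made concrete with a short sketch. The 30% and 40% annual rates are from the report; the three-year horizon and the demand-growth caveat are our illustration.

```python
# Project normalized hardware cost and energy-per-compute under the
# report's annual rates: hardware costs fall 30%/yr, efficiency rises 40%/yr.
cost, energy = 1.0, 1.0
for year in range(3):
    cost *= 1 - 0.30     # 30% annual cost decline
    energy /= 1 + 0.40   # 40% annual efficiency gain

print(f"after 3 years: cost {cost:.2f}x, energy per unit compute {energy:.2f}x")
# Both fall by roughly two-thirds -- yet if total compute demand grows faster
# than ~40%/yr, absolute power draw (and emissions) still rises.
```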

Stanford’s 2025 AI Index Reveals an Industry at a Crossroads Read More »

Strava To Acquire UK-Based Running Training App Runna

By Elaine Briseño (April 17, 2025, 3:30 PM EDT) — Privately held exercise app Strava announced Thursday that it will acquire United Kingdom-based Runna, a coaching platform for runners, but no financial details were included with the announcement… source

Strava To Acquire UK-Based Running Training App Runna Read More »

Judge Rules Google Is An Illegal Monopoly

Meta’s not the only Big Tech company in the hot seat this week. US District Judge Leonie Brinkema found Google liable for illegally monopolizing two online advertising technology markets: publisher ad servers and ad exchanges. This comes less than a year after another federal judge ruled that the company had a monopoly in online search. Google disagrees with the court’s decision and plans to appeal the ruling, asserting that publishers choose Google over other options because its tech tools are “simple, affordable, and effective.” As we’ve said before, the impact of these cases won’t be fully realized until the remedies stage, which may take years to play out. Any order to break up Google will spend time in the court of appeals and potentially go to the Supreme Court. When we surveyed consumers about Google’s illegal monopolies, only 18% said they “believe that Google will have to break up.”

The Google Era Gives Way To A Google Overhaul

Judge Brinkema’s ruling, paired with Judge Amit Mehta’s finding that Google maintains an illegal search monopoly, raises the likelihood of a Google overhaul. The Department of Justice specifically requested divestment of Google Ad Manager, which includes its publisher ad exchange and ad server. At the least, Google will be compelled not to destroy evidence of its monopolization going forward. According to Judge Brinkema, “Google’s systemic disregard of the evidentiary rules regarding spoliation of evidence and its misuse of the attorney-client privilege may well be sanctionable.” In addition, Google’s publisher adtech could be restructured by separating its ad server from its ad exchange, breaking the closed loop between two products that have been tied together to competition’s detriment.

Publishers Can Expect (Eventual) Changes To The Sell-Side Adtech Ecosystem

This ruling heightens the already substantial counterparty risk between Google and publishers, which is exacerbated by generative AI.
Google’s AI Overviews, which facilitate zero-click searches, retain traffic that would, pre-ChatGPT, have landed on publishers’ sites. During guidance sessions, publishers tell us that they’re losing significant traffic to AI Overviews. Publishers missing that traffic must now also deal with uncertainty about the future of Google’s sell-side adtech. Advertisers, however, are relatively unaffected by this decision. The DOJ failed to prove that Google has a monopoly on the tech advertisers use to buy display ads. In ruling for Google on the buy side, where Google fortifies tech acquired from DoubleClick and Admeld, Judge Brinkema found that advertisers choose among various ad platforms based on perceived return on ad spend. Advertisers remain dissatisfied with the lack of transparency and control in Google’s buy-side adtech, but Google doesn’t monopolize that market. Forrester clients: Let’s chat more about this via a Forrester guidance session. source

Judge Rules Google Is An Illegal Monopoly Read More »

'No AI Agents are Allowed.' EU Bans Use of AI Assistants in Virtual Meetings

Image: Guillaume Périgois/Unsplash

The EU is banning the use of AI-powered virtual assistants during online meetings. Such assistants are often used to transcribe, take notes, or even record visuals and audio during a video conference. In a presentation from the European Commission delivered to European Digital Innovation Hubs earlier this month, a note on the “Online Meeting Etiquette” slide states “No AI Agents are allowed.” AI agents are tools that can perform complex, multi-step tasks autonomously, often by interacting with applications such as video conferencing software. For example, Salesforce uses AI agents to call sales leads. The Commission confirmed this presentation was the first time this rule had been imposed but declined to explain why when questioned by Politico. There is no specific EU legislation that covers AI agents, but the AI models that power them will need to abide by the strict and controversial rules of the AI Act.

AI agents raise security concerns

While AI notetakers and other agent types are not inherently a security threat, according to a 2025 report from global AI experts, security risks stem from the user being unaware of what their AI agents are doing, the agents’ ability to operate outside of the user’s control, and potential AI-to-AI interactions. These factors make AI agents less predictable than standard models. Tech companies do have to be cautious when promoting products that can accomplish an increasing amount without the user’s awareness. One of the biggest cautionary tales is that of Microsoft Recall, an AI tool that allowed users to search through their past on-screen activity using natural language. The convenience came at a cost: Recall captured screenshots of active windows every few seconds, saving them as a timeline, raising concerns about privacy and data usage and leading to significant launch delays.
Microsoft has since released a series of agents specifically designed to tackle cyber threats.

AI agents are growing in prevalence

These concerns haven’t stopped the AI players from handing over more control to their models. Anthropic added a Computer Use feature to its Claude 3.5 Sonnet model in October 2024, giving it the ability to navigate desktop apps, move cursors, click buttons, and type text. Its deep research function, announced this week, also responds to prompts “agentically,” as does Microsoft’s equivalent. Last month, OpenAI expanded its text-to-speech and speech-to-text tools to agentic models, indicating their growing relevance. In January 2025, OpenAI announced Operator, an agentic tool that runs in-browser to autonomously perform actions such as ordering groceries or booking tours. Anthropic and OpenAI are even converging on shared agent technology, with OpenAI adding support for Anthropic’s Model Context Protocol, an open-source standard for connecting AI apps, including agents, to data repositories. Anthropic has also joined forces with Databricks to help large corporate clients build their own agents. TechRepublic predicted at the end of 2024 that the use of AI agents would surge this year. OpenAI CEO Sam Altman echoed this in a January blog post, saying “we may see the first AI agents ‘join the workforce’ and materially change the output of companies.” By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to Gartner. Gartner also expects that, by that year, a fifth of online store interactions and at least 15% of day-to-day work decisions will be conducted by agents. source
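The multi-step autonomy that defines an agent can be sketched in a few lines. This toy loop is our illustration, not any vendor’s implementation: the planner is a hard-coded stub standing in for the LLM that products like Operator or Computer Use put in that role, and the tools are invented.

```python
# Toy agentic loop: a planner repeatedly chooses a tool to run, each result
# feeds the next decision, and the loop ends when the planner returns None.
def search_flights(destination):
    return f"3 options found for {destination}"

def book_flight(option):
    return f"booked {option}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def stub_planner(goal, history):
    """Stand-in for the LLM that real agents use to pick the next step."""
    if not history:
        return "search_flights", goal       # step 1: gather information
    if len(history) == 1:
        return "book_flight", "option #1"   # step 2: act on the result
    return None                             # done

def run_agent(goal):
    history = []
    while (step := stub_planner(goal, history)) is not None:
        tool_name, arg = step
        history.append((tool_name, TOOLS[tool_name](arg)))
    return history

print(run_agent("Lisbon"))
```

The security concerns in the article map directly onto this structure: each loop iteration acts without asking the user, so the agent’s behavior is only as predictable as its planner.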

'No AI Agents are Allowed.' EU Bans Use of AI Assistants in Virtual Meetings Read More »

Why 81% of organizations plan to adopt zero trust by 2026

VPN technologies have long been the backbone of remote access, but according to new ThreatLabz research, the security risks and performance challenges of VPNs may be rapidly changing the status quo for enterprises. The Zscaler ThreatLabz 2025 VPN Risk Report with Cybersecurity Insiders draws on the insights of more than 600 IT and security professionals on the growing risks and operational challenges posed by VPNs. It reveals that enterprises are actively grappling with the security risks, performance challenges, and operational complexity of VPNs. One striking trend: enterprises are beginning to transition en masse to zero trust solutions. Overall, 65% of organizations plan to replace VPN services within the year, a 23% jump from last year’s findings. Meanwhile, 96% of organizations favor a zero trust approach, and 81% plan to implement zero trust strategies within the next 12 months. All of these shifts, meanwhile, happen within the context of an AI-enabled threat landscape. Because VPNs are internet-connected, it has become relatively straightforward for attackers to use AI for automated reconnaissance targeting VPN vulnerabilities. This can take the form of simply asking your favorite AI chatbot to return all current CVEs for VPN products in use by an enterprise, which are then easily scanned for over the public internet. When you consider that researchers have recently discovered that tens of thousands of public IP addresses hosted by at least one of the largest security providers are being actively scanned, likely by attackers, the crux of the problem for VPNs becomes clear: if you’re reachable, you’re breachable. The report analyzes these risks in the context of enterprise concerns, plans, and adoption of zero trust strategies to secure the hybrid workforce and enable secure connectivity to private applications. Below, this blog post discusses three key findings from the report underlying these critical shifts.
For full insights, analysis, and best practices, download the Zscaler ThreatLabz 2025 VPN Risk Report today.

1. The widespread security challenges of VPNs

Virtual private networks (VPNs) were once the gold standard for enabling secure remote access. But as cyber threats evolve, VPNs have shifted from trusted tools to major liabilities. Indeed, VPN vulnerabilities are proving irresistible to attackers: 56% of organizations reported VPN-exploited breaches last year, a notable rise from the year prior. Such vulnerabilities pose a central challenge. Because VPNs are internet-connected devices, threat actors can easily probe for impacted VPN infrastructure and exploit it before any patch is released or applied. Recently, CISA issued an advisory urging impacted organizations to apply security updates for CVE-2025-22457, a known-exploited critical vulnerability that may allow unauthenticated attackers to achieve remote code execution (RCE). These gaps have become prime entry points for ransomware, credential theft, and cyber espionage campaigns that can cause widespread damage across networks. Indeed, a staggering 92% of respondents are concerned that unpatched VPN flaws directly lead to ransomware incidents, highlighting how difficult it is to continuously patch VPNs in time. Meanwhile, 93% of respondents express concerns over backdoor vulnerabilities introduced by third-party VPN connections, as attackers increasingly exploit third-party credentials to breach networks undetected.

Mapping the rise of VPN CVEs from 2020 to 2025

To understand the rise of VPN vulnerabilities, ThreatLabz also analyzed VPN Common Vulnerabilities and Exposures (CVEs) from 2020 to 2025, based on data from the MITRE CVE Program. In general, vulnerability reporting is a good thing: rapid vulnerability disclosure and patching helps the entire ecosystem improve cyber hygiene, strengthen community collaboration, and quickly respond to new vectors of attack.
No type of software is immune from vulnerabilities, nor should it be expected to be.

Figure 1: The impact type of VPN CVEs from 2020-2024, covering remote code execution (RCE), privilege escalation, DoS, sensitive information leakage, and authentication bypass. (Source: Zscaler)

How these CVEs are discovered and the information they contain reflect changes in the evolving threat landscape. In the case of VPNs, ThreatLabz found that not only have VPN vulnerabilities increased over time — in part reflecting their popularity during the post-COVID transition to hybrid work — but they are often severe. Over the sample period, VPN CVEs grew by 82.5% (note that early 2025 data has been excluded from this portion of the analysis). In the past year, roughly 60% of the vulnerabilities carried a high or critical CVSS score, indicating a potentially serious risk to impacted organizations. Moreover, ThreatLabz found that vulnerabilities enabling remote code execution (RCE) were the most prevalent kind in terms of the impact or capabilities they can grant to attackers. These vulnerabilities are typically serious, as they can grant attackers the ability to execute arbitrary code on the system. Put another way, far from being innocuous, the bulk of VPN CVEs leave customers exposed to flaws that attackers can, and often do, exploit. As enterprises race to keep pace with advancing attacker sophistication, organizations are turning to other options. Zero trust architectures are emerging as the solution for filling these security gaps. Unlike VPNs, which rely on implicit trust and broad network access, zero trust frameworks enforce granular, identity-driven access policies that directly mitigate attacker movement within networks — and remove the risk of internet- and network-connected assets that can be easily scanned for and exploited by attackers.

2. End-user frustration is driving enterprise decision-making

VPN inefficiencies aren’t just a problem for security — they’re frustrating users. Slow connectivity, frequent disconnections, and complex authentication processes have plagued VPN users for years, and these challenges top the list of end-user frustrations in our findings. According to the report, these user experience frustrations are increasingly influencing IT strategies, with enterprises looking to zero trust to deliver secure access without performance challenges or compromises. Zero trust models achieve this by bypassing centralized network dependencies in favor of direct, application-specific connections. The result? Employees gain swift and seamless access to the tools they need, while IT teams can ensure security posture checks and policy enforcement in real time. Unsurprisingly, satisfaction with zero trust solutions spans both end users and IT teams, solidifying
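The contrast the report draws (implicit, network-wide trust versus granular, identity-driven access) can be sketched as a per-application policy check. The policy schema and app names below are invented for illustration; real zero trust platforms evaluate far richer signals, such as device posture and behavior, and broker each connection individually.

```python
# Illustrative zero trust check: access is granted per application, per
# identity, never to the network as a whole. Unknown apps default to deny.
POLICIES = {
    "payroll-app": {"groups": {"finance"}, "managed_device_required": True},
    "wiki": {"groups": {"finance", "engineering"}, "managed_device_required": False},
}

def authorize(user_groups, device_is_managed, app):
    policy = POLICIES.get(app)
    if policy is None:
        return False  # default deny: unlisted apps are simply unreachable
    if policy["managed_device_required"] and not device_is_managed:
        return False  # device posture check
    return bool(set(user_groups) & policy["groups"])  # identity check

print(authorize({"engineering"}, True, "wiki"))         # True
print(authorize({"engineering"}, True, "payroll-app"))  # False: wrong group
print(authorize({"finance"}, False, "payroll-app"))     # False: unmanaged device
```

Compare this with a VPN, which effectively answers one question (can you reach the network?) once, after which everything routable is in scope; here every application access is its own decision.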

Why 81% of organizations plan to adopt zero trust by 2026 Read More »