The cybersecurity industry is in the middle of a land grab as AI security M&A heats up. In just 18 months, eight major vendors — including Check Point, Cisco, CrowdStrike, F5, and Palo Alto Networks — have spent upwards of $2.0 billion acquiring startups focused on securing enterprise AI. AI for security is already poised to disrupt the industry, but these acquisitions show that security for AI is every bit as important. While the individual deal sizes can’t match up to the larger deals we’ve seen throughout 2024 and 2025, such as the Wiz and CyberArk acquisitions, these tuck-ins show that cybersecurity M&A isn’t slowing down.
Why AI Security Is Suddenly A Board-Level Priority
Enterprise AI adoption has exploded. From customer-facing chatbots to internal coding copilots and autonomous agents, AI is now embedded in core business processes. But legacy security tools weren’t built for this — they don’t understand prompt injection, model tampering, or AI-specific data leakage. Security vendors saw the gap, and instead of building AI security capabilities from scratch, they bought them.
Who Bought What And Why
Here’s a snapshot of the deals that are reshaping the market:
Acquirer | Acquired company | Deal value | Strategic purpose |
---|---|---|---|
Palo Alto Networks | Protect AI | $650 million | Launch Prisma AI resilience |
CrowdStrike | Pangea | $260 million | Extend Falcon with AI detection and response |
Cisco | Robust Intelligence | ~$500 million (estimated) | AI model validation in security cloud |
Check Point | Lakera | ~$300 million | Embed runtime guardrails for large language models and agents |
F5 | CalypsoAI | $180 million | Add inference-layer defenses to app security suite |
Cato Networks | Aim Security | $300–350 million | Integrate AI governance into SASE platform |
SentinelOne | Prompt Security | ~$250 million | Monitor genAI use within XDR offering |
Tenable | Apex Security | ~$105 million | Extend risk management platform to AI attack surfaces |
For the acquirers: These AI security M&A deals are about more than technology. They’re a race to collect talent, reduce time to market, and maintain competitive positioning. Vendors needed innovative products, PhD-level experts, and signs of early traction with Fortune 500 customers. Most importantly: They wanted to avoid being the only major player without an AI security story.
For the acquired: The macroeconomic and geopolitical environment is volatile. Protectionist policies — in every region and country — make it tough to be an early-stage vendor that can’t build or staff to meet every country’s sovereignty requirements. Couple that with budget pressure for CISOs, and suddenly, exiting early and taking shelter within a well-capitalized mega-vendor seems like a pretty smart move.
What This Means For CISOs
The good news: AI security capabilities are coming to the platforms you already use. You won’t need to stitch together point solutions or build from scratch. You’ll get AI model scanning, prompt filtering, agent sandboxing, and AI-specific data loss prevention all integrated into your firewall, extended detection and response (XDR), or secure access service edge (SASE) suite.
The challenge: Integrations take time, so none of this will come to your favorite platform on day one, but these acquisitions should — not will, but should — be faster to integrate than some others. The acquired companies are smaller, have fewer products, and most are cloud-native platforms with comprehensive API capabilities. The platform story isn’t always unicorns and rainbows, though.
The longer view: Securing generative AI (genAI) is today’s problem, but agents are here, and agentic is just around the corner. I’ll be delivering a keynote with my colleague Jess Burn at Forrester’s Security & Risk Summit 2025 titled “The CISO Of The Agentic Future,” which explains how securing agents and agentic AI will change security programs. Come see us in Austin on November 5–7.
What To Do About It
Here’s what you’ll need to do — as these capabilities come to your existing solutions — to solve for these use cases:
- Start with discovery and genAI’s detection surface.
Nothing in security happens without visibility: You need to know where genAI exists across your technology estate. Understanding applications, users, models, and data, as well as how each intersects, is the starting point for your detection surface.
- Build cross-team bridges.
AI security isn’t just a CISO’s problem: You need to work with data scientists, developers, innovation teams, and compliance officers. Align policies for AI usage, model development, and acceptable inputs/outputs.
- Revisit vendor contracts and roadmaps.
Ask your vendors how they’re integrating their acquisitions. What features are available now? What’s coming next? Will AI security be bundled or sold separately? Push for clarity on service-level agreements, support, and pricing.
- Don’t rely solely on technology.
AI security tools help, but they’re not enough. You still need policies, training, and oversight. Update acceptable use and data confidentiality policies, educate employees on AI risks, and establish governance frameworks.