It’s a fast and furious week in the world of generative AI (genAI) and AI security. Between DeepSeek topping app store downloads, Wiz discovering a pretty basic developer error by the team behind DeepSeek, Google’s report on adversarial misuse of generative artificial intelligence, and Microsoft’s recent release of Lessons from red teaming 100 generative AI products — if securing AI wasn’t on your radar before (and judging by my client inquiries and guidance sessions, that’s definitely not the case), it should be now.
All of this news is timely, with my report covering Machine Learning And Artificial Intelligence Security: Tools, Technologies, And Detection Surfaces having just published.
The research from Google and Microsoft is worth the read, and it’s also timely. For example, one of Microsoft’s top three takeaways is that generative AI amplifies existing security risks and introduces some new ones. We discuss this in our report, The CISO’s Guide To Securing Emerging Technology, as well as in our newly released ML/AI security report. Microsoft’s second takeaway is that the detection and attack surface of genAI goes well beyond prompts, which also reinforces the conclusions of our research.
Focus On The Top Three GenAI Security Use Cases
In our research, we simplify the top three use cases that security leaders need to worry about and make recommendations for prioritizing when you need to worry about them. Security leaders securing generative AI should:
- Secure users who are interacting with generative AI. This includes employee — and customer — use of AI tools. This one feels like it’s been around awhile, because it has, and unfortunately, only imperfect solutions exist right now. Here, we focus primarily on “prompt security,” with scenarios such as prompt injection, jailbreaking, and, simplest of all, data leakage. This is a bidirectional detection surface for security leaders. You need to understand inputs (from the users) and outputs (to the users). Security controls need to examine and apply policies in both directions.
- Secure applications that represent the gateway to generative AI. Pretty much every interaction that customers, employees, and users have with AI comes via an application that sits on top of an underlying ML or AI model of some variety. These can be as simple as a web or mobile interface to submit questions to a large language model (LLM) or an interface that presents decisions about the likelihood of fraud based on a transaction. You must protect these applications like others, but because they interact with LLMs directly, additional steps are necessary. Poor application security processes and governance makes this far more difficult, as we have more apps — and more code — as a result of generative AI.
- Secure models that underpin generative AI. In the generative AI world, the models get all the attention, and rightfully so. They are the “engine” of generative AI. Protecting them matters. But most attacks against models — for now — are academic in nature. An adversary could attack your model with an inference attack to harvest data. Or they could just phish a developer and steal all the things. One of these approaches is time-tested and works well. It’s good to start experimenting with model security technologies soon so that you’ll be ready once attacks on models go from being novel to mainstream.
Don’t Forget About The Data
We didn’t forget about data, because protecting data exists everywhere and goes well beyond the items above. That’s where research on data security platforms and data governance comes in (and where I step aside, because that’s not my area of expertise). Think of data as underpinning all of the above with some common — and brand-new — approaches.
This sets up the overarching challenge, which allows us to get into the specifics of how to secure these elements. Things might look out of order at first, but I’ll explain why this is the necessary approach. The steps, in order, are:
- Start with securing prompts that are user-facing. Any prompt that touches internal or external users needs guardrails as soon as possible. Many security leaders we’ve spoken with mentioned finding that customer- and employee-facing generative AI already existed well before they were aware of it. And of course, BYOAI (bring your own AI) is alive and well, as the DeepSeek announcements have showcased.
- Then move on to discovery across the rest of your technology estate. Look up any framework, and “discovery” or “plan” is always the first step. But those frameworks exist in a perfect world. Cybersecurity folks … well, we live in the real world. This is why discovery is second here. If customer- and employee-accessible prompts exist, they are your number one priority. Once you’ve addressed those, you can start the discovery process on all the other implementations of generative and legacy AI, machine learning, and applications interacting with them across your enterprise. That’s why this is the second step. It may not feel “right,” but it’s the pragmatic choice.
- Move on to model security after that … for now. At least in the immediate future, model security can take a bit of a back seat for industries outside of technology, financial services, healthcare, and government. It’s not a problem that you should ignore, or you’ll pay a price down the line, but it’s one where you have some breathing room.
The full report includes more insights, identifies potential vendors in each category, and gives additional context on steps you can take within each area. In the meantime, if you have any questions about securing AI and ML, request an inquiry or guidance session with me or one of my colleagues.