Autonomous AI agents = Autonomous security risk

Upping the pressure: the risks from AI agents will grow quickly because AI itself evolves quickly. Rather than one human stealing employee credentials — or 50 machines orchestrated by a hacker — there will be 50,000 AI agents. They’ll move fast, learn and pivot — far faster than humans. Any semblance of control we think we have over data and systems will be fiction.

Back to basics 

As such, the next enterprise security frontier isn’t only defending against human threats — it’s also about securing the exploding universe of autonomous AI agents. CEOs and CIOs need to double down on the basics — ideally before AI agents are deployed. Needed steps include risk assessment around: 

  • Employees. At least 15% of employees “routinely access” generative AI platforms on their corporate devices, a Verizon survey shows. This greatly increases the risk of data leaks and open doors. Figure out what’s going on in your company, who’s using what and what guardrails are needed.
  • Agent permissions. If you’ve already got AI agents deployed, what do they have permission to do and what data can they access? AI agents often rely on credentials or API tokens initially provisioned with overly broad permissions for simplicity or operational speed. Over time, these broad permissions create significant security risk, as agents perform tasks and access critical resources far beyond their actual business requirements.
  • Data. What data is being uploaded and where? How strong is your data governance, meaning you know where the data came from, when, how and if it was changed and by whom? Agentic AI will exploit weak data governance like never before because AI’s ability to explore data is unprecedented.
  • Vendors. How are vendors using and securing AI agents? Where are they in your supply chain? Look for AI agents to have job functions, like ordering parts when supplies get low. You want vendors to profile agents so they’ll be more likely to spot abnormal behaviors. For instance, if the parts agent asks for supplier payment information, red flag alert. Press vendors for audits and metrics that show results. 
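The permission and profiling points above boil down to deny-by-default access control: give each agent an explicit allowlist of actions rather than a broad credential, and treat any out-of-scope request as a red flag. A minimal sketch in Python — the `AgentProfile` and `authorize` names and the action strings are illustrative, not a real agent framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentProfile:
    """An agent's identity plus an explicit allowlist of actions (illustrative)."""
    name: str
    allowed_actions: frozenset[str]  # narrow grant, not a broad API-token scope

def authorize(profile: AgentProfile, action: str) -> bool:
    """Deny by default: only actions in the agent's profile are permitted."""
    return action in profile.allowed_actions

# A parts-ordering agent should check stock and order parts -- nothing more.
parts_agent = AgentProfile("parts-orderer", frozenset({"check_stock", "order_parts"}))

print(authorize(parts_agent, "order_parts"))       # in profile: allowed
print(authorize(parts_agent, "read_payment_info")) # outside profile: red flag
```

The same profile doubles as a behavioral baseline: a parts agent asking for supplier payment information fails the check, which is exactly the abnormal behavior you want vendors to surface.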

In short, secure AI agents as you would any employee. Demand full visibility into human and non-human identities, the ability to trace AI agent interactions back to their origin, and the means to spot abnormal behaviors and detect unauthorized, anomalous or risky actions by agents across cloud, SaaS and hybrid infrastructures. By continuously auditing agent permissions, privileges and interactions, companies can better enforce policies that minimize risk exposure.
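Continuous auditing of permissions against actual activity can be sketched as a simple diff: actions taken outside the grant are unauthorized, and granted privileges that never get used are candidates for revocation. The `audit` function and its action log below are hypothetical, assuming a flat log of action names:

```python
def audit(granted: set[str], action_log: list[str]) -> dict[str, set[str]]:
    """Compare an agent's granted permissions to the actions it actually took."""
    used = set(action_log)
    return {
        "unauthorized": used - granted,       # actions taken outside the grant
        "unused_privileges": granted - used,  # over-broad grants to pare back
    }

granted = {"check_stock", "order_parts", "read_invoices"}
log = ["check_stock", "order_parts", "order_parts", "read_payment_info"]

report = audit(granted, log)
print(report["unauthorized"])       # {'read_payment_info'} -- alert on this
print(report["unused_privileges"])  # {'read_invoices'} -- consider revoking
```

Run periodically per agent, the "unused privileges" side of the report is what keeps the overly broad initial grants from persisting.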

