Datacom CISO Collin Penman on AI-powered threats and cyber fatigue

Collin Penman: I think they’re slowly maturing, I would use the word, but I’ll take a step back, because I think AI affects companies in a couple of different ways. We see AI coming through the supply chain.

A lot of the products that we’re actually consuming from a supply chain point of view now have AI embedded within them, and those products may be utilizing the data internally to learn and teach themselves, as an example.

So from a supply chain point of view, I certainly see the requirement to include AI questionnaires as part of that supply chain risk assessment.

We’re starting to see it used internally, obviously, to build out business value from an applications point of view, to really reduce a lot of the manual, repetitive tasks internally, but also to really start to engage customers and our internal staff around what we can do better with the applications that we have.

And then two things: one is the defence from AI-based security attacks, so starting to look at how AI helps us from a cybersecurity point of view; and the other, where we see a lot of discussion, is AI utilized within cybersecurity by threat actors.

And we’re really starting to see that, and not only in phishing emails. Previously, you would have seen a phishing email that was poorly worded or where the English wasn’t spectacular; now it’s very, very targeted to individuals using AI.

So those are the frameworks of what we’re starting to see.

But back to your question around how CIOs really rethink – I think they really need to think about their security posture and the landscape that they’re operating in from a market point of view, adopt AI as part of that defence mechanism, but the other thing is to upskill and educate the internal teams about the use of AI internally and with external AI providers.

And what does it mean from a development point of view as well?

Yeah, that’s interesting, the third-party risk as well. And look, it sounds like a lot of organisations think they’re more secure than they actually are.

So what are some of those common blind spots that leave them exposed, even when they think they’re covered?

Collin Penman: I think, I mean, certainly our index showed that there was a gap between where the leadership of organisations were versus the actual employees.

And the employees – their awareness and the education and training on the use of AI is certainly not there.

And I think that’s where our focus is, even within Datacom: not to stand up artificial intelligence as something separate, but to actually bring it into the current organisational processes, not only from a change management point of view but from a governance and security point of view, and even into cyber awareness training, so that we now incorporate AI training.

And what does it mean specifically for the service desk, from a deepfakes point of view or from a phishing point of view? What do we have from an AI point of view around phishing for our finance teams, and what do we look for around that?

So really, it’s to make sure that the development and the ecosystem around AI is secure, to educate and upskill the teams, but it’s also around doing the right hygiene things, you know, patching, making sure we have early detection and continuous monitoring around some of the tooling.

And I know a lot of companies are moving towards a zero trust framework around multi-factor authentication and access to applications, and certainly that’s where AI also adds a lot of value.  
