Hi, everybody. Welcome to Global Tech Tales, the show where we talk with editors from around the world about the latest technology and leadership topics to find out what buyers want. I'm Keith Shaw, co-hosting along with Matt Egan, the global content and editorial director at Foundry, who is also representing the UK on this global podcast. And joining us on this episode is Andrea Benito. She is the editor of CIO Middle East for Foundry.
Welcome, everybody. Hello, hello, thank you. The episode we're going to talk about today is managing risk in an AI world. So we're going to talk a lot about security, but also about risk. And when we start the show, we talk about some statistics. From our friends at IDC, in their worldwide responsible AI survey, more than 30% of respondents noted that the lack of governance and risk management solutions was their top barrier to adopting and scaling AI. More than 75% of those who use responsible AI solutions reported improvements in data privacy, customer experience, confident business decisions, and brand reputation and trust. Organizations are also increasing their investments in AI and machine learning governance tools: 35% of AI organizations' spend in 2024 was allocated to governance tools, and 32% to professional services. However, another survey by PwC showed that while 73% of executives said they currently use or plan to use generative AI in their organizations, only 58% of them have completed a preliminary assessment of AI risks in their organization. So as we think about these statistics, I want to hear what people are talking about around the world, from the IT leader perspective. What are some of the risks that they are discovering that need to be managed? So Andrea, why don't we start with you? What are you hearing in your side of the world?
It depends who you ask: whether you are talking to a CIO or to a CISO. AI, for example, is an open door for CIOs, a great opportunity, but it's also a back door for hackers and more sophisticated attacks. So what I hear is that CISOs need to be proactive. Of course, they can modernize cybersecurity measures, and AI-driven risk management strategies are becoming more integral across industries, but evolving trends are also emerging due to AI. So CISOs see great potential, but because of the volume of data being generated and all the new attacks coming through, they need to find the difference between true threats and false alarms within those vast volumes of data. I can say that, for example, in the UAE, where I'm based, there is a proactive approach to AI. We have a national artificial intelligence strategy for 2031, and we have seen a huge increase in the use of AI, especially in the healthcare sector, but also in oil and gas, where they are integrating AI for predictive maintenance and logistics. That's one of the huge sectors across the Gulf region, oil and gas.
Yeah, but as I say, it's a great opportunity for CIOs, but a back door for more sophisticated and new attacks. So CISOs see AI as a kind of threat, and they fear the impact of having AI everywhere, in everything.
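Andrea's point about separating true threats from false alarms is essentially a triage problem. Below is a minimal sketch of what that might look like, assuming a hypothetical anomaly score produced by an ML detector; the event fields, weights, and threshold are illustrative assumptions, not features of any specific vendor tool the speakers mention.

```python
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    source: str              # e.g. "email-gateway", "vpn", "endpoint"
    anomaly_score: float     # 0.0-1.0, from a hypothetical ML detector
    asset_criticality: int   # 1 (low) to 5 (crown-jewel system)

def triage(events, score_threshold=0.8):
    """Split events into likely true threats and probable false alarms.

    Purely illustrative: real SOC triage weighs many more signals
    (threat intel, user context, historical baselines).
    """
    escalate, review_later = [], []
    for e in events:
        # Weight the raw model score by asset criticality, so a medium score
        # on a critical system still gets human attention.
        weighted = e.anomaly_score * (1 + 0.1 * e.asset_criticality)
        (escalate if weighted >= score_threshold else review_later).append(e)
    return escalate, review_later

if __name__ == "__main__":
    events = [
        SecurityEvent("email-gateway", 0.55, 5),
        SecurityEvent("vpn", 0.92, 2),
        SecurityEvent("endpoint", 0.30, 1),
    ]
    hot, cold = triage(events)
    print(f"{len(hot)} alerts escalated, {len(cold)} queued for review")
```

The weighting by asset criticality is one possible way to express the judgment Andrea describes: not every high score is a true threat, and not every modest score is a false alarm.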
And Matt, I wanted to ask you about some of the different areas of risk. Do you think that data privacy is the biggest issue, or are there other areas where potential risks could be jumping in, whether it's generative AI that could create a vulnerability, or things like that, or the risk of putting AI into a company's existing products? Or is data privacy the big chunk?
I mean, it's certainly one of the biggest, if not the biggest. It's really interesting, actually, listening to Andrea's perspective, and Keith, you and I, you know, I'm in the UK, and we're both very focused on the US market. IT leaders, literally every single one, tell us that they are either scoping, trialing or implementing AI projects. And in our own recent research, our AI Priority Study, 98% of IT decision makers say they see challenges with AI deployments, and a big part of that, to your point, is the unknowns, right? AI is a very powerful thing. It can exponentially accelerate good outcomes, which means, and Andrea touched on this, it can accelerate bad outcomes too.
And in the same piece of research, and certainly in anecdotal conversations, 96% of IT decision makers say they have difficulty addressing the ethical implications when implementing AI technologies, and 44%, to your point, Keith, specifically say they have concerns over data privacy. And a really interesting stat, I think, is that 30% of surveyed IT decision makers say they believe organizations are moving too fast, specifically with generative AI. None of these things are directly about managing risk, but it's all related, right? There are unknowns. AI is an accelerator. IT leaders, and Andrea spoke about this, the CIOs, are kind of under pressure to go and play in this space and find out how success can be achieved. And you touched on this, Keith: I think data privacy is a big part of this, but I think we can break down the risks into two different approaches.
One is the use of AI in existing internal processes and operations, and the other is the building of AI into products and services that go out externally. And they offer two completely different types of challenges. On the internal side, you have to manage the risks of things going badly wrong, and that is where the data privacy thing really comes in, right? You could be creating vulnerabilities that could lead to a data breach. There's a risk of you misusing data because you don't know, in the end, how a generative AI in particular is going to adapt and continue to use data. There's an ethical risk. On the other hand, if you're building customer products and services that include elements of AI, you're putting that risk, all of those things, into the hands of your customers. And on that side especially, but in both cases, you have to consider the supply chain. Every platform it touches, every vendor that's involved: is it open source? What about your storage and connectivity? Each link in the chain offers risks that need to be recognized and managed. And I do think, to your point, Keith, at the root of all of this is privacy. As well as the ethical use of data, there's also this potential that you're going to create some kind of vulnerability
because of the accelerant nature of AI. You know, I really enjoy having these conversations, because sometimes it doesn't matter where you are based. I'm based in Dubai, covering the Middle East, and every CIO is having the same issues, and I completely agree with everything Matt has said. When I attend events, you know, the technology is moving so fast, but we humans and tech leaders are not moving that fast. Generative AI is the hot topic now; it was cloud in the past, blockchain in the past as well, and my CEO and senior leaders are coming to me saying, we need to implement AI, we need to have this, we need to implement generative AI. And every CIO at these events is asking, but why? What's the goal of doing this? It's the hot topic, so everyone is saying we need to have AI. But what is the value? Why? What do you want to get from AI, and what is the value that you want to achieve by implementing these solutions?
Do you think the business leaders understand that, or do you feel like the IT leaders are sometimes seen as the bad guy, or the traffic cop holding up the stop sign while everybody else is rushing toward them going AI, AI, AI, and either the CIO or the CISO is saying, whoa, we can't do this because of these risks? When you talk to the CISOs, do they feel bad about maybe trying to slow it down or hold it up? Anybody?
No, I was going to say that I don't think tech leaders are just tech leaders anymore. They are business leaders. They are involved in every board-level conversation. I think the CEOs and senior leaders are well aware of the importance of investing in digital transformation and in cybersecurity, we all know that, and they are part of the conversations, and CIOs are the right hand of CEOs. But still, we have not achieved that same level of conversation yet. They know they have to do it, they know they need to work with the CIO, but we still have not reached 100% understanding of what it means and what back doors it is also going to open.
And this is something we talk about in every conversation, Keith: the idea of the CIO, the IT leader, as the agent of transformation. But we carried a report on CIO.com recently which said that CFOs were extremely, not negative necessarily, but cautious around AI. And you can see that from a cost perspective, right? Like, we've spent the past
18 months trialing big projects, and to Andrea's point, with the IT leader being the agent of transformation. But, you know, as I think was behind the way you framed that question, Keith, there are lots of organizations where AI might be the answer, but no one's quite sure what the question is. But the other thing that is impacting the CFO mindset is definitely this idea of risk, right? We're introducing things, and we don't know what the outcomes are, and we've got regulation to deal with. And it definitely feels like, in that context, IT leaders, CIOs specifically, are kind of in the middle. It varies from org to org, and it varies from individual to individual, but what I hear often is that the CIO feels like, on the one hand, their IT strategy has to almost drive business strategy these days, which is a whole new thing, but on the other hand, they've got to make that strategy work, which means the infrastructure piece is huge. So they're kind of both driving transformation and managing risk with their CISO partners. It can be quite an invidious position to be in, but it can also be quite a cool position to be in, in the right organization.
When you talk to these CSOs and CIOs out there, how confident are they in their data? Because generative AI came on like a wave, and a lot of companies, at least the ones we were talking to here in the US, were still in the middle of a lot of data transformation and digital transformation projects. So there was a feeling that maybe they didn't have their data ready for generative AI yet; they're still working on that. But what are we hearing in other parts of the world? From my perspective, from my conversations with CIOs, there is definitely a lot of tension. They don't think the data is ready. The data is there, and they can get value from it, but they don't know where to start; not all the solutions have been implemented yet, so they think they are spending a lot of time getting that data ready to be used. And there is still a lot of data that, they say, is literally trash. It's rubbish, it's wasting my time, and I'm not able to use this data anymore. And that is the real fear, all the garbage that is out there, right? That's the term we hear here: garbage in, garbage out. So, yeah, yeah.
You're hearing the same thing? Yeah. I mean, it varies from org to org, of course it does. But exactly what Andrea was speaking to is, I think, the case, and again, this can be that issue of the difference in perception between a business leader and the person who has to implement the strategy. We've talked about this before many times, Keith, but if you are the CIO, not only are you kind of responsible for driving forward an innovative strategy, but you're also responsible for the data, the infrastructure, cloud and connectivity, and the security with your CISO partner, and those are the big pieces that need to be in place before any AI project can succeed. I think the challenge, and this is definitely a risk management issue, is that you cannot afford any one of those elements to be suboptimal. And with data especially, the marketing department might not want to accept, or may not even know, that their data is trash. And so that can be quite a difficult perception to challenge if you're the person who's in charge of implementing something that can't succeed until that heavy but hidden work is done.
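Matt's "heavy but hidden work" on data is, in practice, a readiness gate before anything reaches an AI pipeline. Here is a minimal, hypothetical sketch of such a gate; the field names, rules, and 90% threshold are illustrative assumptions, not taken from any tool discussed in the episode.

```python
from typing import Iterable

# Hypothetical fields a downstream AI pipeline requires.
REQUIRED_FIELDS = {"customer_id", "country", "consent", "last_updated"}

def is_usable(record: dict) -> bool:
    """Very rough 'garbage in' filter: complete, consented, and non-empty."""
    if not REQUIRED_FIELDS.issubset(record):
        return False                      # missing fields -> unusable
    if record["consent"] is not True:
        return False                      # no consent -> privacy risk
    return all(v not in (None, "", "N/A") for v in record.values())

def quality_report(records: Iterable[dict], min_ratio: float = 0.9):
    """Return usable records and flag whether the batch is ready for AI use."""
    records = list(records)
    usable = [r for r in records if is_usable(r)]
    ratio = len(usable) / len(records) if records else 0.0
    return usable, ratio, ratio >= min_ratio

if __name__ == "__main__":
    sample = [
        {"customer_id": 1, "country": "AE", "consent": True, "last_updated": "2024-11-02"},
        {"customer_id": 2, "country": "", "consent": True, "last_updated": "2023-01-15"},
        {"customer_id": 3, "country": "UK", "consent": False, "last_updated": "2024-06-30"},
    ]
    usable, ratio, ready = quality_report(sample)
    print(f"{len(usable)} of {len(sample)} usable ({ratio:.0%}); ready for AI: {ready}")
```

A report like this is one way to make the "your data is trash" conversation concrete for a line-of-business owner, rather than a matter of perception.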
I know we brought up vulnerabilities earlier and the potential for data breaches. How concerned are the IT leaders you talk with about the use of AI by some of these bad actors? Do they feel like they have to now be even more vigilant in their protection schemes because of better social engineering capabilities, insider threats, things like that? And are they also looking at AI to help them combat these new threats? Andrea, why don't we start with you? Well, we will talk later about AI regulations, because we are saying that every CIO is having the same issues, but AI regulations are different in all the countries where we are based. Before going deeper into that, though, I wanted to say that, for example, AI-enabled ransomware is one of the biggest fears for companies here. I mean, CISOs are asking and searching for AI tools capable of preventing zero-day attacks, for example. Yeah, definitely, definitely, and I think it's probably a worldwide situation as well, with the speed of how AI can do that. It's an arms race, right? It's an arms race. AI enables the acceleration of everything, so it is definitely enabling the acceleration of defense, but it needs to, because bad actors are able to increase threats exponentially. And if I could make this link, you know, governments are pretty slow to regulate against these things. It tends to be that the bad actors and the AI can move more quickly. So regulation is a big deal, right? Yeah, all right, that's a great segue.
I want to get back to Andrea, talking about some of the regulations she's seeing in the Middle East. When we talked before the show, you had mentioned Amazon AWS, for example. Like, there are regulations that prevent companies from using that service in your area. Is that right, or did I miss that? Yeah, yeah, that's right. I mean, many countries here in the region are aligning with GDPR, but it depends. With some sensitive data, like healthcare and banking, companies have to be very careful about where they store the data.
That's why some companies haven't fully embraced public cloud, like AWS, Azure or Google Cloud, because of data residency concerns, and that's why CIOs are looking at more hybrid models. But I'm pleased to see, in the last few years, all the advancements major cloud providers are making to build local data centers. For example, AWS is going to invest 5.3 billion USD to build data centers in Saudi Arabia to boost technology in the region. Another great example we have seen is Oracle: they just opened their second cloud region in Saudi Arabia. The first one was in Jeddah, on the Red Sea, and now they have just opened Riyadh, and they are going to open one in NEOM in the north, plus two more data centers in the UAE, in Dubai and Abu Dhabi. I think that is, in total, 1.5 billion USD of investment just in Saudi Arabia, not counting the UAE. And this has all happened just in the last four years. So, yeah, I'm really looking forward to seeing what these cloud providers are going to do in the region in the coming years.
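Andrea's point about data residency, that sensitive healthcare or banking data has to stay in approved locations while other workloads can sit in public cloud, amounts to a placement policy in a hybrid model. A minimal, hypothetical sketch is below; the data classes, region names, and functions are illustrative assumptions, not any specific regulation or cloud provider feature.

```python
# Hypothetical residency policy: which cloud regions may hold which data classes.
# Values are illustrative; real rules come from local regulators and legal teams.
RESIDENCY_POLICY = {
    "healthcare": {"me-central-1", "me-south-1"},               # must stay in-region
    "banking":    {"me-central-1"},                              # stricter: one region
    "general":    {"me-central-1", "eu-west-1", "us-east-1"},    # may leave the region
}

def allowed_regions(data_class: str) -> set[str]:
    """Look up where a given class of data may be stored or processed."""
    return RESIDENCY_POLICY.get(data_class, set())

def check_placement(data_class: str, target_region: str) -> bool:
    """Return True if storing this data class in target_region complies with policy."""
    return target_region in allowed_regions(data_class)

if __name__ == "__main__":
    for data_class, region in [("healthcare", "eu-west-1"),
                               ("healthcare", "me-central-1"),
                               ("general", "us-east-1")]:
        verdict = "OK" if check_placement(data_class, region) else "BLOCKED"
        print(f"{data_class:10s} -> {region:13s}: {verdict}")
```

In a hybrid setup, a gate like this would sit in front of any pipeline that moves data between on-premises systems and a public cloud region.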
And Matt, I wanted to bring up things that are going on in the UK and the US, where we’re starting to see some political situations arise too, right?
Yeah, I mean, the UK specifically is in a unique, and potentially uniquely bad, situation in the sense that we're outside the EU. But we have tended, as with the UAE actually, to map to EU regulations. So GDPR, for instance, is not enforced in the UK as such, but our laws mirror it, kind of thing. But we're also an international hub, and so pretty much every organization, even if it's something like a university or a public sector organization, needs to operate to international standards and manage risk with that in mind. But at the same time, our economy is in the toilet, basically. So the UK government needs something to kickstart that economy, particularly since we left the EU, and sees AI and deregulation as a means of doing that, which kind of makes it a bit of an unholy mess if you happen to be an IT leader on this island, because there are huge amounts of expectation and lots of unknowns to manage. And then at the same time, Keith, you know, we can speak to this, right? Like in the US, the political situation means that
we're seeing similar, but maybe even more accelerated, versions of those trends, I think.
Yeah, it feels like Trump is going to kind of be hands-off on AI, but you never know. I mean, it's such a wild card situation, especially since he's working with Elon Musk, and it feels like, right now, they're probably not looking at AI, but at some point they could. I think they might pull back from some of the Biden regulations and executive orders that were issued. But I wouldn't put it past them to suddenly come in and say something like a data storage requirement, for example. I know that they're big on making sure that data storage stays in the US, or doesn't go to some of their enemies, so that could be something companies would have to worry about as well. You have to look kind of beneath the surface at what's driving a lot of this, and it's economics, right? Like, most of the major Western economies are struggling somewhat. And I'll give you an example: there's been quite heavy lobbying in the UK for the government to roll back a little on intellectual property and copyright as it relates to AI, which, obviously, in our industry, as creators of content and intellectual property, is not something we want to hear. That's what our business values. And, you know, some quite strong voices within the UK government seem to be impacted by this lobbying, because they see that maybe if you deregulate copyright, there's some kind of accelerator for business. And I do see similar trends in the US, where it's definitely the case
that the big tech companies have got more of an influential voice now than they have had. And how do you manage risk there, right, if things have been deregulated? I mean, history tells us, time and again, that when financial markets are deregulated, in the end something goes boom, right? And from an IT decision-maker perspective, that feels like a very challenging situation to be in: trying to map to multiple different governments and the regulatory trends they're trying to handle. I mean, to Andrea's point, those are very controlled regulatory environments, which certainly feels less the case in the US and the UK than it was, right? There's one more area I want to touch upon, and that's employee training.
Do you feel like risk management is an area where companies will need to invest more in employee skills and training, as we're hearing for a lot of other areas of AI? Or do they feel like they can accomplish this with their existing risk management tools, or maybe by asking their vendors to just provide AI capabilities within existing tools? Where do people stand in terms of employee skills training?
Tech leaders in the Middle East, of course, recognize the importance of employee skills training for effective risk management, not only in the IT department but across the workforce. They always tell the board members: we need to invest in skills training for effective risk management. While advanced platforms are valuable, investing in training ensures employees can use these tools to manage risk better. We've seen so many times that the issue comes from an ordinary employee who didn't know how to use the right tools, or how to handle email when facing attacks. So it's not only about the tech department anymore. Companies, board members and CEOs need to know that part of the budget has to go to employee skills training, especially for effective risk management.
Okay. And Matt, do you have anything else that you wanted to add?
This is a classic case where the perfect skill set, the perfect tool, doesn't exist. You cannot create a 100% risk-free environment. It is about having an individual, or individuals, who have full accountability and responsibility for figuring things out. But it is also definitely about every individual within the organization having the right level of training and context to understand the risks we're dealing with. And, you know, you have to hope a little bit. It's like bringing up children, right? You have to train people in the right way and hope that they make the right decisions, but you can't control it. So there's a level of investing in all of your people, sharing accountability and responsibility, and, yeah, hoping for the best. It's risk management, not risk elimination.
I do have a feeling that we're going to see more security training efforts add in AI capabilities, to make sure people can recognize some of these AI phishing attempts, or other kinds of data leak issues. Okay, I want to end the show with our vote. I always try to do a yes or no question, and we always get yes, no, maybe so, or it depends, so I'm going to allow for the use of "it depends." I'm wondering whether you feel like CISOs and the other IT leaders are going to be able to properly manage the risk associated with AI this year. Or, in order to speed up projects, will we start seeing more projects deployed because
CISOs will feel more confident about the data? Or is it going to be just kind of the same level that we're seeing right now? Well, I'm sorry, I know you said you're looking for a yes or no, but in my case, it's going to be: it depends. That's because, for example,
I mean, chief security officers can manage AI risk, but it depends on some factors, like the availability of the right tools, employee training, and the company's commitment to cybersecurity. Okay. And I think, for the first time ever, Keith, I am going to give a definitive answer and not say it depends, because I don't think anything is stopping this train rolling, and I think risk management is a problem for IT and the other departments to solve. I don't think it's a blocker for moving forward. I think, actually, the blocker around AI currently is return on investment. That's the biggest challenge for those implementing AI projects in 2025. I think the challenge for those people who have to manage the
risk is that the risk isn't necessarily recognized by their line-of-business leaders. So I don't think it will stop this train from rolling forward pretty fast.
Yeah, I'm also going to say that it's going to be a yes: they will be able to properly manage the risk. I think they will kind of ease up off of the brake. I think they should be able to get a handle on some projects. There might be some other projects where they still have to hold up that stop sign, but for the most part, I think they are not going to be a roadblock. And to your point, Matt, it might be the CFOs and the people holding the budgets who slow it down. Okay, all right.
So again, thank you all for participating. We're going to be back next month with more global editors to talk about preparing data for AI usage in analytics, another great AI topic. And that's all the time we have for the show today. Feel free to add any comments you have below, and check out our other TECHtalk shows, such as Today in Tech, CIO Leadership Live and DEMO, if you are interested in seeing B2B product demonstrations. I'm Keith Shaw. Thanks for watching, everybody.
Global Tech Tales: What Buyers Want | Episode 5: Managing risk in the AI world
