CIO

SAP change management still challenges enterprises

And while 93% of SAP leaders think their teams have the right skills to manage SAP changes, the survey noted that in reality those teams can handle smaller, IT-centric changes but can’t keep up with business needs. That leads to outsourcing of many of the processes, from strategy and planning (49%) and requirements definition (46%) through deployment and cutover (38%) and post-go-live support (43%). While that solves the immediate problem, the survey pointed out, it creates extra cost and perpetuates the internal skills gaps.

Automation is key

So change managers are looking for new ways of bridging the skills gap, such as the use of AI, with 87% of respondents saying they believe the technology will influence SAP change management in the next five years. Despite roadblocks such as resistance from stakeholders or leadership (42%), regulatory or compliance constraints (39%), and privacy and security concerns (36%), AI is in the cards for SAP change management, above all to automate manual processes (59%). “I am still surprised that we come across customers in that large, complex kind of category that are not using tools for this in this day and age,” Lees said. “They should have been doing this already 10-15 years ago.” source

SAP change management still challenges enterprises Read More »

5 challenges every multicloud strategy must address

As organizations expand across multiple hybrid cloud platforms, maintaining clear visibility becomes increasingly complex, says Chris Thomas, principal at Deloitte Consulting. “Without robust, cross-platform observability, it’s easy for issues to go undetected, impacting everything from compliance to customer experience,” he says. To address these challenges, organizations are adopting unified monitoring solutions that aggregate data from all hosting platforms, enabling real-time insights and proactive response, Thomas says. “Organizations should give thought to using AI-driven analytics to identify patterns and anomalies that might otherwise be missed in siloed environments,” he says.

Cost concerns

Keeping costs under control in a multicloud environment can be a challenge for enterprises. This is deeply intertwined with effective financial management, often referred to as FinOps, Simari says. FinOps is a cloud management practice focused on optimizing spending by fostering collaboration among IT, finance, and business teams. source
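To make the unified-monitoring idea concrete, here is a minimal, purely illustrative sketch (not a vendor implementation): latency samples from several platforms are pulled into one place and screened with a simple z-score check, so an outlier on any platform surfaces in a single pass. The provider names, metric values, and threshold are assumptions for the example.

```python
from statistics import mean, stdev

# Hypothetical per-provider metric feeds; in practice these would be pulled
# from each platform's monitoring API and normalized into one schema.
provider_metrics = {
    "aws":   {"p95_latency_ms": [120, 118, 125, 122, 119, 121, 124, 123, 120, 410]},
    "azure": {"p95_latency_ms": [130, 128, 131, 129, 127, 132, 130, 129, 128, 131]},
    "gcp":   {"p95_latency_ms": [110, 112, 109, 111, 113, 110, 108, 112, 111, 109]},
}

def anomalous_samples(series, threshold=2.0):
    """Return indexes of samples more than `threshold` standard deviations from the mean."""
    if len(series) < 3:
        return []
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(series) if abs(v - mu) / sigma > threshold]

# One loop over every platform instead of a dashboard per silo.
for provider, metrics in provider_metrics.items():
    for name, series in metrics.items():
        hits = anomalous_samples(series)
        if hits:
            print(f"[{provider}] anomaly in {name} at sample(s) {hits}")
```

In a real deployment the same screening would run over streaming metrics and feed an alerting or AIOps pipeline rather than print statements.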

5 challenges every multicloud strategy must address Read More »

CIO ASEAN with McKinsey: Transformative role of AI and GenAI in the insurance sector in Southeast Asia

Overview

In our latest CIO ASEAN Leadership Live session, I, Estelle Quek, Editorial Director of CSO ASEAN, engaged Violet Chung, Senior Partner at McKinsey, on how artificial intelligence (AI) and generative AI (GenAI) are reshaping the insurance sector across Southeast Asia. Our conversation was structured around key questions that CIOs and senior technology leaders must address to move from experimentation to enterprise transformation. We began by exploring Southeast Asia’s relatively low AI maturity scores, despite its digital readiness. Chung pointed to the need for holistic transformation that goes beyond technology procurement, emphasizing leadership alignment, change management, and scalable enterprise capabilities. I asked how insurers should evolve their business models as embedded insurance gains traction. Chung highlighted the importance of agile partnerships with non-financial players and the co-creation of customer-centric value propositions. On GenAI, we explored what lessons CIOs can draw from early pilots. Chung cautioned that many initiatives lack integration and measurable ROI. She stressed the importance of infrastructure flexibility, workforce enablement, and a clear roadmap for scaling. I also asked how insurers should approach AI governance, particularly in underwriting and customer-facing applications. Chung advocated for embedding compliance from the design phase, maintaining human oversight, and ensuring data transparency. Finally, we examined the readiness of insurers to scale GenAI amid legacy systems and siloed data. Chung recommended infrastructure-agnostic platforms and agentic architectures, supported by public-private collaboration to close protection gaps and accelerate innovation. source

CIO ASEAN with McKinsey: Transformative role of AI and GenAI in the insurance sector in Southeast Asia Read More »

AI agents are coming. Your data isn’t ready.

Agentic AI will transform how work gets done—but only if it runs on trusted, unified real-time data. The leaders are building that foundation now. AI agents are moving from hype to reality, promising to automate decisions and workflows without constant oversight—but speed and autonomy mean nothing if they run on bad data. Gartner warns that by 2026, 60% of AI projects without an AI-ready data foundation will be abandoned, yet 63% of data leaders admit they lack—or aren’t sure they have—the right data practices. The gap between traditional data management and AI-ready data is where initiatives fail, and where the real work to unlock agentic AI must begin.

Data silos: Yesterday’s problem, today’s AI risk

One of the biggest obstacles to the agentic future? Data silos. In an ideal world, all enterprise data would be seamlessly unified and accessible in real time—a vision promised decades ago with the arrival of data warehouses and repeated with every new wave of data platforms. But the reality is harsher. Especially for companies with legacy systems and decades of accumulated tech debt, data remains scattered across functions, geographies, and applications. In the age of agentic AI, those silos aren’t just an inconvenience—they’re a direct threat to performance. An AI agent that can’t see the complete, current picture will make decisions based on partial truths, eroding trust and compounding errors at scale.

Breaking those silos isn’t just an IT exercise—it’s a strategic imperative. AI agents thrive on connected, trusted, real-time data that flows freely across the organization and its ecosystem. When every decision—human or machine—is informed by the same up-to-date, context-rich intelligence, errors shrink, efficiency grows, and the speed of action becomes a competitive weapon. This requires rethinking data architecture from the ground up. Instead of treating data as a collection of static repositories, it must become a living, interoperable network—one that updates continuously, scales easily, and speaks a common semantic language so every agent, application, and analyst is working from the same definitions.

Figure 1: The future with AI in the enterprise

Leaders who make this shift now will have a decisive advantage: the ability to deploy AI agents that not only act quickly, but act correctly—accelerating innovation, sharpening decision-making, and building trust with every automated interaction. Those who wait will be left with fragmented pilots and missed opportunities, watching competitors pull further ahead.

Trusted data: The gatekeeper to agentic AI

In the history of enterprise technology, few shifts carry as much transformative potential—and as much risk—as the rise of agentic AI. These systems don’t just assist users; they reason, decide, and act. Across industries, they are already reshaping how decisions are made, how work gets done, and how businesses engage with customers, partners, and markets. But here’s the real challenge: agentic AI doesn’t need more data—it needs trusted data, delivered at the speed and scale every enterprise is striving to reach. Research shows that while most leaders recognize the promise of agentic AI, only a fraction feel truly prepared to capture its value. What’s holding them back? The absence of a trusted data foundation underpinning business operations. Agentic AI cannot succeed without a shift in how organizations unify, govern, and operationalize their data. It’s not enough to have a data lake or a dashboard. These systems require context-rich, relationship-aware data delivered in milliseconds—data that reflects transactions, interactions, and dependencies across the enterprise, not just static records in isolation.

For organizations serious about scaling agentic AI responsibly, the first step isn’t deploying the agent—it’s building the semantic data layer that ensures every decision is based on accurate, complete, and current information. Without that foundation, automation becomes risky. With it, agentic AI can operate at full potential: fast, safe, and aligned with business goals.

Act before the agents arrive

Agentic AI is not a distant concept—it’s here, and it’s maturing quickly. The companies that will lead in the Age of Intelligence are those building their trusted, real-time data backbones today. Delay, and you risk deploying fast, capable AI agents that make bad decisions faster than you can correct them. Act now, and you position your organization to operate at the speed, scale, and confidence the future demands. For leaders ready to move from vision to execution, our white paper, 10 Data Rules to Win in the Age of Intelligence, offers a strategic blueprint: a practical framework for architecting the unified, trusted, and interoperable data foundation that agentic AI demands. Read it, share it, and start building the intelligence layer your business will rely on for the next decade. source
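As a purely illustrative sketch of what a thin semantic layer might look like (the metric, its definition, and the resolve helper are invented for this example, not taken from the white paper): business terms resolve to one canonical, timestamped record, so every agent, application, and analyst works from the same definition.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CanonicalMetric:
    """A single, agreed-upon definition that every consumer resolves to."""
    name: str
    definition: str
    source_system: str
    value: float
    as_of: datetime

# Hypothetical semantic layer: business terms map to one canonical record,
# instead of each silo keeping its own slightly different "revenue".
SEMANTIC_LAYER = {
    "net_revenue": CanonicalMetric(
        name="net_revenue",
        definition="Gross revenue minus returns and discounts, consolidated daily",
        source_system="finance_warehouse",
        value=1_284_500.0,
        as_of=datetime(2025, 6, 30, tzinfo=timezone.utc),
    ),
}

def resolve(term: str) -> CanonicalMetric:
    """Every agent, application, and analyst goes through the same lookup."""
    try:
        return SEMANTIC_LAYER[term]
    except KeyError:
        raise KeyError(f"'{term}' has no canonical definition; refuse to guess") from None

# An AI agent asking for "net_revenue" gets the same answer a dashboard would.
metric = resolve("net_revenue")
print(f"{metric.name} = {metric.value} (as of {metric.as_of.date()}, from {metric.source_system})")
```

The design point is the refusal to guess: an agent either gets a governed, current definition or an explicit error, never a silo-specific approximation.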

AI agents are coming. Your data isn’t ready. Read More »

Neuromorphic computing and the future of edge AI

Equally important, neuromorphic AI is directly tackling the SWaP problem that prevents conventional AI from running effectively at the edge. In 2022, more than 112 million IoT devices were compromised, and IoT malware surged by 400% the following year. Neuromorphic processors, such as Akida 1000, address these challenges by delivering on‑device, event‑driven anomaly detection without heavy infrastructure requirements. This positions neuromorphic SOC technologies as a practical path to securing IoT, UAVs and critical infrastructure endpoints that cannot support traditional AI models.

Market and strategic implications

Darwin Monkey 3 symbolizes more than a technological achievement; it reflects geopolitical competition in next‑generation AI hardware. The ability to deploy neuromorphic systems across healthcare, ICS, defense, logistics and security may shape both national resilience and private‑sector competitiveness. Importantly, as Furber notes, the hardware is ready — but the ecosystem isn’t. Development tools akin to TensorFlow or PyTorch are still emerging (e.g., PyNN, Lava), and convergence toward standards will be crucial for widespread adoption (IEEE Spectrum, 2024). Adding to this, a 2025–2035 global market forecast projects significant growth in neuromorphic computing and sensing, spanning sectors such as healthcare, automotive, logistics, aerospace and cybersecurity. The study profiles more than 140 companies, from established giants like Intel and IBM to startups such as BrainChip and Prophesee, which are releasing joint products now, underscoring the breadth of investment and innovation. It also emphasizes challenges in standardization, tooling and supply chain readiness, suggesting that the race will not just be technological but also commercial and regulatory. source
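As a loose software analogy to the event-driven idea (this is ordinary Python, not how a neuromorphic chip such as Akida actually computes, and the event stream, decay rate, and threshold are invented): work happens only when an event arrives, and evidence accumulates in a leaky counter that "fires" an alert once a burst pushes it over a threshold.

```python
# Event-driven anomaly detection, sketched in plain Python as an analogy to
# spiking/neuromorphic processing: work is done per event, not per clock tick.

# Hypothetical event stream: (timestamp_seconds, packets_in_burst) from an IoT sensor.
events = [(0.0, 3), (1.2, 2), (1.3, 4), (1.35, 40), (1.4, 55), (1.45, 60), (9.0, 2)]

DECAY_PER_SECOND = 0.5   # leaky accumulator: evidence fades between events
FIRE_THRESHOLD = 50.0    # "spike" (alert) when accumulated evidence crosses this

def detect(stream):
    potential, last_t = 0.0, None
    for t, magnitude in stream:
        if last_t is not None:
            # Decay only over the elapsed gap; no work while the line is quiet.
            potential *= DECAY_PER_SECOND ** (t - last_t)
        potential += magnitude
        last_t = t
        if potential >= FIRE_THRESHOLD:
            yield t, potential
            potential = 0.0  # reset after firing, like a neuron after a spike

for t, evidence in detect(events):
    print(f"anomaly flagged at t={t}s (accumulated evidence {evidence:.1f})")
```

Isolated low-magnitude events decay away unnoticed, while the tight burst around t=1.4s accumulates fast enough to trigger alerts; that sparsity is what keeps size, weight, and power low at the edge.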

Neuromorphic computing and the future of edge AI Read More »

What rural healthcare taught me about digital transformation

Lesson 2: Redesign processes before adding tech

The next step was to look at processes. At first, we thought digitizing existing workflows would be enough. But what we discovered was that many of the processes were themselves inefficient or inconsistent. For example, record-keeping varied from nurse to nurse. Some used notebooks, some scraps of paper, some nothing at all. If we had simply built a digital tool to replicate this, we would have digitized chaos. Instead, we worked with the nurses to create a simple, standardized process for patient intake and follow-up. We created a complete process map with the nurses of what happens when someone visits a healthcare facility. We co-created the complete patient journey in a physical setup. Only once this process was agreed did we map the journey onto the user screens, so the nurses got the same experience digitally. And when we rolled the system out in the field, we again had all the nurses recall the entire manual process and showed them, step by step, how it worked on the digital platform. source

What rural healthcare taught me about digital transformation Read More »

Building a solid foundation to support AI adoption at Bentley

On determination: Years ago, there was a program called Tomorrow’s World, and it was all about the future and what that could look like. I was fascinated by it because in one episode, there was a woman presenter. So that inspired me then, and showed you could aspire to that. I went into my first role working in a software development division in the mid-90s where everyone was frantically recoding everything because of Y2K. Those were my very early days in technology. But even then, I had female role models who made it seem accessible. I was curious but not very academic at school, so I felt lucky to get a job in technology. Being able to learn coding was very interesting, and I found my routine, and continued to grow and move through the ranks.

On data: I always say there’s no AI without data. So the thing we’ve been working on for the last two years is the data strategy, understanding the governance, the framework, and everything else we need there as foundations. We’ve done a lot of work around data literacy and upskilling in the organization, and we’ve been doing that in preparation for AI because we know everybody wants it, and they want it now, but they don’t necessarily know what they want it for. So it’s about creating that safe space where people can test and learn. I’ve been working in partnership with the chief strategy officer to say this needs to be a joint business and IT strategy around data and AI. It can’t come directly from IT alone. We need to work together with the business to understand what we want to use AI for so we can get to value sooner. And if you think about what we’ve been doing for the last three years in moving to the enterprise systems, reducing the systems landscape, and making sure we understand what data we’ve got in those systems, it’s all been creating the pathway and the foundation levels we need to get us there sooner.

On streamlining efficiencies: When I came into automotive, it was essentially learning a whole different language because many of the abbreviations are tied to German words. So even when you know them, you don’t understand what they mean. For me, it was about taking that big step back, looking at our business and saying we’re all about designing and creating an amazing product. We then have customers we service through the web or the app. So it’s broken down into value streams, and within those are the processes of designing, building, marketing, and selling cars. I like logic, so I try to apply it when we examine capabilities, like reducing the cost of delivery from an IT perspective. That gives us more money to invest in other things, or to future-ready our organization. source

Building a solid foundation to support AI adoption at Bentley Read More »

Link11 Reports 225% more DDoS attacks in H1 2025 with new tactics against infrastructure

The threat landscape surrounding distributed denial-of-service (DDoS) attacks intensified significantly in the first half of 2025, according to the latest Link11 European Cyber Report. Documented attacks targeting the Link11 network increased by 225% compared to the same period in 2024. The report highlights not only a marked rise in attack frequency but also a substantial escalation in their duration, intensity, and technical sophistication. Notably, attackers deployed volumes reaching 438 terabytes—equivalent to over seven years of continuous 4K Netflix streaming—and increasingly employed Layer 7 attacks that closely mimic legitimate user traffic. The report also identifies politically motivated campaigns, including those attributed to groups such as NoName057(16), targeting critical infrastructure.

Record figures: data avalanches and sustained attacks

In the first half of 2025, the volume of attacks totaled 438 terabytes. This corresponds to the data consumption of 7 years of uninterrupted Netflix streaming in 4K resolution. The recorded peak values of 1.2 terabits per second and 207 million packets per second reached dimensions that can overload even high-performance systems. The duration has also increased: the longest documented attack lasted more than 8 days. The shift from short flash attacks to coordinated sustained fire through long-term campaigns presents defense systems with ever-changing challenges.

New forms of attack: Precision instead of brute force

While classic volumetric attacks continue to dominate, Link11 analysts are seeing a significant increase in precise Layer 7 attacks. These cleverly disguise themselves in legitimate data traffic by generating seemingly normal requests. “If they are invisible in regular traffic, 20,000 deceptively genuine requests per minute can be more dangerous than 200 million packets per second,” explains Jag Bains, VP Solution Engineering at Link11.

Politically motivated attacks on critical infrastructure

The connection between geopolitical events and waves of attacks is particularly striking. Pro-Russian groups such as NoName057(16) targeted government agencies, banks, energy suppliers, and city administrations in Europe. These attacks often coincided with security policy decisions. Other groups, such as Dark Storm and Keymous, also became more prominent. “The dimensions are frightening. In the first half of 2025, we recorded a total of 438 terabytes of DDoS traffic on the Link11 network. That’s equivalent to more than 7 years of non-stop Netflix streaming in 4K. Comparisons like this illustrate the threat better than any statistics,” says Jens-Philipp Jung, founder and CEO of Link11. “European companies urgently need resilient defense strategies to protect their digital sovereignty.”

Professionalization through crime-as-a-service and AI

The attacks are not only bigger and longer, they are also more professionally organized. Attackers are increasingly working together, using DDoS-as-a-service platforms and AI to optimize and camouflage their attacks. The World Economic Forum highlights this automation in its Global Cybersecurity Outlook 2025 as a key driver of the threat landscape.

Resilience instead of reaction required

The report’s findings show that companies and institutions need to consistently expand their security architecture. This includes:

Real-time monitoring for early detection of attacks
AI-supported defense systems for automated mitigation
Contingency plans and redundancy strategies for emergencies

Only with a combination of intelligent defense technology and clearly defined resilience strategies can the consequences of massive attacks on business processes and critical infrastructures be effectively limited.

About Link11

Link11 is a specialized European IT security provider that protects global infrastructures and web applications from cyberattacks. Its cloud-based IT security solutions help companies worldwide strengthen the cyber resilience of their networks and critical applications to avoid business interruptions. Link11 is a BSI-qualified provider of DDoS protection for critical infrastructure. With PCI DSS and ISO 27001 certifications, the company meets the highest standards in data security.

Contact
Lisa Froehlich
Link11 GmbH
[email protected]

source
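As a rough illustration of why request-level (Layer 7) monitoring matters alongside volumetric metrics (this sketch is not Link11's detection logic; the client, request budget, and traffic are invented): track per-client request counts over a sliding window, so a low-bandwidth flood of normal-looking requests is still flagged.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 1_000  # hypothetical per-client budget

class Layer7RateMonitor:
    """Flags clients whose request rate, not bandwidth, looks abusive."""
    def __init__(self):
        self.requests = defaultdict(deque)  # client_id -> request timestamps in window

    def observe(self, client_id: str, timestamp: float) -> bool:
        window = self.requests[client_id]
        window.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_REQUESTS_PER_WINDOW

# Simulated burst: one client sends 20,000 "normal-looking" requests in a minute.
monitor = Layer7RateMonitor()
flagged = False
for i in range(20_000):
    t = i * (60.0 / 20_000)  # spread evenly over 60 seconds
    if monitor.observe("client-203.0.113.7", t):
        flagged = True
print("flagged:", flagged)  # True: tiny bandwidth footprint, but far over the request budget
```

Production systems would add per-path baselines, behavioral scoring, and distributed state, but the core idea is the same: count requests and behavior, not just bits per second.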

Link11 Reports 225% more DDoS attacks in H1 2025 with new tactics against infrastructure Read More »

Your data strategy is broken, and agentic AI will expose it

Enterprises invested billions into digital transformation, hoping that data would unlock smarter decisions and sharper competitiveness. Instead, many are left with digital landfills—bloated data lakes, siloed systems, and outdated governance. AI was supposed to be the payoff. But now, AI is revealing a hard truth: if your data strategy is stuck in the industrial era, your AI initiatives will stall—or worse, backfire. Welcome to the Age of Intelligence. It’s not just about AI. It’s about architecting your business for real-time, autonomous decision-making—by both humans and machines. And that starts with data.

Data can’t sit still anymore

Data is no longer a passive resource to be collected, stored, and protected. It’s an active force. The rise of agentic AI—autonomous agents that generate content, make decisions, and continuously learn—demands data that moves, adapts, and connects in real time across your organization. This is where most enterprises fail. They treat data like a filing cabinet—organized for archiving, not action. But modern companies treat data like a neural network—interconnected, interoperable, and always in motion. Take Ant Financial, whose AI-driven systems execute over 300 million autonomous decisions per day, powered by a unified data architecture that seamlessly connects billions of transactions. This isn’t just about AI horsepower—it’s about real-time, context-rich data mobilized across every touchpoint. In contrast, legacy enterprises often struggle with fragmented systems and data debt that hinder their progress. These organizations may have the compute power, but without a trusted, unified data layer, their AI agents operate blind. The result? Misdirected automation, compliance risks, and missed opportunities.

Velocity now beats volume

In the industrial era, advantage came from collecting more data. In the intelligence era, it comes from activating it faster. McDonald’s, for example, didn’t just collect customer data—they rearchitected their data flow to enable personalized offers, supply chain agility, and real-time decisioning across thousands of locations. Their transformation wasn’t about volume. It was about velocity. And it changed everything—from customer experience to operational efficiency. The message is clear: real-time, trusted data isn’t a back-office concern—it’s the front line of competitive advantage.

Ecosystems—not silos—will win

Another major shift? The smartest companies are extending their data strategies beyond the enterprise. They’re integrating data across ecosystems—partners, suppliers, platforms—to power dynamic collaboration, optimize operations, and drive new growth models. Legacy institutions often struggle here. Data silos built over decades, through M&A and department-led decisions, choke interoperability. Startups and digital natives, on the other hand, build with real-time data exchange in mind from day one. That’s why they can adapt faster, collaborate deeper, and scale innovation with agility.

Figure 1: Real-time, context-rich multisource data fuels agentic transformation

The time to act is now

AI is not a band-aid for your data problems. It’s an amplifier. If your data is slow, inconsistent, or siloed, AI will expose those flaws—at scale. If your data is real-time, trusted, and unified, AI will accelerate your advantage. That’s the new mandate for enterprise leaders. Architect data not just for storage, but for intelligence. Build for interoperability, not isolation. Prioritize velocity over volume. Treat trust as your foundation—not your afterthought. And above all, rethink your data strategy as a business strategy—not an IT project. This isn’t a tech refresh. It’s a competitive rewrite. Want the full blueprint? Dive deeper into the new strategic playbook for enterprise data in the Age of Intelligence—complete with real-world case studies and the 10 data rules separating the winners from the laggards. source

Your data strategy is broken, and agentic AI will expose it Read More »

How AI is replacing the painful, manual process of building an annual operating plan

Early adopters of this model include global ecommerce platforms, major CPG brands, industrial manufacturers, medtech and pharma companies — firms with complex investment needs and high-stakes forecasting cycles. The benefits for CFOs are clear, from faster planning cycles to real-time modeling of different scenarios (e.g., tariffs, supply-chain issues) to strategic alignment, as forecasts are built around key metrics rather than aggregated numbers. The methodology also brings more realistic precision, helping CFOs quantify risk and uncertainty. Perhaps most importantly, the model can bridge communication gaps between finance, operations, and business units. It reimagines planning as a dynamic activity rather than a one-and-done, static operating plan. source
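As a purely illustrative sketch of driver-based scenario modeling (the driver metrics, scenario assumptions, and distributions are invented, not drawn from the article): a small Monte Carlo run propagates tariff and supply-chain shocks through unit and price drivers and reports a range rather than a single point forecast.

```python
import random

random.seed(7)

BASE_UNITS = 1_000_000  # hypothetical annual unit sales (the key driver metric)
BASE_PRICE = 24.0       # hypothetical average selling price

SCENARIOS = {
    # (demand impact mean/std, cost-passthrough price change mean/std)
    "baseline":            ((0.00, 0.02), (0.00, 0.01)),
    "tariff_shock":        ((-0.05, 0.03), (0.04, 0.02)),
    "supply_chain_issues": ((-0.08, 0.04), (0.02, 0.02)),
}

def simulate_revenue(scenario: str, runs: int = 10_000):
    """Sample demand and price shocks, return median and a P10-P90 band."""
    (d_mu, d_sigma), (p_mu, p_sigma) = SCENARIOS[scenario]
    outcomes = []
    for _ in range(runs):
        units = BASE_UNITS * (1 + random.gauss(d_mu, d_sigma))
        price = BASE_PRICE * (1 + random.gauss(p_mu, p_sigma))
        outcomes.append(units * price)
    outcomes.sort()
    return outcomes[runs // 2], outcomes[runs // 10], outcomes[9 * runs // 10]

for name in SCENARIOS:
    p50, p10, p90 = simulate_revenue(name)
    print(f"{name:>20}: P50 ${p50/1e6:.1f}M (P10 ${p10/1e6:.1f}M to P90 ${p90/1e6:.1f}M)")
```

Because the forecast is expressed through drivers (units, price) instead of one aggregated number, finance and operations can debate the assumptions directly and rerun the scenario in seconds when conditions change.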

How AI is replacing the painful, manual process of building an annual operating plan Read More »