
Raising the bar on private cloud – announcing VMware Cloud Foundation 9.0

When we formed the VMware Cloud Foundation (VCF) Division in Broadcom, we set out with a singular mission: deliver the industry’s best modern private cloud platform for our customers. This was in direct response to what we were hearing from customers. They were, and continue to be, concerned about the skyrocketing costs of the public cloud. Many expressed frustration over failed public cloud migration projects. Some even tried to build cloud capabilities in their own data centers but struggled to achieve the cost and operational efficiencies they sought. Customers wanted a different approach: a platform that could modernize their infrastructure and give their developers a public cloud-like experience. That is what we set out to do with VMware Cloud Foundation.

Today, we’re excited to celebrate the general availability of VMware Cloud Foundation (VCF) 9.0, a massive transformation of our industry-leading private cloud platform. With VCF 9.0, we are again raising the bar for the modern private cloud by vastly simplifying its deployment, operations, and developer experience.

I have the pleasure of collaborating with customers on a daily basis, and interactions from our teams in the field and our customer engagement programs provide countless opportunities to collect feedback and insights. These conversations with experienced practitioners and IT leaders have visibly shaped VCF 9.0’s features and strategic direction, and we are proud to deliver VCF 9.0 today as truly customer-driven innovation.

Take a listen to Paul Turner, our vice president of products in the VCF Division, as he lays out this “private cloud moment” and discusses all of VCF 9.0’s new innovations.

Delivering the modern private cloud

If you’ve been around enterprise IT long enough, you’ve seen the pendulum swing from private cloud to public cloud and back again.
That’s part of a larger trend being dubbed the “Cloud Reset.” Enterprises are reconsidering workload placement, moving workloads from public back to private cloud while targeting new workloads to the private cloud. This shift represents a critical, forward-leaning transformation that recognizes the modern private cloud as a strategic priority for today’s and tomorrow’s IT landscape.

But what does that mean? First, cloud is not a location; it is a way of working – an operating model. Cloud benefits come from modernizing how we build, operate, consume, and protect IT resources, applications, and services. So the modern private cloud takes all of the positive attributes of the public cloud – agility, scale, developer self-service, and automation – and combines them with the cost control, security, resilience, and compliance you get on prem. The modern private cloud leverages software-defined infrastructure through a private cloud platform to enable a consistent cloud operating model that is location agnostic and supports all workloads. Past efforts to deliver private cloud yielded only some of these capabilities and benefits. VMware Cloud Foundation is the first platform to deliver a truly modern private cloud in its entirety.

Exceeding our own expectations

We aimed high with VMware Cloud Foundation 9.0, and the platform has surpassed even those ambitions. Market tailwinds, a steady flow of customer insights, and a strong Broadcom innovation engine combined to finally deliver on the long-standing promise of a unified private cloud platform built for all applications – traditional, modern, and AI. What’s more is the speed at which we’ve delivered on that promise. Our velocity is largely due to Broadcom’s focus on innovation and aggressive investment in R&D. This, coupled with a strategically reorganized division and a massively simplified portfolio, has enabled us to focus on developing this next-generation cloud platform.
This dedication ensures our platform continually evolves to meet future challenges before they become obstacles. With VCF 9.0, Broadcom is leading the private cloud transformation. But we’re not stopping here: our commitment to innovation will continue to shape the future of enterprise cloud. Cloud isn’t just infrastructure; it’s a strategic capability. Welcome to cloud as it should be – with VMware Cloud Foundation.

About Krish Prasad

Krish Prasad is the Senior Vice President and General Manager of Broadcom’s VMware Cloud Foundation Division, where he oversees the company’s multi-cloud infrastructure software portfolio that spans private clouds and clouds from service provider partners and hyperscalers. Before joining Broadcom, Mr. Prasad served as Senior Vice President and General Manager of the VMware Cloud Infrastructure Business Group (CIBG) and previously led the VMware vSphere business. He drove the strategy, roadmap, and delivery of the SDDC cloud platform that powers VMware Cloud, and led a wide range of functions including Product Management, Engineering, Cloud Operations, SRE, and Product Marketing. Mr. Prasad has more than 30 years of experience in the enterprise software business across both R&D and general management. Prior to VMware, he held senior and executive management positions with HPE and BMC Software.

Raising the bar on private cloud – announcing VMware Cloud Foundation 9.0 Read More »

AI agent orchestration: The CIO’s crucial next step

“You’ve probably already put together your existing API integrations,” he says. “Tweak that rather than thinking that you’ve got to completely reinvent the whole bit. Just adjust that to meet the way that your AI needs to talk to it, rather than thinking, ‘Oh, it’s AI, I’d better rewrite everything I’ve learned in the past.’”

Experimenting with orchestration

IBM is one company that’s taking on agent integration in house. The tech giant began experimenting with agent-like tools eight years ago, and it now has agents deployed in several workflows, including in IBM’s sales and IT departments, says Suzanne Livingston, vice president at IBM watsonx Orchestrate Agent Domains. HR was the early test case, and now agents operate many HR functions.

“There are a lot of processes in HR, and it’s difficult for employees to understand how to work with the HR system,” she says. “We use one [app] at a time, and if you’ve ever used one of these enterprise HR systems, you had to find the instructions to know exactly what you wanted to do. And, by the way, those instructions changed every month.”
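The advice above – adapt your existing API integrations to the way your AI needs to talk to them, rather than rewriting everything – can be sketched as a thin adapter layer. In the sketch below, the endpoint, tool name, and handler are all hypothetical placeholders (not IBM’s or any vendor’s actual implementation); the only assumption is the common pattern in which agent frameworks accept a JSON-schema tool description and emit tool calls by name:

```python
import json
import urllib.request

# Hypothetical existing integration: a REST call the business already makes.
# The endpoint below is a placeholder, not a real system.
def get_employee_record(employee_id: str) -> dict:
    url = f"https://hr.example.com/api/v1/employees/{employee_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Thin adapter: describe the *same* call in the JSON-schema "tool" shape that
# most LLM agent frameworks accept, instead of rewriting the integration.
EMPLOYEE_TOOL = {
    "name": "get_employee_record",
    "description": "Look up an employee record in the HR system.",
    "parameters": {
        "type": "object",
        "properties": {
            "employee_id": {"type": "string", "description": "HR employee ID"},
        },
        "required": ["employee_id"],
    },
}

# Registry mapping tool names back to the existing integration functions.
TOOL_HANDLERS = {"get_employee_record": get_employee_record}

def dispatch_tool_call(name: str, arguments: dict) -> dict:
    """Route a tool call emitted by an agent back to the existing integration."""
    return TOOL_HANDLERS[name](**arguments)
```

The point of the pattern is that the original integration function is reused untouched; only the schema description and the small dispatcher are new.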

AI agent orchestration: The CIO’s crucial next step Read More »

Novanta CIO Sarah Betadam on managing risk around AI

No, I appreciate it, because I feel like they need to be a chief AI officer now too, because it is a lot different in the sense that it’s a brand new setup, and you need to also go through the process of change management, training, and educating. So when it comes to senior management, you have to educate: what does AI mean for each organization? What would be our AI strategy? Because machine learning has been around for many, many years. So what everyone is talking about, the definition of AI for the organization, is Gen AI, and what does that mean? So that’s where I worked with our senior leadership, as well as everyone in the company: we are looking at Gen AI, and that’s our strategy. And then we start in different ways; you start with experimenting, experimenting with many things. So, you know, there’s a multitude of enterprise-level products that you could buy, or you build it within. But building it within comes with a different set of items that you have to make sure you’re paying attention to, and the board as well. Now we have to present in front of the board, because AI has become such a big topic that the board not only wants to know how we are progressing and how that is going to be an enabler for productivity, as well as transforming the competitive advantage of our product line, but also the privacy aspect of it: how we’re protecting our IP information from the eyes that are not supposed to be seeing it, right?

Novanta CIO Sarah Betadam on managing risk around AI Read More »

How AI is transforming the data center: 7 talking points

Data sovereignty and legislation in general play a part. There is a lot of risk to consider, and it may be easier and safer to manage that in house. Your cloud provider is likely as secure as you can be, but you don’t control that.

Then there is sustainability – a subject that seems to have disappeared from the public agenda in many organizations but is most definitely not going away, as the major cloud providers quietly slip out sustainability reports showing significant increases in power consumption.

And geopolitics. The world feels much less stable than it once did, and your own physical data center is, again, totally under your control.

…but cloud will never die

Which is not to say that the era of cloud is in any sense over – just that it’s a mix for almost all organizations. Cloud is great for scalability and centralized storage of well-ordered data. But there are many other use cases. There is a lot to be said for balancing investment in infrastructure with ongoing running costs.

How AI is transforming the data center: 7 talking points Read More »

Applying agentic AI to legacy systems? Prepare for these 4 challenges

Due to these challenges, it’s not realistic to expect a “plug and play” experience when deploying AI agents for legacy systems. That may work in more modern environments, like public clouds, which tend to be consistent and predictable. But don’t expect things to be so easy in a legacy environment.

This doesn’t mean, however, that integrating agentic AI with legacy systems is impossible. It can be done by targeting bounded use cases, such as custom code analysis or test automation, where the requisite data resources and outcomes are well-defined. This is more feasible than attempting to automate large chunks of legacy system management processes using AI.

It also helps to take advantage of modernized versions of legacy software where possible. For example, in an SAP environment, features like SAP BTP AI Core, SAP Graph or SAP Event Mesh can expose SAP business objects to AI agents in a clean, API-consumable format, making it easier to build the necessary integrations.
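As an illustrative sketch of what the “clean, API-consumable format” point means in practice: SAP Graph-style services expose business objects as REST/JSON resources that an agent can consume without touching the underlying legacy system. The host, path, and token below are placeholders, not a real landscape; the only assumption is the general shape of an authenticated GET returning a JSON collection:

```python
import json
import urllib.request

GRAPH_HOST = "https://graph.example.sap"  # placeholder, not a real landscape
ACCESS_TOKEN = "REPLACE_ME"               # placeholder OAuth token

def fetch_business_objects(resource: str, top: int = 5) -> dict:
    """Read a business-object collection (e.g. 'SalesOrder') as plain JSON."""
    req = urllib.request.Request(
        f"{GRAPH_HOST}/api/v1/{resource}?$top={top}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def flatten_for_agent(payload: dict) -> str:
    """Turn an OData-style {'value': [...]} response into one-record-per-line
    text that a bounded AI agent (e.g. for analysis tasks) can reason over."""
    return "\n".join(
        json.dumps(item, sort_keys=True) for item in payload.get("value", [])
    )
```

This kind of thin, read-only wrapper is one way to keep the agent’s scope bounded, as the article recommends: the agent sees only well-defined objects, never the legacy internals.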

Applying agentic AI to legacy systems? Prepare for these 4 challenges Read More »

CIO Leadership Live with Nobumasa Takeuchi, Editorial Director, CIO Japan

Oh, they are now tackling those issues because, one, a CIO doesn’t even know the, you know, detailed technology, and then, you know, generative AI. That’s why a CIO is trying to translate the kind of, you know, terminology and even the usage of AI somehow, and then to let them learn what it is. Then after that, the CIO finally could understand, you know, how to utilize generative AI across the organization. So now it’s the real stage to just start, yeah.

And again, what about that balancing of innovation? You know, it’s always one of the challenges that CIOs face, sort of balancing innovation with maintaining the traditional systems and when to actually move. So how are they addressing that?

Oh, I’d say that it will take time, because Japanese management, you know, is fond of following the legacy systems, even at this time. Even though they understand that the cloud business is very, you know, important, again, it’s not that stage now; it’s the learning stage. And it’s better to let the CIO know that the cloud business would be more important to protect, you know, their IT assets and the data.

So we’ve got to continue the conversation around data and AI integration.

Christopher Holmes

CIO Leadership Live with Nobumasa Takeuchi, Editorial Director, CIO Japan Read More »

4 leadership paradoxes that define AI adoption

Everyone wants to move fast: deploy the model, launch the tool, and beat the competition. But the faster you go, the more you expose yourself. Rushed deployments skip safeguards. Vendors oversell. Internal teams cut corners. And the moment something goes wrong – an exposed endpoint or a biased decision engine – it’s not just your data that suffers. It’s your credibility.

On the other hand, tightening the bolts too much can bring everything to a halt. Security leaders slow down everything from procurement to testing. By the time the system clears review, the market has moved on.

There’s no easy middle ground. You need a mindset of calculated speed, with agile security baked into the design. Run risk-based reviews, not endless checklists. Use red teams not to block progress but to sharpen it.

4 leadership paradoxes that define AI adoption Read More »

Complex tech challenges demand human leadership from CIOs

The 2025 CIO Summit Australia underscored the critical role of CIOs in driving business strategy and innovation in an increasingly complex technology landscape. This is backed by Foundry’s State of the CIO research, which surveyed 1,200 global companies at the beginning of this year, prior to the tariffs. According to the report, about 65 per cent of CIOs said their budgets were going up, with around 25 per cent saying budgets would stay the same. At a granular level, AI investments were a top reason for tech budget increases. In the Asia Pacific region (excluding China), there was stronger sentiment on AI and ML than in other regions. The research showed close to 40 per cent also cited security as a top concern, followed by infrastructure modernisation. However, upgrades weren’t just for infrastructure’s sake; they were aimed at supporting or preparing for AI. The same applied to hiring and upskilling, which were also driven by AI needs. While much of the pressure to adopt AI came from CEOs, boards, and investors, it was CIOs who were tasked with researching and evaluating potential AI implementations.

During the summit, Fusion5 director of AI Shannon Moir emphasised the need for organisations to adopt AI immediately rather than wait for future developments like artificial superintelligence (ASI), as building AI literacy takes time and is crucial for successful implementation. He said that while traditional algorithmic solutions have handled routine problems over the past decades, more complex challenges have accumulated in the background. Now that AI has the capability to address these tougher issues, organisations must focus on developing the ability to identify problems suitable for AI and apply the technology effectively to solve them.

What’s keeping Australian CIOs up at night?

Those were some of the challenges causing CIOs a lot of stress. During the summit, these IT leaders expressed the frustration of needing to bring in the right skills amid staff and skills shortages.
They were also navigating key challenges around choosing the right development approach – build internally, outsource, or redesign – with clarity still needed on direction. These IT leaders were tasked with balancing chasing the shiny new thing against what’s important at the time.

CIOs are increasingly positioned as key strategic leaders who connect business priorities with innovative, data-driven, or AI-based solutions, while ensuring the security, integrity, and resilience of the organisation’s digital infrastructure. They must develop clear AI strategies, communicate effectively across all levels of the organisation, and implement trustworthy AI frameworks. This includes planning for how AI affects roles, decision-making, and compliance, while ensuring the business remains competitive and responsible. Ultimately, the CIO’s expanding portfolio now includes overseeing AI initiatives, shaping organisational design, and leading both digital transformation and ethical implementation.

The most impactful initiatives are those that solve real business problems or capitalise on strategic opportunities, particularly where innovation, analytics, cybersecurity, or AI are leveraged effectively. As digital ecosystems expand, cybersecurity is no longer just a defensive measure but a foundational enabler of trust, resilience, and business continuity. Simplicity and cost of implementation were also important considerations. CIOs were evolving from traditional IT roles into “chief solution officers,” driving digital transformation and innovation while ensuring robust security frameworks to protect digital assets and customer trust.

Aligning the CIO and CISO

However, as technology becomes more complex, cyber threats grow ever more prevalent and can no longer be viewed in isolation. True resilience demands tight alignment between CIOs and CISOs, cross-functional collaboration, and a strong security culture embedded across the organisation.
Only by integrating cybersecurity into the broader business and innovation strategy can organisations stay secure, agile, and competitive in a rapidly evolving threat landscape. However, the difference in priorities can create tension unless there’s clear communication, aligned goals, and mutual understanding between the CIO and CISO roles. Without that synergy, departments may adopt tools without policies, risking sensitive data and intellectual property. Innovation is important but must be responsible, which makes alignment critical.

Real progress happens when leadership unites IT, security, and other teams to share responsibility. A culture where people safely report issues is essential. Recognising security efforts builds this culture, and with good processes and education, risks reduce. Misalignment can cause delays and gaps. After breaches, security is blamed first, even when root causes lie elsewhere. This frustrates teams trying to do the right thing.

Evolution of the CIO role

At the same time, the rapid evolution of the CIO role means they’re no longer just running IT. The shift from “human in the loop” to more autonomous AI fundamentally changes the CIO’s responsibilities. They’ve become an integral part of shaping the future of the business, where the human elements of communication, trust, and collaboration are essential to bridging gaps and driving success.

Moir’s presentation at the summit underscored the CIO’s challenge: navigating the impact of AI on both the workforce and the human element. As AI and agentic AI are driven within organisations by technology leaders, these changes will have profound human effects. At the end of the day, technology has been replacing routine jobs for more than 40 years. But now, the rate of replacement is about to accelerate significantly. Technology leaders will need to imagine and convey the impact of AI across the entire business, which will be a major challenge.
The traditional organisation chart will change, and this will become the new normal. CIO roles have become broader and more complex; CIOs will be responsible for overseeing all activities related to autonomous systems, referred to as “agentic activity.” They have evolved from being technology experts to strategic leaders – potentially even stepping into the CEO role.

Complex tech challenges demand human leadership from CIOs Read More »

Myths of AI networking

As AI infrastructure scales at an unprecedented rate, a number of outdated assumptions keep resurfacing – especially when it comes to the role of networking in large-scale training and inference systems. Many of these myths are rooted in technologies that worked well for small clusters. But today’s systems are scaling to hundreds of thousands – and soon, millions – of GPUs. Those older models no longer apply. Let’s walk through some of the most common myths – and why Ethernet has clearly emerged as the foundation for modern AI networking.

Myth 1: You cannot use Ethernet for high-performance AI networks

This myth has already been busted. Ethernet is now the de facto networking technology for AI at scale. Most, if not all, of the largest GPU clusters deployed in the past year have used Ethernet for scale-out networking. Ethernet delivers performance that matches or exceeds what alternatives like InfiniBand offer – while providing a stronger ecosystem, broader vendor support, and faster innovation cycles. InfiniBand, for example, wasn’t designed for today’s scale. It’s a legacy fabric being pushed beyond its original purpose. Meanwhile, Ethernet is thriving: multiple vendors are shipping 51.2T switches, and Broadcom recently introduced Tomahawk 6, the industry’s first 102.4T switch. Ecosystems for optical and electrical interconnect are also mature, and clusters of 100K GPUs and beyond are now routinely built on Ethernet.

Myth 2: You need separate networks for scale-up and scale-out

This was acceptable when GPU nodes were small. Legacy scale-up links originated in an era when connecting two or four GPUs was enough. Today, scale-up domains are expanding rapidly. You’re no longer connecting four GPUs – you’re designing systems with 64, 128, or more in a single scale-up cluster. And that’s where Ethernet, with its proven scalability, becomes the obvious choice. Using separate technologies for local and cluster-wide interconnect only adds cost, complexity, and risk.
What you want is the opposite: a single, unified network that supports both. That’s exactly what Ethernet delivers – along with interface fungibility, simplified operations, and an open ecosystem. To accelerate this interface convergence, we’ve contributed the Scale-Up Ethernet (SUE) framework to the Open Compute Project, helping the industry standardize around a single AI networking fabric.

Myth 3: You need proprietary interconnects and exotic optics

This is another holdover from a different era. Proprietary interconnects and tightly coupled optics may have worked for small, fixed systems – but today’s AI networks demand flexibility and openness. Ethernet gives you options: third-generation co-packaged optics (CPO), module-based retimed optics, linear-drive optics, and the longest-reach passive copper. You’re not locked into one solution. You can tailor your interconnect to your power, performance, and economic goals – with full ecosystem support.

Myth 4: You need proprietary NIC features for AI workloads

Some AI networks rely on programmable, high-power NICs to support features like congestion control or traffic spraying. But in many cases, that’s just masking limitations in the switching fabric. Modern Ethernet switches – like Tomahawk 5 and 6 – integrate load balancing, rich telemetry, and failure resiliency directly into the switch. That reduces cost, lowers power, and frees up power budget for what matters most: your GPUs/XPUs. Looking ahead, the trend is clear: NIC functions will increasingly be embedded into XPUs. The smarter strategy is to simplify, not over-engineer.

Myth 5: You have to match your network to your GPU vendor

There’s no good reason for this. The most advanced GPU clusters in the world – deployed at the largest hyperscalers – run on Ethernet. Why? Because it enables flatter, more efficient network topologies. It’s vendor-neutral.
And it supports innovation – from AI-optimized collective libraries to workload-specific tuning at both the scale-up and scale-out levels. Ethernet is a standards-based, well-understood technology with a very vibrant ecosystem of partners. This allows AI clusters to scale more easily, completely decoupled from the choice of GPU/XPU, delivering an open, scalable, and power-efficient system.

The bottom line

Networking used to be an afterthought. Now it’s a strategic enabler of AI performance, efficiency, and scalability. If your architecture is still built around assumptions from five years ago, it’s time to rethink them. The future of AI is being built on Ethernet – and that future is already here.

About Ram Velaga

Ram Velaga is Senior Vice President and General Manager of the Core Switching Group at Broadcom, responsible for the company’s extensive Ethernet switch portfolio serving broad markets including the service provider, data center, and enterprise segments. Prior to joining Broadcom in 2012, he served in a variety of product management roles at Cisco Systems, including Vice President of Product Management for the Data Center Technology Group. Mr. Velaga earned an M.S. in Industrial Engineering from Penn State University and an M.B.A. from Cornell University. Mr. Velaga holds patents in communications and virtual infrastructure.
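As a footnote to the myths above: the switch capacities cited in Myth 1 translate directly into port counts. A “51.2T” or “102.4T” ASIC refers to aggregate bandwidth in terabits per second, and the radix follows from the per-port speed you configure. A quick back-of-envelope sketch (plain arithmetic; only the publicly quoted 51.2T and 102.4T figures are taken from the article):

```python
def port_count(aggregate_tbps: float, port_gbps: int) -> int:
    """Number of ports a switch ASIC can expose at a given per-port speed (Gb/s)."""
    # Convert Tb/s to Gb/s, then divide by the per-port rate.
    return round(aggregate_tbps * 1000 / port_gbps)

if __name__ == "__main__":
    for tbps in (51.2, 102.4):
        for speed_gbps in (400, 800):
            print(f"{tbps}T ASIC at {speed_gbps}G: {port_count(tbps, speed_gbps)} ports")
```

At 800G per port, the jump from a 51.2T to a 102.4T ASIC doubles the radix from 64 to 128 ports, which is what enables the flatter, more efficient topologies the article describes.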

Myths of AI networking Read More »