CIO

Navigating water stewardship in Texas' AI expansion

Similarly, representatives from the Texas Water Development Board (TWDB) point out that the Trinity Aquifer, another key water source in Central Texas, has been in decline over the past decade. “Shallow soils and steep hill slopes make for rapid runoff, which provides less opportunity for groundwater recharge,” they explain. “While surface water reservoirs capture some runoff, high evaporation rates from high temperatures and low humidity reduce available supply even as we save it for later use.”

So, are AI data centers even viable in Central Texas? It ultimately depends on how well tech companies balance technological needs with ecological realities. As these facilities will require substantial water in an area that’s already wrung dry, the first priority of any strategic planner or field engineer working in water-sensitive regions should be to understand the regulatory landscape and address community concerns.

Unpacking water rights and regulatory challenges

Water management in Texas presents unique challenges for data center development, as the state’s “rule of capture” doctrine allows landowners significant pumping rights with minimal restrictions. But in areas like the Edwards Aquifer, where groundwater directly feeds surface water systems like the San Marcos River, this approach increasingly conflicts with conservation needs. As Parker notes, “Without that aquifer having enough water in it, we’re not going to have a river. That’s going to impact not only the environment and species that depend on the river, but communities downstream that depend on it for drinking water.”


2025 AI infrastructure: What the data is telling leaders now

AI adoption is no longer theoretical. It’s here, accelerating, and already reshaping how organisations operate, innovate, and compete. The Uptime Institute’s 2025 AI Infrastructure Survey provides an urgent executive briefing: the strategic build-out for AI is in full swing, and proactive engagement is paramount to securing future competitive advantage. Based on comprehensive data from 519 global data centre owners and operators, the survey reveals a market adapting at speed, driving significant shifts in capital expenditure and demanding innovative approaches to manage AI’s extreme density and power requirements. The findings point to one clear message: if you’re not scaling your environment now, you’re falling behind.

Why your AI infrastructure needs to scale by 2026

AI workloads are rapidly becoming mainstream:
- 32% of operators are already running AI inference workloads
- 45% plan to implement them in the near future

That’s nearly 4 in 5 organisations either deploying or preparing for AI, confirming that AI isn’t an experiment. It’s a competitive advantage. And powering it requires more than compute; it needs reimagined infrastructure built for density, performance, and scale.

Why your AI strategy must stay local

Public cloud is often discussed as the go-to model, but the survey shows a different reality:
- 46% of AI workloads are hosted on-premises
- 34% run in colocation environments
- Only 14% rely primarily on the public cloud

Why this shift? The survey highlights the top drivers for keeping AI workloads closer to home:
- Data sovereignty (46%): protect IP, ensure compliance, and reduce risk
- Reuse of existing infrastructure (50%): maximise ROI on current facilities
- Power availability (37%) and data proximity (29%): critical for real-time performance
- Cost (30%): cloud OpEx for 24/7 AI can become unsustainable

Bottom line: AI deployment is as much about location, control, and cost predictability as it is about compute power.
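The “nearly 4 in 5” figure above is simply the sum of the two survey shares, assuming the “already running” and “planning to implement” groups do not overlap. A one-line sanity check:

```python
# Uptime Institute 2025 survey shares (percent of 519 surveyed operators)
running, planned = 32, 45
engaged = running + planned  # assumes the two groups are disjoint
print(f"{engaged}% of operators ≈ {round(engaged / 100 * 5)} in 5")  # → 77% of operators ≈ 4 in 5
```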
Power and cooling: AI’s infrastructure breaking point

AI is not only data-hungry; it’s power- and heat-intensive. The Uptime report shows:
- 27% of AI training racks already exceed 50kW
- Even inference racks now hit the 31 to 50kW range

To meet this demand:
- 52% of data centres are upgrading power infrastructure
- 51% are modernising cooling systems

The consequences of ignoring this shift are clear: delays, downtime, and capped growth. NEXTDC is already enabling environments with up to 600kW per rack and advanced liquid and immersion cooling technologies designed for the AI era.

Connectivity and ecosystem: AI Factories need more than power

The location of AI workloads matters, not just for performance but for capability. NEXTDC offers:
- Data centres in every Australian capital
- Direct access to subsea cable systems, enabling faster regional AI workloads
- Dense ecosystems that bring together hyperscalers, sovereign cloud, research institutions, and high-performance digital platforms

Whether you’re training large models or serving real-time AI applications, your infrastructure must be physically and digitally close to your data, your partners, and your users.

Strategic drivers: Why leaders are investing now

The reasons behind this infrastructure race go far beyond IT. According to Uptime’s survey, the top reasons organisations are investing in AI infrastructure include:
- 50%: Improve operational efficiency
- 49%: Enable new products and services
- 41%: Enhance customer experience
- 28%: Boost employee productivity
- 25%: Differentiate in the market

This isn’t about maintaining the status quo; it’s about using AI to lead.

Bottom line: What CIOs and CTOs need to know

Your infrastructure choices now will define your organisation’s AI potential for the next decade. Ask yourself:
- Can your racks support 30kW+ densities?
- Is your cooling ready for continuous inference and model training?
- Is your infrastructure located where your AI data needs to live?
- Are you connected to the digital ecosystems that enable AI to thrive?

AI is already benchmarking your readiness. Will your infrastructure keep up? To explore further, read the full Uptime Institute 2025 AI Infrastructure Survey.
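The rack densities cited in the survey translate directly into electrical and thermal load, which is why power and cooling upgrades dominate operators’ plans. A rough back-of-envelope sketch; the rack count and PUE value here are illustrative assumptions, not survey figures:

```python
def facility_load_kw(racks: int, rack_kw: float, pue: float) -> tuple:
    """Return (IT load, total facility draw) in kW.

    PUE (power usage effectiveness) = total facility power / IT power,
    so the total draw is the IT load scaled by the PUE.
    """
    it_load = racks * rack_kw
    return it_load, it_load * pue

# 20 training racks at 50kW each (the threshold that 27% of training
# racks already exceed), at an assumed PUE of 1.3.
it_kw, total_kw = facility_load_kw(racks=20, rack_kw=50, pue=1.3)
print(it_kw, total_kw)  # → 1000 1300.0
```

Nearly all of that 1MW of IT power becomes heat concentrated in just 20 cabinets, a density that air-based cooling struggles to reject; that is the pressure behind the 51% of operators modernising cooling.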


Lighting the first flame: How to spark a transformation that sticks

That’s why, after strategy and budget are locked, the real work begins. Go on a roadshow. Segment your audiences — by initiative, function, business unit, whatever makes sense — and engage them in smaller groups. Create a common pitch deck that starts with the “why,” clearly outlines what’s in it for them, and defines what success will require. Rinse and repeat. Send newsletters. Run surveys. Share progress updates.

A Fortune 500 energy client did this particularly well during an operating model transformation. They rolled out the new model in phases, by cohort. Every cohort began with a two-day, in-person training that connected the dots between enterprise strategy, their role, and the new way of working. It featured industry case studies, tactical role-specific training, and a clear explanation of what was changing and why.

It worked because it honored people’s time and perspective. Most of the folks you need to execute the transformation already have full-time jobs. Their mindshare is limited. And if you don’t give them the tools and context to understand what’s happening and why, it will take far longer than you think to get traction.


CIO Leadership Live Australia with Mark Opitz, Group Head of ICT, Acciona Australia

Overview

In this episode of CIO Leadership Live, Cathy O’Sullivan speaks with Mark Opitz, Group Head of ICT at Acciona Australia. Mark shares insights from his career journey, details Acciona’s digital transformation, and highlights impactful innovations like the Linksite tool and early adoption of AI and large language models. He also discusses his approach to change management, building vs. buying tech, and fostering innovation across a diverse business.


The outlier mindset: Leadership shifts that turn CISOs into business catalysts

There is a vast difference between a great CISO and a transformational one. The world’s best security leaders aren’t just managing risk. They’re redefining how security fuels innovation, drives trust, and accelerates business. These leaders are not defenders of the status quo; they’re architects of safe velocity. I’ve come to believe that supreme security leadership rests not on frameworks and tools, but on a mindset, one built on curiosity, intention, and resilience. The following principles have not only guided my CISO journey but are key drivers in redefining modern security leadership.

Think like an outlier

Mainstream thinking is optimized for average outcomes, unless you’re in a game of Family Feud. Security’s goal is to find the least expected answers. Technology gives us clear visibility across most of our attack surface. The challenge is not seeing what we already know; it’s identifying what we’re missing. Where does visibility end? What are attackers modeling that we aren’t? The outlier mindset challenges assumptions across the industry, your team, and even your own thinking.

Brakes are for speed

Why do brakes exist? The obvious answer is to help slow and stop, but we’re searching for the least expected answer. The real benefit is that brakes enable faster movement. Formula 1 cars, for example, don’t win with the fastest engine. Drivers win by braking hard into corners and accelerating out with control. Similarly, well-designed security doesn’t slow innovation; it enables bold, confident maneuvers. Security isn’t about slowing the business down by braking; it’s about creating the trust infrastructure that lets it accelerate to top speeds. Our job is to design systems where risk is managed at velocity, not avoided altogether.

The weakest link is at the seams

Most security leaders talk about the weakest link, but it’s not usually a system or person. It’s a connection point, a seam, where systems, tools, vendors, or teams intersect. That’s where visibility fades and responsibilities blur. While internal threat modeling is valuable, it can often miss what familiarity obscures. The real challenge is uncovering hidden risks born from integration gaps and routine handoffs. That’s where there’s value in a partner like Trace3: an outside perspective that asks questions we’ve grown too close to see. The goal isn’t to audit risk, but to locate seams. Just as most robberies happen during cash transit rather than inside the vault, digital threats often exploit what moves between systems. That’s why we harden those transitions, isolate networks, protect data in motion, and closely inspect AI data flows. Resilience begins at the seams.

Build a culture that invites every voice

Security must be inclusive, as it affects every function of an organization. That means structuring conversations in ways that allow non-technical stakeholders to contribute meaningfully. It’s not about simply translating, but about creating a shared language and framing risk in business context. If a CFO can’t weigh in on a security risk that impacts financial controls, that’s a design failure — ours.

Design for chaos

Traditional security models focus on known threats. The next generation of CISOs must assume the unknown and plan for failure by adopting a “design for chaos” mindset. Resilience is not just about better controls, but about engineering for disorder. What happens if your anomaly detection systems are compromised through data poisoning? Could your platform continue operating securely if a core service fails or is manipulated? Chaos engineering allows us to test these scenarios in controlled environments. It reveals the unexpected contours of our attack surface and shows us how systems respond under stress.

Hire challengers

How do you distinguish between many technically excellent candidates, beyond likability? This favorite interview question flips the dynamic: “You’re interviewing me for this role… what would you want to know?” This simple shift reveals a candidate’s intellectual curiosity, strategic depth, and thought process beyond the role and into the business. It surfaces who’s just following a script and who’s truly engaged in the mission. Supreme teams are made up of individuals who challenge assumptions and speak truth to power. The most effective team members are not just skilled executors; they enhance strategy, ask tough questions, and elevate the conversation. Exceptional leaders surround themselves with thinkers who sharpen perspectives rather than echo consensus.

Know what keeps your boss up at night

CISOs are often asked, “What keeps you up at night?” A better question is, “What keeps your CEO up at night?” Transformative CISOs are skilled at translating business priorities into actionable security strategies. This isn’t about keeping your boss happy. It’s about focusing your time, influence, and resources on the risks that matter most to the business, especially the ones you can control. This mindset applies across the org. Every role has a unique perspective and impact area. The closer you’re aligned to what matters to leadership, the more valuable and resilient your security program becomes. The best CISOs don’t just manage security. They translate a CEO’s top concerns into focused, effective security actions. They look from the inside out and from the outside in. If your security program doesn’t actively support the company’s growth, reputation, and resilience, it’s not a strategic asset; it’s just overhead.

Be business friendly

This is arguably the most important principle in transformative security leadership. The early wins in security that create momentum and establish a foundation are important, but they are not the destination. The real work begins when security is asked to support complex change. That’s when security leadership must evolve from operational execution to strategic enablement. It’s about designing frictionless controls that support transformation and M&A, accelerate customer growth, and scale securely into new markets. It’s also when complexity grows and risk follows. Business-friendly security leaders deliver controls that reduce risk without slowing down innovation. They create environments where speed, agility, and protection coexist. They ensure that trust is not a constraint, but a catalyst.

The future belongs to outliers

The next generation of security leaders will not be defined by how well they protect, but by how effectively they unlock possibility. Those that lead at
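The “design for chaos” principle above can be exercised in code. Here is a minimal, hypothetical sketch of chaos-style fault injection: a wrapper makes a primary anomaly detector fail at random, and the calling path must degrade to a conservative fallback rather than stop working. The detectors, failure rate, and scores are all invented for illustration.

```python
import random

def detect_anomaly(event, primary, fallback):
    """Return an anomaly verdict, degrading gracefully if the primary
    detector fails. Both detectors are hypothetical stand-ins."""
    try:
        return primary(event)
    except RuntimeError:
        # Primary detector is down or manipulated: fall back to a
        # conservative rule so the platform keeps operating securely.
        return fallback(event)

def flaky(detector, failure_rate, rng):
    """Chaos wrapper: make any detector fail with the given probability."""
    def wrapped(event):
        if rng.random() < failure_rate:
            raise RuntimeError("injected failure")
        return detector(event)
    return wrapped

# Hypothetical detectors: a "smart" model and a conservative threshold rule.
smart = lambda e: e["score"] > 0.9
conservative = lambda e: e["score"] > 0.5

rng = random.Random(42)  # seeded so the chaos run is reproducible
chaotic_smart = flaky(smart, failure_rate=0.3, rng=rng)

events = [{"score": s} for s in (0.2, 0.6, 0.95)]
verdicts = [detect_anomaly(e, chaotic_smart, conservative) for e in events]
print(verdicts)  # → [False, True, True]
```

The point of the exercise is the assertion you make afterwards: even with injected failures, every event still receives a verdict, which is exactly the “continue operating securely if a core service fails” question posed above.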


AI factories are the new power plants of intelligence

Artificial Intelligence (AI) is rapidly reshaping how we live, work, and learn. From voice assistants and recommendation engines to generative chatbots and self-driving vehicles, AI is now a core part of everyday life. But behind every smart application lies something often unseen: infrastructure. That infrastructure is called an AI Factory.

AI Factories aren’t just clusters in the cloud. They are real, physical environments: purpose-built data centres designed from the ground up to support the world’s most demanding AI workloads. Where traditional facilities host websites or store files, AI Factories train, run, and refine advanced models by converting massive volumes of raw data into real-time intelligence. They are the new production lines of the AI era, manufacturing insight instead of goods. At their core, AI Factories deliver extreme power, precision cooling, and ultra-fast connectivity. These aren’t upgrades; they’re the foundational infrastructure of the AI economy. If you’re in cloud, colocation, or enterprise IT, understanding what makes an AI Factory different is critical to staying competitive.

AI Factories: the power plants of intelligence

Just as cities rely on centralised power plants for energy, AI relies on centralised infrastructure to deliver intelligence. AI Factories combine thousands of high-performance processors (GPUs), ultra-fast interconnection, and advanced cooling systems to run AI workloads at industrial scale. They’re built to:
- Train advanced models using vast datasets
- Deliver real-time inference at scale
- Support GPU clusters requiring intensive power, cooling, and performance

Bottom line: AI Factories are not just next-gen data centres. They are the power plants of digital intelligence.

What makes an AI Factory different?

AI Factories mark a fundamental shift in infrastructure design. Unlike traditional data centres, they’re engineered for the scale, complexity, and performance AI demands.
Key differentiators include:
- Specialised AI hardware: thousands of GPUs and AI accelerators (e.g. NVIDIA Hopper, Blackwell), plus AI-optimised CPUs like NVIDIA Grace
- AI-centric software and orchestration: full-stack platforms like NVIDIA AI Enterprise, with built-in scheduling, monitoring, and optimisation tools
- Extreme power density: 100kW to 600kW per rack, with electrical systems optimised for full-load performance
- Advanced cooling systems: liquid cooling and immersion solutions with energy-efficient thermal design
- High-speed interconnectivity: InfiniBand and NVLink fabrics for low-latency data flow
- Scalable, sustainable architecture: modular design with support for sovereign and net-zero goals

Bottom line: AI Factories are intelligence-first. If your infrastructure can’t support this shift, you risk falling behind.

Traditional data centre vs AI Factory

Feature | Traditional Data Centre | AI Factory (NEXTDC-Ready)
Primary purpose | Apps, storage, websites | AI training, inference, machine learning
Hardware inside | CPUs, some GPUs | Thousands of AI-optimised GPUs and chips
Power per rack | 5–15kW | 30–600kW+ (NEXTDC supports today)
Cooling method | Air-based ventilation | Liquid, direct-to-chip, immersion
Network speed | Standard networking | High-bandwidth, ultra-low-latency fabric
Scale of compute | General-purpose servers | GPU clusters managed, monitored, orchestrated at scale

Bottom line: Traditional data centres offer versatility. AI Factories offer intelligence at industrial scale.

What workloads do AI Factories support?

AI Factories are built to handle the world’s most computationally demanding tasks.

1. Model training. Teaching AI to understand patterns, predict outcomes, and reason at scale:
- Language models (e.g. ChatGPT)
- Medical image analysis
- Autonomous driving systems

2. Inference at scale. Deploying AI to make real-time decisions:
- Product recommendations
- Chatbots and assistants
- Smart surveillance and object recognition

3. High-performance simulations. Powering AI-enhanced simulations:
- Drug discovery and genomics
- Financial modelling and risk analysis
- Climate forecasting and energy grid management

Bottom line: AI Factories are not general-purpose. They’re purpose-built for high-stakes, compute-intensive AI workloads.

The AI Factory era has arrived

AI Factories are already live and redefining infrastructure across industries. They enable faster deployment, higher reliability, and scalable AI operations. As power densities, cooling requirements, and compute demands rise, traditional infrastructure is reaching its limits.

What’s next: why AI Factories matter now
- 600kW+ rack power is becoming standard
- AI-specific chip architectures are evolving fast
- Cooling and interconnect innovation is a must

NEXTDC is building the infrastructure behind Australia’s AI future. With NVIDIA-certified facilities, sovereign-grade security, and national reach, NEXTDC supports everything from GPU-as-a-Service to sovereign AI deployment. Whether you’re scaling Neo Cloud, building national capability, or launching new services, our infrastructure gives you the power to scale confidently.

Ready to build your AI Factory? Connect with NEXTDC’s infrastructure specialists and start powering what’s next.
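The power-per-rack gap between traditional data centres and AI Factories has direct capacity-planning consequences. A quick illustrative calculation; the 1MW hall budget and the two density points are assumptions chosen from within the ranges quoted above, not NEXTDC figures:

```python
def racks_supported(it_budget_kw: float, rack_kw: float) -> int:
    """Whole racks of a given density that fit in a fixed IT power budget."""
    return int(it_budget_kw // rack_kw)

HALL_BUDGET_KW = 1000  # assumed: 1MW of deliverable IT power in one hall

traditional = racks_supported(HALL_BUDGET_KW, rack_kw=10)   # mid-range of 5-15kW
ai_factory = racks_supported(HALL_BUDGET_KW, rack_kw=100)   # low end of 30-600kW+

print(traditional, ai_factory)  # → 100 10
```

Same electrical budget, a tenth of the racks: the heat that once spread across a hall now concentrates into a few cabinets, which is why the cooling column shifts from air-based ventilation to liquid, direct-to-chip, and immersion.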


Top 14 certifications for enterprise architects

Certified Pega System Architect

The Certified Pega System Architect certification is designed for developers and other technical staff who want to learn how to develop Pega applications. It’s the entry-level credential on Pega Academy’s System Architect certification path, which includes two further levels: the Senior System Architect and Lead System Architect exams. The Senior System Architect exam covers topics such as application development, case management, data and integration, user experience, reporting, performance, security, and mobility. The Lead System Architect exam covers Pega platform, application, data model, user experience, security, reporting, asynchronous processing, work delegation, deployment, and testing design. Cost: $190 per exam attempt.

Google Professional Cloud Architect

The Google Professional Cloud Architect certification demonstrates your ability to work with Google Cloud technologies. It validates that you understand cloud architecture and Google technology, and that you know how to design, develop, and manage secure, scalable, dynamic solutions that drive business objectives. The exam covers topics such as designing and planning a cloud solution architecture for security and compliance, managing cloud infrastructure, analyzing and optimizing business processes, and overseeing the implementation of cloud architecture. There are no prerequisites for the exam, but it must be taken in person at an official testing center location.


The 10 most overhyped technologies in IT

Everest Group’s Joshi has a similar take and specifically cites the industrial metaverse as overhyped. “The promise has been higher than the actual implementations,” he says. There are definite use cases, Joshi believes, such as design and maintenance of shop floors with digital twins of high-end devices, and training. However, challenges around infrastructure costs, people training, interoperability, and poor UX have marred its adoption.

8. Multicloud

Many CIOs embrace multicloud, but Joshi says few are getting all the benefits this cloud strategy has promised. “The [enterprise] objective to have uniformly synched interoperable workloads for multiclouds that allow them to address vendor lock-in has not worked,” he says. “Most enterprises are multicloud, but their bet on cloud vendors rarely change. They do not necessarily interoperate their workloads across different cloud platforms either.” So while CIOs are more intentionally pursuing multicloud strategies, whereas previously many had found themselves there as a matter of near happenstance, interoperability and other key issues are adding complexity to the calculus.

9. Electric vehicles

Granted, this is not a technology that CIOs usually deal with, but some CIOs still put it on their list of overhyped tech. Chris Grebisz, CIO of technology company Welocalize, is one of them. He described having to figure out how to put his Tesla into neutral when he took it to a car wash for the first time, saying that such routine actions have to be relearned with EVs. And as he’s doing that, he’s finding that the user interface isn’t as intuitive as promoted. “It’s going from 30 years of driving a car to something like an iPad, and I’m a tech guy,” he says. “Everything needs to be figured out. I have to go and read the manual.”

Grebisz says he now considers his Tesla a “transportation appliance” rather than a car, a mindset shift that helps him with the change management required when shifting from a conventional car to an EV. He notes that it was a dramatic change, suggesting that true digital natives might find the shift easier to make. The experience has also given him insight into how workers feel when a technology disrupts longstanding workflows. “I was just really surprised by my experience. I thought it was going to be a lot easier,” he adds.

10. Green energy

David Williamson, CIO of Abzena, a life sciences company, goes even further and puts green energy in the overhyped tech category. To be clear: he’s not against it. In fact, he, too, has a Tesla and has solar energy for his home. It’s those personal experiences that led him to conclude green energy isn’t the silver bullet that some promise. To start, he — like Grebisz — found there’s a learning curve to driving his EV. “My biggest complaint is they change the user interface all the time,” he says, offering that he has “watched a lot of videos to know what to do with the car.” He has also found that both hot and cold weather wear down the battery, “so you think you have a certain range but you don’t.” He has had a similar experience with solar panels, saying “the promise and the reality are different. They get dirty and then lose efficiency, so they have to be cleaned. And the difference in their summertime and wintertime performance is significant.” And then there are the surprise costs, with Williamson noting that he had to pay to connect to the grid and still gets charged a delivery fee. Williamson says these experiences remind him that “we underestimate the impact of technology on the individual” and that “there’s gotchas with the technologies that aren’t discussed.”
