
AI governance gaps: Why enterprise readiness still lags behind innovation

As generative AI moves from experimental hype to operational reality, navigating the balance between innovation and governance is becoming a real challenge for enterprises. That is why my company, Pacific AI, in collaboration with Gradient Flow, set out to better understand the state of AI and responsible AI with our first AI Governance Survey. The results highlight a concerning trend: while enthusiasm for AI is high, organizational readiness is lagging. The data reveals significant disparities in governance maturity, especially between small firms and large enterprises, and underlines the urgent need for leadership to embed governance into the foundation of AI development. But to build safer, more resilient AI systems, we first need to understand the current governance gaps and how they trickle into AI development and use.

Cautious adoption, limited maturity

Despite the media buzz and strategic urgency surrounding generative AI, only 30% of organizations surveyed have moved beyond experimentation to deploy these systems in production. Just 13% manage multiple deployments, with large enterprises five times more likely than small firms to do so. This measured approach underscores a broader trend: most companies are in exploration mode, seeking to understand where AI can drive value before committing to widespread rollout.


Extending Zero Trust principles to cellular is redefining the future of secure mobile connectivity

As enterprises increasingly rely on cellular connectivity to power IoT devices and mobile endpoints, securing these connections has emerged as a significant challenge. Traditional approaches like VPNs and firewalls lack the scalability required for billions of connected devices, resulting in operational inefficiencies, visibility gaps, and elevated security risks. To tackle these issues, Zscaler has unveiled an innovative solution built on Zero Trust principles: Zscaler Cellular. This service enables organizations to secure mobile and IoT traffic effortlessly, while simplifying management and enhancing performance. With Zscaler Cellular, businesses gain robust security, scalable deployment potential, and comprehensive control over the ever-expanding connected landscape.

A game-changing approach to cellular security: Zscaler Cellular

Zscaler Cellular brings secure, Zero Trust-based connectivity to the cellular environment. Designed for IoT and OT devices that typically operate outside traditional network infrastructures, the solution extends security to mobile “things” previously beyond the reach of enterprise IT teams. Through Zscaler Cellular, these devices securely route all traffic via Zscaler’s Zero Trust Exchange platform, ensuring reliable two-way connectivity while closing critical security gaps.

At its core, Zscaler Cellular reimagines cellular traffic management and protection. By providing secure connectivity for devices on the move, it eliminates reliance on outdated tactics like routable IP addresses and backhauling traffic through firewalls, a process prone to inefficiencies, latency, and vulnerabilities. Instead, Zscaler Cellular routes traffic directly to its cloud-based security platform, enabling a streamlined, Zero Trust-driven security model.

Key components of Zscaler Cellular

The Zscaler Cellular solution is powered by two essential elements:

Zscaler SIM card: A data-only SIM that securely routes cellular traffic, avoiding the vulnerabilities associated with traditional methods.

Zscaler Cellular Edge: An intelligent gateway that bridges telecom infrastructure with Zscaler’s Zero Trust Exchange platform to provide centralized visibility, agentless connectivity, and seamless security enforcement.

Together, these components eliminate inefficiencies like tromboning and backhauling, ensuring secure, direct traffic flow to the cloud security platform for inspection and policy application under unified Zero Trust policies.

How it works

Devices equipped with Zscaler SIM cards connect to public 4G and 5G networks. From there, traffic flows to the Zscaler Cellular Edge for inspection, enforcement, and secure routing to Zscaler’s cloud security exchange platform. Through centralized dashboards, administrators can monitor device activity, apply granular controls, and mitigate threats, all with minimal complexity (a toy illustration of this policy model follows below). Deployment is flexible, with three distinct approaches:

Integration with telecom providers: Available as a service managed by existing telco infrastructures.

Standalone deployment: Provided directly by Zscaler for end-to-end cellular security control.

Private cellular environments: Organizations can place the Cellular Edge in private cellular setups, ideal for replacing traditional Wi-Fi ecosystems with high-speed, low-latency 5G networks.
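Zscaler does not publish the internals of its policy engine, so the following is only an illustrative sketch of the Zero Trust flow described above: identity tied to the SIM rather than to an IP address, default-deny, and explicit per-device-class policy. All names and the policy table here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Device:
    sim_id: str        # identity comes from the SIM, not from an IP address
    device_class: str  # e.g. "pos-terminal", "telemetry-unit"

# Hypothetical policy table: each device class may reach only named services.
POLICY = {
    "pos-terminal": {"payments.example.com"},
    "telemetry-unit": {"fleet-api.example.com"},
}

def admit(device: Device, destination: str) -> bool:
    """Default-deny: traffic passes only if an explicit policy maps this
    device class to this destination; everything else is dropped."""
    return destination in POLICY.get(device.device_class, set())

# A point-of-sale terminal may reach the payments service...
print(admit(Device("8944-01", "pos-terminal"), "payments.example.com"))  # True
# ...but an unknown device class reaches nothing at all.
print(admit(Device("8944-02", "unknown"), "payments.example.com"))       # False
```

The point of the model, not the code, is that nothing is reachable by default; the SIM-derived identity must map to an explicit allowance before any packet flows.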
Broad use cases for Zscaler Cellular

Zscaler Cellular is engineered to address diverse industry-specific needs, making it applicable across various IoT and mobile scenarios:

Securing point-of-sale terminals in retail environments.

Protecting telemetry systems in logistics.

Enabling security for connected vehicles and industrial IoT devices.

Safeguarding critical infrastructure such as energy grids and railway networks.

By combining effortless connectivity with bi-directional protection, the solution bridges the security gap between cellular networks and enterprise systems, redefining secure mobile connectivity for IoT and mobile endpoints.

Redefining the future of cellular security

Until now, achieving unified security across cellular infrastructures has been a daunting challenge. Zscaler Cellular resolves it by extending Zero Trust functionality to cellular-connected devices in a scalable, cost-efficient manner. By eliminating routable IP addresses, simplifying connectivity, and reducing the attack surface, the solution renders cellular traffic invisible to attackers while enhancing performance and security worldwide. Whether integrated into existing telecom networks or deployed directly by Zscaler, the service securely connects IoT devices, optimizes cellular traffic, and scales effortlessly to meet modern mobile security demands, ushering in a new era of secure mobile connectivity. Explore more about Zscaler Cellular to understand how it empowers enterprises to confidently embrace the connected world.


It’s time to up-level your data center for AI and sustainability

There’s no letting up on the demand for more data center storage, with artificial intelligence (AI) and machine learning (ML) workloads emerging as the leading growth driver. It’s time to ask: “Is our data center ready for AI storage demands?”

Organizations have many options for meeting their growing storage needs: cloud, hybrid cloud, hard disk drives (HDDs), solid-state drives (SSDs), and tape archival solutions. That can make it challenging to arrive at the optimal solution as companies try to match their strategic priorities with investment criteria, which may not always perfectly align.

Unsurprisingly, the demands of AI and ML workloads are increasingly driving storage growth, now consuming an average of 24% of storage infrastructure, according to a February 2025 survey conducted by Foundry on behalf of Western Digital. The research included 109 decision-makers in IT-related roles, data and business intelligence, R&D, and executive management. Respondents were almost evenly split between midsize organizations (500–2,499 employees) and large enterprises (2,500+ employees). Participants could choose multiple reasons for their increased storage needs. Their most-cited drivers include:

68%: AI/ML workloads
54%: Expansion of private cloud and hybrid cloud environments
45%: High-resolution content
42%: Internet of things and edge computing
39%: Compliance and data retention policies

As they strive to balance available options with top priorities and bottom-line considerations, participants said they are increasingly relying on HDDs for both fast and mass storage. For example, 82% of the survey respondents expect to increase HDD investments over the next two years due to AI adoption; of those, 28% forecast “significant” increases.

Priorities and objectives

When survey participants were asked about the decision-making process for storage investments, all factors seemed vital:

90%: Longevity and durability of devices
86%: Total cost of ownership (TCO)
82%: Energy efficiency and power savings
72%: Compliance with sustainability goals
66%: Availability of trade-in or refurbishment programs
65%: The vendor’s commitment to environmental, social, and governance (ESG) initiatives
62%: Recyclability and circularity of storage components

The survey further reveals that there is no one-size-fits-all approach to the TCO conversation. Respondents cited many factors as critical to TCO, including storage density, power and cooling requirements, reliability, performance, and acquisition costs. All of these considerations should be taken into account, from both a CapEx and an OpEx perspective (a simplified worked sketch follows at the end of this section).

When respondents were asked about storage vendor selection, their priorities changed slightly. Although they still ranked reliability, durability, and performance highly, other issues bubbled up, most notably AI and analytics readiness (47%) and scalability (42%). Although energy efficiency and sustainability remain crucial, they were more commonly cited by midsize organizations as extremely or very important (88%) to storage provider selection, compared with 74% of enterprise-size organizations.
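As promised above, here is a deliberately simplified TCO sketch combining acquisition CapEx with power-and-cooling OpEx per terabyte. Every number is hypothetical; real models also weigh density, reliability, performance, and refresh cycles, as the survey respondents note.

```python
def storage_tco(acquisition_per_tb: float, watts_per_tb: float,
                years: int, power_cost_kwh: float = 0.12,
                cooling_overhead: float = 0.5) -> float:
    """Simplified per-TB TCO: acquisition cost (CapEx) plus energy for
    power and cooling over the service life (OpEx)."""
    hours = years * 365 * 24
    energy_kwh = watts_per_tb * (1 + cooling_overhead) * hours / 1000
    return acquisition_per_tb + energy_kwh * power_cost_kwh

# Entirely hypothetical inputs, for shape only: an HDD tier at $15/TB
# drawing ~0.8 W/TB vs. an SSD tier at $90/TB drawing ~0.5 W/TB.
print(f"HDD 5-year TCO per TB: ${storage_tco(15, 0.8, 5):.2f}")
print(f"SSD 5-year TCO per TB: ${storage_tco(90, 0.5, 5):.2f}")
```

Even this toy version shows why the acquisition gap tends to dominate: lower SSD wattage barely dents a purchase price several times higher.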
Growth comes with growing pains

Storage expansion is causing major pain points. On average, organizations said their storage demand grew 27% over the past year, and 51% of respondents reported increases of 25% to 50% or more. No one in the survey reported decreases in storage demands.

The pain arises predominantly from the costs associated with storage expansion, particularly for midsize companies: 60% rated it as a top challenge, compared with 43% of large enterprises. Other pains include:

Meeting AI and analytics performance demands
Security and compliance concerns
Data access speeds and latency issues
Managing unstructured data growth
Migration challenges (on-premises to cloud and vice versa)
Vendor lock-in or lack of flexibility

With many companies still in the relatively early stages of deploying AI, growing data demands will almost certainly amplify new or evolving storage challenges. Performance for AI and analytics workloads is table stakes, as are data retention and compliance requirements. Yet there’s also the issue of ensuring, and paying for, high-speed data access and retrieval. AI applications are causing a surge in data generation and usage. Users need to access and process AI workloads at high speed and with low latency, creating new obstacles for evolving storage architectures.

As organizations accelerate their adoption of AI, the demands on enterprise storage are changing rapidly, and decision-makers must consider how the scale, speed, and complexity of AI applications will impact their infrastructure. The survey found that most companies use two or more storage solutions for AI workloads. Midsize companies rely more on hybrid cloud and HDD solutions, whereas larger companies lean more heavily on hybrid cloud and cloud object storage. Midsize organizations are also more reliant on SSDs than enterprises are. Each workload is unique, and success depends on aligning the right storage technologies, whether HDD, high-performance flash, or cloud, with the business outcomes that organizations are seeking from AI.

Critical needs and challenges

Reliability, scalability, power, cooling, and overall costs all shape strategies and decision-making for enterprise storage. The impact of AI overlays these concerns and increases pressure on organizations to develop storage strategies that are high-performing but also intelligently tiered, scalable, and cost-aware. Data in the data center is not homogeneous; different applications and data types have varying access requirements such as frequency, latency, and cost sensitivity. This necessitates a tiered storage approach that is cost-effective and flexible enough to accommodate new and evolving AI use cases (a minimal tiering sketch appears at the end of this article).

Meeting these challenges demands an approach that combines next-gen storage technologies with deep architectural flexibility. HDDs continue to offer opportunities for data center “warm” storage, where the vast majority of data lives. For some organizations, tape still has a role for “cold” archival and backup data, due to its low cost and infrequent access. Aligning storage with the specific nature of the workload is key to unlocking AI’s full potential while optimizing total cost of ownership. Whether your storage priority is AI readiness, sustainability, TCO, or performance, Western Digital can help ensure that you meet your goals. Click here for more information.
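To close out the tiering discussion with something concrete, here is a minimal sketch of an access-pattern-based tier assignment. The thresholds and categories are illustrative only, not drawn from the survey.

```python
def assign_tier(days_since_access: int, reads_per_day: float) -> str:
    """Illustrative tiering rule: hot data on flash, the 'warm' bulk on
    HDD, rarely touched archives on tape. Thresholds are arbitrary."""
    if reads_per_day > 100 or days_since_access < 7:
        return "ssd"   # latency-sensitive, frequently read
    if days_since_access < 180:
        return "hdd"   # the warm majority of data center bytes
    return "tape"      # cold archive: cheapest per TB, slow to retrieve

print(assign_tier(days_since_access=2, reads_per_day=500))  # ssd
print(assign_tier(days_since_access=90, reads_per_day=1))   # hdd
print(assign_tier(days_since_access=400, reads_per_day=0))  # tape
```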


Navigating the crunch point: Volatility and change in manufacturing

Manufacturing is under pressure from all sides, from tariffs to recession worries to extreme competition. But right here, right now, leaders across the industry are rising to the occasion and investigating every advantage technology offers. I’ve seen it firsthand, both in the data and between the trendlines.

My colleagues at Rockwell Automation set aside time each year to survey thousands of manufacturing professionals about their experiences with and uses of smart technology. What’s working? What isn’t? Which internal and external factors are motivating their changes? It’s a process I’m proud to support, and I always look forward to comparing this quantitative data against the qualitative understanding I’ve gained through decades of conversations in the field. Earlier in my career, I worked as an industry consultant helping customers apply solutions to solve problems. Now, as a business unit leader, I talk often with leaders who are looking to the future and making sure we are aligned. This almost always results in discussions about what future trends are likely, how manufacturing will evolve, and how we can jointly make the best business decisions possible to be prepared and reduce risks.

When I reviewed the data for our 10th survey, I saw an industry caught between multiple rocks and hard places. People alone cannot match the hour-by-hour volatility of current economic conditions or keep up with the cybersecurity arms race that leaves supply chains vulnerable, and 81% of respondents confirmed that these internal and external pressures are accelerating their digital transformation timelines. This makes sense: manufacturers need to fill gaps. However, they also need to beat their competition to the AI use cases that will generate current and future value, whether that’s mass adoption of physical AI on the factory floor or pragmatic quality control. And surrounding it all is an industry-wide resistance to change.

Manufacturing needs AI, but manufacturers are still figuring out where and how

Manufacturing leaders are almost unanimously adopting AI; our survey this year found that 95% of respondents are turning to the technology. This doesn’t surprise me based on what I’ve seen firsthand, but I was excited to see established use cases from last year’s research turning into best practices.

Notably, AI-powered quality control is changing manufacturing. Nearly half of the respondents (48%) plan to deploy this use case. In the field, I see the impact human error can have on quality control, especially in situations like our current trade conditions. Manufacturers now must quickly adjust where and when things are made, and that means new processes and people come into play. That introduces opportunity for human error, leading to lower quality, so it is important to apply these AI use cases in conjunction with flexible automation solutions to ensure quality is maintained.

Our survey’s respondents also highlighted cybersecurity as a key AI use case: manufacturing companies accounted for 21% of all ransomware attacks in 2024, and only inflation and economic growth ranked as more concerning risks among our respondents. As bad actors adopt more sophisticated tactics to deploy cyberattacks, manufacturers are realizing that they can’t have people “watching” the system for intrusions; the volume and complexity are simply too great. They are relying more on AI to do that watching for them and catch threats more quickly.
In fact, nearly half of our survey respondents indicated they plan to use AI/ML for cybersecurity over the next year. We’re even seeing industry leaders pivot from reactive to proactive: they’re planning improvements in system hardening, patching, and monitoring, and tying those plans to current risk levels. This philosophical shift is especially noticeable in end-of-life (EOL) migrations. Historically, manufacturing EOL policy has been “since it is running, don’t touch it…” That approach resulted in old systems with out-of-date or obsolete parts remaining in critical systems.
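The article does not say which techniques these manufacturers use; one common way to let AI do the “watching” is unsupervised anomaly detection over network telemetry. A minimal sketch with scikit-learn follows, using entirely synthetic data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic telemetry: [bytes_out_kb, connections_per_min] per plant node
baseline = rng.normal(loc=[500.0, 5.0], scale=[50.0, 1.0], size=(1000, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A node that suddenly starts exfiltrating data looks nothing like the baseline.
suspect = np.array([[5000.0, 60.0]])
print(model.predict(suspect))  # [-1] flags an anomaly worth an analyst's time
```

The appeal for stretched security teams is exactly what the article describes: the model watches everything continuously and surfaces only the handful of outliers a human should inspect.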


Don’t let cloud security hinder or slow your AI-driven business innovation

Given that nearly 90% of large organisations in the Asia Pacific (APAC) have adopted multi-cloud architectures, it’s safe to say that the transformation to embrace AI-driven innovation is well underway. These multi-cloud environments bring scalability, performance, and agility but, on the flipside, present unprecedented complexity. And so, right across the region, CIOs and their teams are grappling for ways to move fast with AI while maintaining secure and resilient environments.

Security complexity is outpacing traditional tools and teams

This isn’t an easy challenge, because the tools that once secured infrastructure are struggling to protect rapidly evolving stacks. It’s not something that manpower alone can solve either, as security teams are being outpaced by developers deploying AI-powered applications at scale. Consequently, there’s a real chance of ending up with a landscape marked by fragmented visibility and inconsistent controls. There’s also the risk of AI deployments without governance, disconnected security, and skill shortages within developer teams leading to poor enforcement of policies and difficulty in managing the complexity of multi-cloud and AI infrastructure. Ultimately, this will compromise the ability to deliver innovation. It also underpins why Foundry’s 2024 Security Priorities Study found that securing cloud infrastructure and protecting sensitive data are top concerns for Asia Pacific security leaders in 2025.

A new model for securing AI-native, multi-cloud environments

Understanding the risks, Wiz partnered with Amazon Web Services (AWS) to redefine cloud security for the AI era. As a unified cloud security platform, Wiz offers a solution that aligns speed, security, and innovation. Designed for cloud-native environments and tightly integrated with AWS, Wiz empowers teams to innovate faster without sacrificing protection. Its benefits to security teams include:

1. Rapid full-stack visibility

Wiz scans all layers of an AWS environment, including VMs, containers, serverless, and data services, to give teams a comprehensive, real-time view of everything happening in the cloud. It connects the dots across cloud services using its Security Graph, delivering context-rich insights in minutes. This visibility extends to AI tools, including shadow AI deployments that may go undetected in siloed environments. Through its AI-BOM (AI Bill of Materials), Wiz identifies generative AI tools in use, even those deployed without approval, so teams can understand and manage risks early.

2. Prioritised risk management

Managing large, complex multi-cloud environments means understanding that not every vulnerability needs immediate attention. Wiz helps security teams cut through the noise by prioritising the most critical risks: those that are exposed, exploitable, and high impact (sketched briefly below). This reduces alert fatigue, minimises the resources required to manage an environment, and ensures teams are focused where it matters most. The platform also integrates seamlessly with AWS services like GuardDuty, CloudTrail, and Security Hub, providing enriched telemetry that supports smarter decisions.

3. Secure migrations and modernisation

Many enterprises struggle to secure cloud migrations, especially during mergers, acquisitions, or re-platforming efforts. Wiz accelerates these journeys with agentless deployment, faster assessments, and guided remediation steps. Its deep AWS integration allows teams to consolidate infrastructure securely and efficiently. For example, Wiz recently helped Asia Pacific-based enterprises Ansarada and Handshakes navigate complex cloud environments using its full-stack visibility and remediation capabilities. They, in turn, reported a multitude of benefits, from achieving a 360-degree view of the cloud landscape with built-in compliance capabilities to better empowering the IT team to proactively remediate security risks while the business focused on growth and product development.
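Wiz’s actual scoring is proprietary; as a rough sketch of the prioritisation idea above, requiring exposure, exploitability, and impact to coincide before a finding surfaces, consider the following. The Finding fields and the example backlog are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    internet_exposed: bool
    known_exploit: bool
    blast_radius: int  # e.g. number of reachable sensitive assets

def critical(findings: list[Finding]) -> list[Finding]:
    """Keep only findings where exposure, exploitability, and impact all
    coincide (the 'toxic combination' idea), worst blast radius first."""
    hits = [f for f in findings
            if f.internet_exposed and f.known_exploit and f.blast_radius > 0]
    return sorted(hits, key=lambda f: f.blast_radius, reverse=True)

backlog = [
    Finding("unpatched library on an isolated VM", False, True, 0),
    Finding("public bucket with no exploit path", True, False, 0),
    Finding("exposed VM, exploitable CVE, path to PII store", True, True, 12),
]
for f in critical(backlog):
    print(f.name)  # only the last finding demands immediate attention
```

The design point is that filtering on the conjunction, rather than alerting on each factor alone, is what collapses thousands of raw findings into a short, actionable queue.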
4. Secure AI innovation

One of Wiz’s most powerful differentiators is how it supports safe AI adoption. Developers can build and deploy AI services using AWS tools while relying on Wiz to detect misconfigurations, enforce best practices, and eliminate exposure paths. Using Amazon Bedrock, organisations can even apply AI models to support security operations, automating tasks such as creating awareness training or analysing communication records during M&A due diligence (a minimal sketch appears at the end of this article).

Together, Wiz and AWS ensure that security is no longer a barrier to AI innovation but a foundation that supports it. By leveraging Wiz on AWS, organisations have reported shipping products three times faster, achieving ten times greater efficiency in remediating high-priority risks, and gaining five times better visibility across AI workloads. For CIOs and CISOs, these core capabilities mean confidence: they can move past the traditional trade-off where innovating quickly means taking on additional risk.

Ready to experience the magic of Wiz? Join us for a gamified Immersion Day where you’ll get hands-on with the Wiz platform, learn how to tackle real-world cloud security challenges, and put your skills to the test in a Capture the Flag competition, complete with prizes for the top finishers. Stick around for beer, pizza, and networking to wrap up the day with your peers. Don’t miss this unique opportunity to see how Wiz is redefining cloud security.
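As a footnote to the Bedrock capability mentioned above, here is a minimal boto3 sketch that asks a Bedrock-hosted model to draft awareness-training content via the Converse API. The region, model ID, and prompt are placeholders; use a model actually enabled in your own account.

```python
import boto3

# Placeholder region and model ID: substitute a Bedrock region and
# model that are enabled in your own AWS account.
bedrock = boto3.client("bedrock-runtime", region_name="ap-southeast-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Draft a five-point phishing-awareness "
                             "checklist for finance staff."}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```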


How a solution for a company created an ecosystem for a nation

AGIS’s revolutionary manufacturing platform

Following the political uncertainty and supply chain disruptions that started during the pandemic, Al Ghurair Iron & Steel (AGIS), the largest producer of galvanized flat steel in the United Arab Emirates (UAE) with a history stretching back more than 15 years, realized that it was time to embrace the future. Despite enjoying a reputation for manufacturing products fashioned to customer requirements and special requests, AGIS needed the technology to strategize long-term planning, monitor supply chain shifts, quickly fulfill orders, and integrate shop floor reporting systems to adapt to short-term changes.

Today’s technology for tomorrow’s goals

Unfortunately, the company’s manufacturing process was hindered by tools that slowed down decision making and kept the company from achieving its full operational potential. Managing more than 500,000 metric tons of customer orders depended on Excel-based planning and costing, as well as offline, manual communication between the planning, procurement, sales, execution, and quality teams. Multiple redundant systems affected efficiency, and changing customer requirements led to constant adjustments and delays based on raw material supply and shifting sales trends. By and large, the entire end-to-end process was disjointed, hampering AGIS’s ability to effectively scale production capacity. Despite this, demand continued to grow, increasing the urgency to streamline planning and execution with machine learning (ML) and artificial intelligence (AI) as part of a new manufacturing ecosystem that would transform AGIS into an innovation leader.

AI as a unifier

The Al Ghurair Group’s grassroots heritage has been woven into the fabric of the UAE since 1960, when it began as a small trading business during a time when the country’s mainstays included diving and fishing. But as the nation advanced, the Al Ghurair Group helped restyle the business culture, opening the UAE’s first galvanized steel plant (AGIS), flour mill, bank, and shopping mall en route to becoming one of the Middle East’s most diversified business groups.

AGIS, commissioned in 2008, now needed a robust manufacturing solution to boost operational efficiency across the manufacturing process. The platform’s goals included structuring the workflow for shop floor operators, some of whom could quickly become overwhelmed by technological changes. Fortunately, that challenge could be eased by providing a unified user interface reliant on automated data logic while limiting the amount of data entry required.

To lighten the weight of the undertaking, AGIS turned to SAP, the world’s top enterprise resource planning (ERP) software vendor. With more than 130 AI scenarios available at the time (some 400 are expected to be implemented by the end of 2025), SAP boasted the world’s largest AI portfolio. And because of the amount of SAP software already utilized by AGIS via SAP’s Business Technology Platform (BTP), adoption would be simplified. SAP, in turn, chose Deloitte as a participating partner. The multinational professional services specialist proposed important design aspects of a solution built largely upon SAP Business AI.
The collaboration enabled AGIS to refine the solution to meet operational needs by factoring plant maintenance, inventory, procurement, environmental, safety and health issues, and other concerns into the algorithm. Production planning and execution would now be automated based on order requirements, while dashboards with automated controls kept track of the end-to-end processes.

Precision bionetwork

The three-way partnership led to AGIS deploying the solution in December 2024. “With the innovative SAP Business AI solution, we will optimize processes, integrate value chains, and improve data-driven decision making,” predicted the company’s CEO, Abu Bucker Husain. The statement quickly proved prophetic. For the first time, AGIS has been able to shatter the data silos that slowed production efficiency. The platform uses AI to automate more than 300 complicated processes and material calculations based on some 90 attribute variations, tracking raw material availability and proposing more than 120 business logic and quality parameters. This speeds up order execution and QA inspections and ensures every product meets high quality standards, with less effort from the team.

And with 90 percent process digitization, the accuracy, legitimacy, and completeness of customer orders can be evaluated and executed in a timeframe that would previously have been unfathomable. For example, order planning time has plummeted from 15 to 20 minutes to less than five, while the unified operator dashboard has cut batch execution time from eight to ten minutes to less than three. Every day, more than 1,800 orders are processed, five times the previous amount, leading to an 80 percent improvement in end-to-end production planning. AGIS now uses BTP capabilities to capture real-time data for reporting, providing insights into sustainability metrics and production efficiency to track and report on its environmental efforts.

As a result of these trailblazing improvements, AGIS was honored as a winner in the Transformation Titan category at the 2025 SAP Innovation Awards, a yearly event celebrating organizations using SAP technology to change the way business is conducted. You can glean further details about AGIS’s accomplishment in their pitch deck. The recognition embodies the modern version of a company respecting the traditions of yesterday while reshaping tomorrow. To learn more, visit us here.


The missing backbone behind your stalled AI strategy

First is the AI strategy leader. This person serves as the connective tissue across the enterprise, defining the AI roadmap, evolving the operating model, and orchestrating how the rest of the CoE supports the broader organization. They think through priorities, risks, and investment sequencing, and often develop reusable assets such as intake forms, validation templates, and lifecycle checklists that domain teams can adopt and adapt for their own use cases (one possible shape for such an asset is sketched below). They also play a critical role in promoting awareness and adoption of responsible AI frameworks and in facilitating reviews for sensitive or cross-functional use cases, often via an ethics or risk committee.

The second essential role is the architect. This individual owns the technology architecture that underpins enterprise-scale AI. They’re responsible for designing and maintaining the shared infrastructure: things like secure, GPU-enabled sandboxes, model registries, and MLOps pipelines. These inputs allow domain teams to build and deploy responsibly and efficiently. They also define and enforce enterprise-wide data governance standards, recognizing that, like any technology, AI depends entirely on the quality and context of the data it consumes.

Next is the teacher, a role we think every CoE should prioritize early. This person leads the education motion across the organization, building awareness around the benefits and risks of AI and enabling teams to upskill continuously as the technology evolves. They’re responsible for designing role-based learning programs and for training the spokes on key delivery processes and enterprise guidelines.
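The article doesn’t prescribe a format for these reusable assets, but to make the idea concrete, here is one hypothetical shape for a use-case intake record: it bundles a lifecycle stage, a validation checklist, and a simple rule for routing sensitive cases to the ethics or risk committee.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseIntake:
    """Hypothetical reusable intake asset: one record per proposed AI use
    case, carrying a lifecycle stage and a validation checklist."""
    title: str
    owner: str
    data_sources: list[str]
    uses_personal_data: bool
    customer_facing: bool
    lifecycle_stage: str = "proposed"  # proposed -> piloted -> production -> retired
    checklist: dict = field(default_factory=lambda: {
        "validation_plan": False,
        "bias_review": False,
        "rollback_plan": False,
    })

    def needs_committee_review(self) -> bool:
        # Simple routing rule: anything sensitive goes to the committee.
        return self.uses_personal_data or self.customer_facing

intake = UseCaseIntake("Invoice triage assistant", "finance-ops",
                       ["erp_invoices"], uses_personal_data=True,
                       customer_facing=False)
print(intake.needs_committee_review())  # True: route to the ethics/risk committee
```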


Storage best practices: How to address the challenge of scaling AI workloads

Artificial intelligence (AI) technologies and hybrid/multi-cloud trends are putting pressure on organizations to optimize their storage strategy to ensure data availability while enabling scalability and efficiency. For example, generative AI (genAI) applications have further accelerated data creation, which in turn increases the need for efficient, available storage that is also cost-effective. Optimizing all that data, whether in the cloud or in enterprise data centers, depends largely on tiered data storage, which uses a mix of hard disk drives (HDDs), solid-state drives (SSDs), and the ever-persistent archival tape storage.

“Different applications and data have varying requirements around access frequency, speed, and cost-effectiveness,” says Brad Warbiany, director, planning and strategy at Western Digital. “As AI datasets, checkpoints, and results grow in size and volume, high-capacity HDDs are the only cost-efficient bulk storage solution for cold and warm data with an essential role alongside cloud-optimized technologies for modern AI and data-centric workloads.”

IT and business decision-makers, as well as technologists and influencers from our CIO Experts Network, echoed this strategy when we asked: How can organizations address the biggest challenges in scaling storage infrastructures while balancing cost efficiency, sustainability, and long-term total cost of ownership (TCO)?

The agility angle

“As data volume, driven by AI, continues to increase, organizations must leverage data life cycle policies and auto-tiering to optimize storage capacity and control costs, ensuring data is dynamically moved to lower-cost tiers as it becomes less active,” says Hasmukh Ranjan (LinkedIn: Hasmukh Ranjan), senior vice president and CIO at AMD. (A minimal sketch of such a policy appears at the end of this section.)

Other experts agree that with AI rapidly evolving, organizations need flexibility and adaptability to meet future needs. “Implementing agile, high-performance storage platforms is crucial for handling the dynamic and ever-expanding nature of AI workloads,” says Chris Selland (LinkedIn: Chris Selland), independent consultant, analyst, and lecturer on entrepreneurship and innovation at Northeastern University’s D’Amore-McKim School of Business. Selland points out that incorporating tools such as tiered storage can optimize costs by aligning storage resources with evolving data requirements. Automated data life cycle policies can help ensure that “data is stored on the most appropriate storage tier based on its age, access requirements, and business value.”

While data center SSDs provide key advantages, such as low latency, those advantages are not sufficient to justify a higher TCO for many applications, and SSD acquisition costs can run six times greater than HDD. Even during periods of significant SSD price drops, the TCO advantage of HDD has blunted any major shift in data center market share.
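Returning to Ranjan’s point about data life cycle policies and auto-tiering: in cloud object stores this is a built-in feature. A minimal boto3 sketch for Amazon S3 follows; the bucket name, day thresholds, and retention period are illustrative only.

```python
import boto3

s3 = boto3.client("s3")

# Illustrative rule set: objects drift to colder, cheaper tiers as they
# age, then expire after roughly seven years for retention compliance.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-ai-training-data",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "auto-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 2555},
        }]
    },
)
```

Once a rule like this is in place, the tiering happens automatically; the on-premises equivalent is the HDD/SSD/tape alignment discussed throughout this article.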
Meeting goals for business value

According to experts, many organizations are striving to balance their storage needs with sustainability goals. “There needs to be a balance within companies to increase their storage demands that AI drives with that of staying energy efficient so as not to grow their organization’s carbon footprint,” says Scott Schober (@ScottBVS), president and CEO at Berkeley Varitronics Systems.

“Balancing performance with sustainability requires a collaborative multi-generational team that can devote attention to your storage infrastructure,” says Will Kelly (LinkedIn: Will Kelly), a writer focused on AI and the cloud, “while also extending their focus to controlling data sprawl and optimizing your cloud storage tiers while cultivating an architecture that can scale and adapt as your AI workloads evolve.”

Then there’s the issue of assigning storage based on its value to the business, says Arsalan Khan (@ArsalanAKhan), speaker, advisor, and blogger on business and digital transformations: “One of the biggest challenges is striking the right balance between collecting data for strategic, high-value use cases versus just accumulating data without a clear purpose. When scaling storage infrastructure, it’s critical to align these considerations with cost efficiency, sustainability, and long-term TCO.”

That reinforces the need to assign storage tiers based on the value of the data. Savvy administrators will prioritize TCO and HDDs for lower-performance cool/warm workloads, which make up the bulk of the data center environment, while strategically deploying SSDs for workloads that benefit from a performance advantage.

The rapid deployment of genAI technology can exacerbate the challenges for those whose storage infrastructure can’t keep up, say experts. “GenAI is extending the business value of cleaned data, including real-time transactional data, unstructured data used for training AI models, and long-term archived data required for compliance,” says Isaac Sacolick (@nyike), president of StarCIO and author of Digital Trailblazer. “IT teams manage many data types in data warehouses, data lakes, cloud file systems, and SaaS — with different performance and compliance requirements. The challenge for CIOs is defining and managing an agile storage infrastructure that scales easily, enables moving data depending on business need, meets security requirements, and has low-cost options to fulfill compliance requirements.”

Kumar Srivastava (LinkedIn: Kumar Srivastava), CTO at Turing Labs, adds: “Rapid growth in data from R&D formulations demands agile, scalable storage solutions that support AI-driven analysis with data spanning multiple formats, structure, and quality. Ensuring low latency for data access while integrating modern tools with legacy systems is critical.”

Also, as with just about anything involving IT, enterprises are contending with the IT skills gap, which affects storage management. “Inexperience in allocating dynamic resources for complex AI models results in poor orchestration, a costly problem,” says Peter Nichol (LinkedIn: Peter Nichol), data and analytics leader for North America at Nestlé Health Science. “This creates idle resources and encourages overprovisioned clusters, leading to waste. Cost leakage occurs more frequently than you might think.”

Consider the architecture

The intersection of AI and storage strategies necessitates a well-thought-out approach to storage architecture. It is critical to align appropriate storage types with the business outcomes that organizations are seeking from AI. HDDs provide a significant and persistent TCO advantage, making them a preferred option for a dominant share of tiered storage architectures and a cost-effective path to those outcomes.
Learn even more about efficient scaling of the data center by reading the whitepaper “The Long-Term Case for HDD Storage.”


Textron takes flight with gen AI

On his second day as Textron’s global CIO, Todd Kackley found himself in the spotlight. During his first executive staff meeting, CEO Scott Donnelly turned to him and asked, “What are we going to do about generative AI?”

There was no room for hesitation. Kackley, a longtime Textron executive who had most recently served as divisional CIO, leaned into the question in a way that would shape the company’s next major technology breakthrough: “Let me demonstrate the value,” he said. Three months later, he returned to the same room with results that would convince even the most skeptical of leaders. But the story of how Textron, a $13.7 billion industrial conglomerate known for brands like Cessna, Beechcraft, and Bell, accelerated its use of gen AI goes far deeper, and reveals critical lessons for technology leaders everywhere.

A leap, not a request

Kackley didn’t begin with a resource ask. “I had no budget, no tech, and no team for this,” he recalls. “But I had trust. I had a team that had learned how to innovate quickly and take risks.” That trust was hard-earned. Just a few years earlier, he led through personal adversity, including a cancer diagnosis, and discovered the power of vulnerability and transparency. It was a leadership approach that unlocked new levels of trust and creativity within his teams.


Reap the benefits of heterogeneous, multi-vendor networks without their complexity

Rapid artificial intelligence (AI) growth among enterprises is driving the next phase of digital transformation, with the technology presenting unprecedented business opportunities. This enthusiasm has seen applications grow tenfold on the back of AI-powered cloud migration, with over 80 percent of services now running in the cloud. Eager to reap the tremendous value of AI while responding quickly to market changes, organisations are combining different business functions across different systems through agile application integration. This approach, however, places immense pressure on their networks, as companies require high network resilience and operational efficiency. The pressure is compounded in the financial sector, where even a second’s delay can result in missed opportunities and financial losses.

Addressing network concerns with heterogeneous networks

High network performance is necessary for meeting the demands of real-time transactions, online banking, and data analytics. Heterogeneous networks, which incorporate devices and solutions from multiple vendors to ensure seamless connectivity, have emerged as a viable way to address reliability issues without sacrificing latency. They offer financial institutions improved coverage and capacity compared with traditional networks, allowing them to use devices and solutions from their preferred vendors and eliminating vendor lock-in, while maintaining optimal network performance even over congested bandwidth.

That said, many institutions run into significant challenges. A major concern is interconnectivity: most vendors have distinct network protocols, which can result in poor interoperability. Companies also experience rising operational costs when managing multiple vendors, as advanced network management tools and expertise are usually necessary to ensure network visibility. Such tools become especially vital when identifying which service or device is not working to its fullest capacity. Ensuring these technologies work seamlessly across service providers can also be time-consuming, with expenses inflated by initial setup costs. Then there are security risks, as heterogeneous networks can be more vulnerable to data leaks.

Delivering a centralized overview across multi-vendor networks

Overcoming these challenges requires a solution that can not only offer virtual network provisioning across a multi-vendor architecture but also address the myriad challenges of integration, security, and management for heterogeneous networks. To this end, Huawei has designed the Xinghe Intelligent Ultra-Resilient Financial Data Centre Network to ease the deployment of such networks. Recognized as a Leader in the 2025 Gartner Magic Quadrant for Data Center Switching, Huawei achieved the highest score in the “Completeness of Vision” category. Its breadth of experience in connectivity solutions means that companies no longer have to limit themselves to a single network vendor or wrestle with the complexity of managing a multi-vendor environment.

An AI-powered connectivity solution delivering a central, unified view for managing multi-vendor networks, the Xinghe Intelligent Ultra-Resilient Financial Data Centre Network allows services and devices to operate as a single cohesive system, not unlike a conductor orchestrating multiple instruments to produce a harmonious symphony.
The solution is built upon the Huawei CloudEngine series of data center switches, which delivers high-performance connectivity, and Huawei iMaster NCE, a network cloud engine serving as the network’s neural center. Together, they give Xinghe reliable, secure network performance and smart operations and management (O&M) capabilities.

Steady deployment: Xinghe drives effective, error-free provisioning with the iMaster NCE network digital map. This gives users a network application analysis that offers visibility into the network, so that security policies can be analyzed and deployed within minutes. At the same time, the most suitable firewall configurations are automatically recommended, with heterogeneous firewall setups implemented within five minutes. Xinghe can also integrate various IT service management (ITSM) tools to streamline and deliver end-to-end IT services.

Stable reliability: With failover happening within milliseconds in the event of network disruptions, Xinghe ensures zero service interruptions with CloudEngine switches, bolstered by intelligent lossless network technology and multi-layered reliability functions.

Smart O&M: Xinghe is equipped with Huawei NetMaster, a large network model, which learns from businesses’ network fault-handling experience so the solution can automatically resolve faults alongside AI agents. This reduces network mean time to repair (MTTR) to less than five minutes (a trivial worked MTTR calculation follows below). The solution can also tackle network faults with “1-3-5” troubleshooting of intra-data centre faults, as well as precise, minute-level demarcation of faults across data centres.

Find out more about how Xinghe can help you bolster your financial institution’s network performance and intelligent digital transformation efforts with AI.
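MTTR itself is simply the mean repair time across incidents; as a small illustration of the metric behind the sub-five-minute claim above, here is a Python calculation over a made-up incident log.

```python
from datetime import datetime

# Made-up incident log: (fault detected, service restored) pairs.
incidents = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 9, 4)),
    (datetime(2025, 3, 5, 14, 30), datetime(2025, 3, 5, 14, 33)),
    (datetime(2025, 3, 9, 2, 15), datetime(2025, 3, 9, 2, 21)),
]

repair_minutes = [(end - start).total_seconds() / 60 for start, end in incidents]
mttr = sum(repair_minutes) / len(repair_minutes)
print(f"MTTR: {mttr:.1f} minutes")  # about 4.3 here, under the five-minute target
```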
