CIO

The need to add agentic AI to the university syllabus

On rolling out agentic AI: I can share the success formula that we’re following, which I think is relevant pretty much anywhere. It’s not easy, but it’s important, and it starts with the commitment. You can have a leader, and I’ll use a higher ed example: Michael Crow, president of Arizona State University, says let’s make this happen. People get in line and get it done. But that’s not very common. A lot of times it has to grow from a line of business leader in corporate America, or from within higher ed. And you need to make sure there’s buy-in at that level. Then it has to be linked to the cause. And with agentic AI, because you’re using it to make what you’re doing more productive, you’ll find the way to do it right as the cause morphs over time. But you need to really think about your business. This should be an evolution for what you do. Don’t just do the things you do today faster or with more information. You should be doing something different or broader with it, because it gives you that opportunity. So that’s where the execution phase comes in.

On AI governance: It’s critical, of course, because when I see AI governance under the data management office, I shudder, because it’s really about the human experience. I’m not saying to put it under HR either, but it needs to report up at the level where you’re understanding the impacts on the human side of things. And humans can be the customers, employees, your stakeholders, or partners. It’s not about saving jobs. It’s about the human element and doing it right. All those pieces fall into that, so you’d want the governance to be that way. That’s the model we’re following, and we’re seeing success with it.

On attitudes to AI in higher ed: I like the maturity model that’s in place, and with a few tweaks, it could fit well in higher education. Another component of culture is needed, though, because the people piece is mostly around workforce development, which is critical. In my experience, corporate cultures change more flexibly, and in a more positive way, than higher ed, where there’s an attitude of preserving how things have always been taught and how people want to do things. When I started teaching my class, I asked how many were using some form of gen AI or a copilot provided on campus, and I think two people raised their hands. But people were using DeepSeek underground, because they’ve got other professors saying you’re not allowed to use it. But I say please use it. You need to learn to teach differently. That’s what instructors need to think about, and not just in higher ed. Anybody who’s growing, training, and leading staff needs to think about it in that way. Use it as a gift and not as a barrier.


Unlock the full potential of AI with a unified Cloud, Data, and AI (CDAI) strategy

Modernizing AI and cloud for competitive advantage

AI has evolved from just a buzzword into a game-changer for business innovation, growth, and competitive differentiation. Yet many companies are still tangled in fragmented AI solutions that deliver only isolated benefits. While these solutions might address immediate problems, they also create inefficiencies, data silos, misaligned investments, and higher costs. This patchwork landscape stifles progress and puts companies at a disadvantage in the race for innovation.

Recognize the importance of a unified AI and cloud strategy

To overcome the challenges of fragmented AI solutions, enterprises need a cohesive strategy that integrates AI, cloud, and data capabilities. An AI-enabled cloud can transform your data processing, analytics, and automation capabilities, ensuring that your operations are optimized and streamlined. Imagine an AI-powered cloud as a digital assistant for your business, automating tasks, predicting trends, and making smarter decisions faster than ever.

Overcome the challenges of fragmented AI solutions

Fragmented AI systems create numerous challenges that hinder business progress, such as:

- Operational inefficiencies and bottlenecks: increased workloads for IT teams, hindered productivity, and delays in crucial projects
- Limited AI capabilities and ROI: investments that fail to deliver exponential outcomes, limiting returns
- Scalability issues: downtime, performance problems, and lost business opportunities
- Data silos: impeded decision-making and limited business insights
- Cloud inefficiencies: increased costs and struggles with effective deployment and management, which slow innovation
- ERP and CRM integration issues: missing unified data views, causing missed opportunities and inefficient processes
- Poor customer experiences: dissatisfied customers and competitive disadvantage
- Higher costs: overspending on maintenance and upgrades, reducing profitability
- Insufficient data integration: limited ability to leverage real-time analytics and respond to market changes
- Reduced capacity for innovation: slower time-to-market for new products and services

Make the leap to a more unified approach to AI and cloud

This transformation is more than just a technology upgrade; it aligns IT with broader business goals. Maintaining fragmented systems is costly in both money and efficiency, so it is critical for companies to shift from isolated AI solutions to a unified, AI-powered cloud approach. You can achieve:

- Long-term gains: While transitioning to an AI-enabled cloud infrastructure involves upfront costs, the long-term benefits of improved efficiency, faster processes, and higher ROI make it a worthwhile investment.
- Managed complexity and simplification: The transition is complex and requires careful planning and execution; intentional investment in simplifying the IT landscape is crucial to managing that complexity effectively.
- Alignment with overall business goals: Avoid discrete technology upgrades; tying the work to business outcomes builds credibility, demonstrates more impressive results, and secures more funding.

Build an AI-first approach

Learn how we can help you build a cohesive, scalable, and adaptive technology environment that directly supports your strategic business goals.


Ways CIOs can set more realistic expectations with vendors

“We began as a startup, then embarked on a growth path thanks to several investment funds and, subsequently, by joining the Ca’ Zampa Group,” says Ciocia. “We know how crucial supplier relationships are in the early stages of development. When a company is still young, it often lacks the necessary internal expertise and focuses its resources on its core business, which, in our case, was the acquisition and development of veterinary clinics in Italy. In that context, we relied heavily on external partners to add expertise and operational capability, even though it’s not always easy to establish favorable conditions.”

The first critical aspect is finding the right-sized supplier. Small companies typically look for similarly sized suppliers, which are less expensive and more flexible than large ones. As the group grows, Ca’ Zampa is expanding its reach and evaluating larger suppliers to support the expansion of its services throughout Italy. For Ciocia, however, two elements remain key in collaborating with vendors: flexibility and trust.

The supplier question is just as critical for Brunetti. “We’re a typical Italian manufacturing SME that’s almost a large company, but we don’t yet have the size or budget of companies with large IT teams,” she says. “I coordinate a team of four people, and we need to provide our services to the entire group ecosystem, including the factories and some overseas branches. We need suppliers who can support us with both infrastructure and second-level systems services and support. Our challenge is finding those who are the right fit.”


From app errors to user adoption: The missing analytics layer in Salesforce Lightning

Salesforce has become the backbone of enterprise operations, with 90% of Fortune 500 companies relying on its platform to drive business processes.[1] What’s more, the company’s various cloud offerings generate over $20 billion annually,[2] cementing its position as the dominant force in customer relationship management and business automation. To maximize their Salesforce investments, organizations are dedicating significant resources to Salesforce Lightning, spending an average of $500,000 on Lightning implementations. This substantial investment reflects Lightning’s appealing promise: empowering both developers and business users to build custom applications through intuitive drag-and-drop interfaces, democratizing app development across the organization.

However, building applications represents only half the equation. Even the most sophisticated Lightning apps deliver zero value if employees don’t adopt them or abandon them due to poor user experiences. Even worse, when errors occur within these mission-critical applications, IT support teams often struggle to replicate issues, leading to prolonged resolution times and frustrated users.

The core challenge facing enterprise IT teams centers on visibility. While Salesforce Lightning excels at enabling app creation, it provides limited insight into how these applications perform in real-world usage. IT leaders find themselves operating without crucial metrics that could illuminate user behavior patterns, identify performance bottlenecks, and prioritize which application errors demand immediate attention.

This visibility gap creates cascading problems throughout the organization. Without understanding feature utilization rates, IT teams cannot determine whether customizations truly improve productivity. When applications underperform, support teams lack the session replay capabilities needed to witness exactly what users experienced during error scenarios. The result is decreased employee efficiency, lower satisfaction, and ultimately unnecessary friction that can degrade the end customer experience.

Analytics platforms for Salesforce Lightning address these challenges by providing the deep visibility that native Salesforce reporting cannot deliver, and the capabilities extend beyond basic monitoring. Organizations can optimize employee onboarding by analyzing how top performers navigate Lightning applications, creating data-driven training programs that replicate successful usage patterns. This intelligence is also invaluable for demonstrating return on investment with concrete evidence of whether employees can accomplish intended tasks within the applications.

Perhaps most importantly, comprehensive analytics enable efficient IT support case resolution. “One of the most valuable features of a comprehensive analytics platform for Lightning is support’s ability to replay sessions to recreate issues,” said Kartik Chandrayana, Chief Product Officer at Quantum Metric. “Support teams no longer need to operate in the dark, because they can observe the exact sequence of events that led to errors, which dramatically reduces troubleshooting time.”
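To make the visibility gap concrete, here is a minimal sketch of the kind of client-side instrumentation such platforms rely on: a script that captures unhandled errors and user interactions on a page and beacons them to a collector. The endpoint and event shape are hypothetical, not Quantum Metric’s actual API.

```typescript
// Hypothetical instrumentation sketch: forward errors and clicks from a
// Lightning page to an analytics collector. COLLECTOR_URL is a placeholder.
const COLLECTOR_URL = "https://analytics.example.com/collect";

interface AnalyticsEvent {
  type: "error" | "interaction";
  timestamp: number;
  page: string;
  detail: Record<string, unknown>;
}

function send(event: AnalyticsEvent): void {
  // sendBeacon survives page navigation, so events are not lost mid-unload.
  navigator.sendBeacon(COLLECTOR_URL, JSON.stringify(event));
}

// Capture unhandled errors with enough context to line up against a replay.
window.addEventListener("error", (e: ErrorEvent) => {
  send({
    type: "error",
    timestamp: Date.now(),
    page: location.pathname,
    detail: { message: e.message, source: e.filename, line: e.lineno },
  });
});

// Record which elements users actually interact with, for adoption metrics.
document.addEventListener("click", (e: MouseEvent) => {
  const target = e.target as HTMLElement;
  send({
    type: "interaction",
    timestamp: Date.now(),
    page: location.pathname,
    detail: { tag: target.tagName, id: target.id || null },
  });
});
```

A production platform adds sampling, batching, and privacy redaction on top of events like these; the point is that adoption and error data must be captured in the browser, where the user experience actually happens.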
The value of Lightning analytics becomes clear through practical implementation. A major telecommunications company deployed Quantum Metric Lightning Analytics to ensure efficient internal IT support case resolution. IT was able to provide hard usage metrics and recreate issues across its Salesforce Service Cloud environment and, as a result, uncovered critical issues that were silently undermining business performance. Within six months of implementation, the analytics platform identified 289 missed orders caused by agents encountering perpetual loading spinners in the purchase flow, resulting in conversion drops of up to 26%. The system also detected a 286% increase in frustration errors on the Select Offer button preceding the buy flow, correlating with a 36% decrease in conversion. These insights enabled the telecommunications company to prioritize fixes for the most business-impactful issues, shining a bright light on previously invisible problems.

For CIOs evaluating their Salesforce Lightning investments, the message is clear: building applications is just the beginning. True success requires comprehensive visibility into user experiences, application performance, and business impact, capabilities that extend far beyond native Salesforce reporting and deliver the insights necessary for maximizing enterprise software investments.

Learn more about Quantum Metric’s Salesforce Lightning Analytics.

[1] Salesforce FY25 Annual Report: Leading the AI Agent Revolution. https://s205.q4cdn.com/626266368/files/doc_financials/2025/ar/Salesforce-FY25-Annual-Report.pdf
[2] Ibid.


AI’s last mile just got a supercomputer, courtesy of ASUS and NVIDIA

They say that the most difficult part of transportation planning is last-mile delivery. A network of warehouses and trucks can bring products within a mile of almost all customers, but logistical challenges and costs add up quickly in the process of delivering those goods to the right doors at the right time. There’s a similar pattern in the AI space. Massive data center installations have empowered astonishing cloud-based AI services, but many researchers, developers, and data scientists need the power of an AI supercomputer to travel that last mile. They need machines that offer the convenience and space-saving design of a desktop PC but go well above and beyond the capabilities of consumer-grade hardware, especially when it comes to available GPU memory.

Enter a new class of AI desktop supercomputers, powered by ASUS and NVIDIA. The upcoming ASUS AI supercomputer lineup, spearheaded by the ASUS ExpertCenter Pro ET900N G3 desktop PC and ASUS Ascent GX10 mini-PC, wields the latest NVIDIA Grace Blackwell superchips to deliver astounding performance in AI workflows. For those who need local, private supercomputing resources, but for whom a data center or rack server installation isn’t feasible, these systems provide a transformative opportunity to seize the capabilities of AI.

Scaling up memory to meet the parameter count of large AI models

A key piece of the puzzle for accelerating locally run AI workloads is available GPU memory. If a given model doesn’t fit into local memory, it may run very slowly, or it may not run at all. The 32GB of VRAM provided by the highest-end NVIDIA consumer-grade graphics card on the market, the NVIDIA GeForce RTX 5090, is sufficient for many smaller models. But scaling up your system’s VRAM to handle models with even more parameters isn’t necessarily a straightforward affair. Multi-GPU systems are a feasible solution for some users, but others have been looking for a solution designed specifically for the needs of AI workflows. By equipping the Ascent GX10 and ExpertCenter Pro ET900N G3 with large single pools of coherent system memory, we’re able to put astonishing quantities of memory at your fingertips. The Ascent GX10 wields four times as much GPU memory as a GeForce RTX 5090, while the ExpertCenter Pro ET900N G3 offers up to 784GB, over twice as much GPU memory as a workstation equipped with four NVIDIA RTX PRO™ 6000 GPUs.

AI supercomputer performance in a desktop PC form factor

Designed from the ground up for AI workflows, the ASUS ExpertCenter Pro ET900N G3 will be one of the first pioneers in a new class of computers based on the NVIDIA DGX Station. This system is powered by the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip. Featuring an NVIDIA Blackwell Ultra GPU and an NVIDIA Grace CPU connected via the NVIDIA® NVLink®-C2C interconnect, this superchip provides a slice of data center performance in a desktop workstation. Even more so than today’s high-end desktop systems, the ExpertCenter Pro ET900N G3 ensures that businesses and researchers can develop and run large-scale AI training and inference workloads, thanks to up to 784GB of large coherent memory. It all runs on the NVIDIA AI software stack, including NVIDIA DGX OS, a customized installation of Ubuntu Linux purpose-built for optimized performance in AI, machine learning, and analytics applications, with the ability to easily scale across multiple NVIDIA DGX Station systems.
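As a rule of thumb, a model’s weights occupy roughly its parameter count multiplied by the bytes used per parameter, before any headroom for the KV cache or activations. The sketch below, with illustrative numbers rather than vendor specifications, shows why the jump from 32GB to hundreds of gigabytes of coherent memory matters.

```typescript
// Back-of-the-envelope estimate of the GPU memory a model's weights occupy:
// parameters x bytes per parameter. Real runs also need headroom for the
// KV cache, activations, and runtime overhead, so treat this as a floor.
function weightMemoryGB(paramsBillions: number, bytesPerParam: number): number {
  return paramsBillions * bytesPerParam; // 1B params at 1 byte/param ~ 1 GB
}

function fits(paramsBillions: number, bytesPerParam: number, memGB: number): boolean {
  return weightMemoryGB(paramsBillions, bytesPerParam) <= memGB;
}

// An 8B model at FP16 (2 bytes/param) needs ~16 GB: fine on a 32 GB card.
console.log(fits(8, 2, 32)); // true
// A 70B model at FP16 needs ~140 GB: beyond any single consumer GPU,
// but comfortable in a 784 GB coherent-memory workstation.
console.log(fits(70, 2, 32)); // false
console.log(fits(70, 2, 784)); // true
// A 200B model quantized to 4 bits (0.5 bytes/param) needs ~100 GB,
// which is why a 128 GB unified-memory system can host it.
console.log(fits(200, 0.5, 128)); // true
```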
The AI supercomputer in the palm of your hand: the ASUS Ascent GX10

The ASUS ExpertCenter Pro ET900N G3 is much easier to deploy than a solution based on rack servers, but there are situations where even a desktop-class form factor is still too large. The ASUS Ascent GX10 democratizes AI by putting petaflop-scale AI computing capabilities in a design that you can hold in the palm of your hand. The ASUS NUC lineup demonstrates our proven expertise in offering complete PC experiences in ultracompact designs. No mere iterative step forward, the Ascent GX10 takes our experience in the mini-PC market and melds it with the groundbreaking performance of the NVIDIA GB10 Grace Blackwell Superchip. This superchip connects a 20-core Arm Grace CPU to a robust Blackwell GPU through NVIDIA® NVLink®-C2C technology. All told, it delivers up to 1,000 AI TOPS of processing power, with 128GB of coherent unified system memory that allows the system to handle AI models of up to 200 billion parameters. Need the Ascent GX10 to handle even larger models, such as Llama 3.1 with its 405 billion parameters? Integrated NVIDIA® ConnectX®-7 network technology lets you harness the AI performance of two Ascent GX10 systems working together.

Part of a complete AI solution set

ASUS stands out from every other manufacturer on the market with the breadth of AI products that we’re able to offer. The ExpertCenter Pro ET900N G3 and Ascent GX10 slot into a complete lineup that meets the needs of AI enthusiasts at every level. For those looking to build their own AI PC out of consumer-grade components, for those who need AI performance built into their everyday laptop, for enterprises that need a single-rack AI server solution, and even for institutions looking to design, deploy, and operate a data center for AI applications, the ASUS product portfolio is ready. Yet the ExpertCenter Pro ET900N G3 and Ascent GX10 are far more than mere additions to our AI product stack. The jump from AI PC to AI supercomputer is nothing less than revolutionary, and these systems give you this level of performance in a complete turnkey solution that fits on a desktop. Aspects of these systems are still in development, but we’ll share more details as soon as we’re able. In the meantime, explore how ASUS can help your organization seize the capabilities of AI.


Avoiding costly ERP and cloud systems implementation failures

At the same time, the shift from bespoke applications (purpose-built to meet a company’s own business processes and requirements) to accepting mass-market solutions has put more pressure on solution due diligence. Said differently, the onus is now on the purchaser of cloud services to confirm not only that the solution they are buying meets their requirements, but also that the documentation of that solution, as referenced in the contract, matches what has been sold and promised. While the former often occurs at a surface level during the sales process, the latter occurs less often, because the time available for that due diligence is compressed by deal timelines.

The result is that, in many deals, this due diligence is not occurring, or does not occur until after the customer has signed a contract for a non-cancellable, non-refundable, multi-year financial commitment with very limited exit rights, and absolutely no exit right for fit-gap issues that should have been identified in pre-contracting due diligence and solution validation. I should point out here that, in some deals, this is a “known known,” inasmuch as the client fully acknowledges that solution due diligence will not be done until after subscription signing. Those clients are taking a calculated risk, betting that what is good enough for the mass market will be good enough for their company and that their business processes can be realigned with the capabilities of the mass-market solution. Those deals are not the subject of this article.

As for the remainder, any good sourcing and legal team will ask the business and technical leads early in the deal process whether they have validated the documentation for the services to confirm that it aligns with expectations for performance, capabilities, and so on. Often, the answer is a somewhat sheepish yes. If there is no implementation partner or independent consultant engaged to assist with that process, we can be fairly certain that solution validation is occurring only on a limited basis, if at all. Why?


AI interoperability challenges persist, even after new protocols

This platform approach, done correctly, can anticipate many of the trust, risk, governance, and other potential problems related to AI interoperability, he says. “By doing the platform-based deployment, you bake in your responsible AI principles,” he adds. “We support the idea of having a well-thought-out and responsible AI process that supports agent AI integrations and, in terms of operability, cuts across applications, but is governed through the platform.”

Senan also advises CIOs to consider agents that can handle several tasks across multiple applications, instead of stringing together several agents from different vendors to assist an employee. For example, a business analyst in the oil and gas industry may work with a single agent to summarize industry reports from PDFs, process data from the company’s SAP system, and interact with Microsoft’s Office suite, instead of using three agents.
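The single-agent, multi-tool pattern can be sketched in a few lines. The following is a framework-agnostic illustration, with all tool names and stub implementations hypothetical: one governed entry point dispatches across tools that span several applications, rather than chaining separate vendor agents.

```typescript
// Hypothetical single-agent, multi-tool sketch: one agent exposes tools that
// span several applications, instead of chaining separate vendor agents.
interface Tool {
  name: string;
  description: string;
  run(input: string): Promise<string>;
}

// Stub tools standing in for real integrations; PDF parsing, SAP queries,
// and Office automation would replace these bodies in practice.
const tools: Tool[] = [
  { name: "summarize_report", description: "Summarize an industry report PDF",
    run: async (path) => `Summary of ${path}` },
  { name: "query_sap", description: "Fetch records from the SAP system",
    run: async (query) => `SAP rows for: ${query}` },
  { name: "draft_document", description: "Draft an Office document from notes",
    run: async (notes) => `Draft based on: ${notes}` },
];

// One governed entry point: the platform applies policy, logging, and access
// control here once, rather than once per vendor agent.
async function runAgent(toolName: string, input: string): Promise<string> {
  const tool = tools.find((t) => t.name === toolName);
  if (!tool) throw new Error(`No such tool: ${toolName}`);
  console.log(`[audit] ${toolName}(${input})`); // governance hook
  return tool.run(input);
}

runAgent("query_sap", "Q3 well production data").then(console.log);
```

Because every invocation flows through the same entry point, responsible-AI checks live in one place, which is the operational payoff of the platform approach described above.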


CIO50 Australia Team of the Year 2025 finalists unveiled

The finalists for the Team of the Year categories in this year’s CIO50 Australia have been announced. Part of Foundry’s global CIO awards program, the awards recognise IT teams that exemplify excellence in specific areas.

These categories include the Culture & Inclusion award, acknowledging teams that foster a positive and innovative work culture; the Customer Value award, commending teams that prioritise delivering value to customers; the Transformation award, recognising teams successfully leading impactful business initiatives; and the Innovation in Emerging Tech award, which recognises the drive for progress through the strategic use of newer technologies.


Humanizing AI: Empowering people, not replacing them

For example, in customer service, AI agents can instantly surface relevant knowledge articles, suggest next-best actions, or triage inquiries, allowing human agents to spend more time resolving nuanced issues with empathy. In marketing, AI can generate campaign drafts or segment audiences, while humans refine messaging and creative direction. In software development, AI can write and test routine code, giving engineers more time to architect systems and solve complex problems. These redesigned workflows blend machine efficiency with human judgment, leading to better outcomes across the board.

To make this possible, leaders should consider creating a culture where AI is seen as a productivity partner, not a threat. That starts with transparency and trust. Employees need to understand how AI decisions are made, what agents are doing, and how their own roles are evolving. But trust alone is not enough. Organizational readiness is often the limiting factor. According to a recent McKinsey survey, only 1% of organizations rate their generative AI initiatives as mature. Many remain stuck in pilot purgatory, where AI shows promise but fails to scale.
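The division of labor described above is often implemented as a confidence-gated, human-in-the-loop flow. Here is a minimal sketch, where the threshold and all names are illustrative rather than any specific product’s behavior: the AI drafts a response, and a confidence score decides whether it ships directly or routes to a person.

```typescript
// Hypothetical human-in-the-loop routing: an AI-drafted reply ships only if
// its confidence clears a threshold; otherwise a human reviews it first.
interface Draft {
  text: string;
  confidence: number; // 0..1, from the model or a separate scoring step
}

const REVIEW_THRESHOLD = 0.85; // illustrative; tune against audit outcomes

function route(draft: Draft): { action: "send" | "human_review"; text: string } {
  if (draft.confidence >= REVIEW_THRESHOLD) {
    return { action: "send", text: draft.text };
  }
  // Low-confidence output queues for a person, keeping judgment human.
  return { action: "human_review", text: draft.text };
}

console.log(route({ text: "Your refund was processed today.", confidence: 0.92 }));
// -> { action: "send", ... }
console.log(route({ text: "A policy exception may apply here.", confidence: 0.41 }));
// -> { action: "human_review", ... }
```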
