
Securing data in the AI era

As businesses increasingly rely on cloud-driven platforms and AI-powered tools to accelerate digital transformation, the stakes for safeguarding sensitive enterprise data have reached unprecedented levels. The Zscaler ThreatLabz 2025 Data@Risk Report reveals how evolving technology landscapes are amplifying vulnerabilities, highlighting the critical need for a proactive and unified approach to data protection.

Drawing on insights from more than 1.2 billion blocked transactions recorded by the Zscaler Zero Trust Exchange between February and December 2024, this year's report paints a clear picture of the data security challenges that enterprises face. From the rise of data leakage through generative AI tools to the undiminished risks stemming from email, SaaS applications, and file-sharing services, the findings are both eye-opening and urgent.

The 2025 Data@Risk Report sheds light on the multifaceted data security risks enterprises face in today's digitally enabled world. Some of the most noteworthy trends include:

AI apps are a major data loss vector: AI tools like ChatGPT and Microsoft Copilot contributed to millions of data loss incidents in 2024, many involving social security numbers.

SaaS data loss is surging: Spanning 3,000+ SaaS apps, enterprises saw more than 872 million data loss violations.

Email remains a leading source of data loss: Nearly 104 million transactions leaked billions of instances of sensitive data.

File-sharing data loss spikes: Among the most popular file-sharing apps, 212 million transactions saw data loss incidents.

AI applications: A new data loss hotspot

Generative AI tools such as ChatGPT and Microsoft Copilot are revolutionizing how enterprises work—but not without consequences. These platforms accounted for 4.2 million data loss violations, revealing how personal identifiers, intellectual property, and financial data are routinely at risk.

SaaS ecosystems: Simplifying workflows, complicating security

More than 872 million data loss incidents were flagged across SaaS platforms. Popular applications such as Microsoft 365, Salesforce, and Google Workspace, which have the largest share of violations, highlight the tension between collaboration and compliance.

Email: A legacy risk with perennial consequences

Despite newer tools and platforms, email remains at the forefront of data loss. Microsoft Exchange and Gmail collectively saw 104 million transactions containing billions of data loss incidents. The most common leaks included medical data, social security numbers, and source code.

File-sharing platforms: Productivity with a heaping side of risk

File-sharing giants like Google Drive, Microsoft OneDrive, and Dropbox logged 212 million transactions that involved data loss. Sensitive information—ranging from proprietary source code to financial records—flowed unchecked in billions of individual violations across these transactions.

While the report reveals massive volumes of data loss across the most popular applications, it also provides a roadmap for organizations to act decisively before data leaks or exfiltration happen. By adopting a unified, AI-driven approach to data security, businesses can turn these risks into opportunities and secure data across every channel, wherever it resides.
Best practice recommendations from the 2025 Data@Risk Report include:

Use AI to discover and classify your data: Implement a Zero Trust Architecture (ZTA), enable advanced data loss prevention (DLP) policies across endpoints and networks, and leverage AI-powered platforms to identify risks in real time.

Understand your data loss channels: Map out all the channels through which data flows within and outside your organization—email, SaaS apps, AI tools (e.g., Microsoft Copilot), BYOD, cloud storage, and physical storage devices. Each channel presents unique risks and requires tailored security controls.

Lean on your Zero Trust Architecture: Transition from a perimeter-based security model to a ZTA that enforces least-privileged access. Use identity-based access control, granular policies, and Secure Access Service Edge (SASE) to inspect all internet traffic, segment networks, and minimize your organization's attack surface.

Secure GenAI and AI tools with granular controls: For generative AI tools like ChatGPT and Microsoft Copilot, enforce granular controls on user sessions, such as input or output restrictions. Block unsafe prompts that might expose sensitive data during user interactions. Additionally, monitor anomalies in user behavior (e.g., excessive queries) and flag or block activities that violate data security policies.

By taking these steps, enterprises can safeguard their data while enabling productivity and innovation to thrive.

As enterprise AI transforms workflows and accelerates innovation, the challenges of managing and securing data grow in parallel. From sensitive prompts leaked in generative AI tools to data loss across SaaS platforms, email, and endpoints, Zscaler offers best-in-class tools to secure data in this rapidly evolving landscape, providing visibility, control, and Zero Trust protection for enterprise applications worldwide. This allows enterprises to:

Find sensitive data across endpoints, inline, and cloud with AI-powered auto data discovery and classification.

Protect data in motion with full TLS/SSL inspection and inline DLP for web, email, BYOD, and GenAI apps.

Secure data at rest in clouds and on endpoints with unified policy, sharing controls, and device posture.

Simplify operations with unified end-to-end incident response using a single, integrated console with Workflow Automation.

Protecting enterprise AI apps from data loss

Zscaler also delivers a full suite of best-in-class products to secure generative AI tools like ChatGPT and Microsoft Copilot.

AI app visibility: As employees rapidly adopt AI tools like ChatGPT and Microsoft Copilot, Zscaler ensures enterprises never lose visibility over sensitive inputs or outputs.

Smart input prompt blocking: Zscaler uses AI/ML-driven URL filtering and policy enforcement to categorize AI app activity and automatically block unsafe or unapproved input prompts.

Deep visibility into AI workflows: Innovative categorization of user prompts lets security teams track, analyze, and make educated decisions about AI application security. For instance, Zscaler policies can monitor for sensitive user data (e.g., social security numbers) in real time and block prompts related to intellectual property leakage.

Secure collaboration via isolation: Prevent accidental data transfers in AI applications, without stifling productivity:

Browser isolation for AI tools: Zscaler's Browser Isolation technology allows employees to interact with AI tools securely by rendering applications in an isolated virtual browser. Clipboard usage, file uploads, and downloads can be restricted while still enabling prompts. This prevents accidental data exfiltration when employees interact with generative AI apps, such as ChatGPT or OpenAI-powered interfaces.

Safe pixel rendering: By rendering applications as "pixels," Zscaler ensures sensitive information never physically
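The inline prompt inspection described above boils down to checking each GenAI prompt against data loss rules before it leaves the enterprise. Here is a minimal sketch of that idea, using a simple regex-based check for social security numbers; this is an illustrative stand-in, not Zscaler's actual detection engine or API.

```python
import re

# Illustrative DLP patterns; a real engine would use ML classifiers,
# exact data matching, and far more identifiers than these two.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inspect_prompt(prompt: str) -> dict:
    """Return a verdict for a GenAI prompt before it is sent upstream."""
    hits = [name for name, pattern in DLP_PATTERNS.items() if pattern.search(prompt)]
    return {"allowed": not hits, "violations": hits}

if __name__ == "__main__":
    verdict = inspect_prompt("Summarize the claim for SSN 123-45-6789")
    if not verdict["allowed"]:
        # In a real deployment this would block the request and log an incident.
        print("Prompt blocked, violations:", verdict["violations"])
```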


Performance, compliance, and control: The on-premises advantage for AI workloads

The cloud has many well-known benefits, most notably limitless on-demand scalability and high reliability, both of which are ideal capabilities for hosting AI workloads. However, according to a recent Business Application Research Center (BARC) report, only 33% of AI workloads are hosted in public clouds. On-premises and hybrid environments almost evenly split the remainder, with on premises having the slimmest of edges (34%).[1]

Certainly, the cloud can be the right choice for some AI workloads. If the enterprise needs to serve users in disparate locations with low latency, the public cloud's global infrastructure could serve that use case well. Many IT professionals also prefer using hyperscalers' pre-built AI services and large language models because they eliminate the complexity of model deployment, scaling, and maintenance.

But as many in IT have discovered, there are also many good reasons for keeping AI workloads on premises. For starters, AI workloads are notoriously resource intensive. If a model takes longer than expected or requires multiple iterations to train, cloud-based graphics processing unit pricing, which can run over $100 per hour, can rapidly rack up massive overruns. Likewise, if there is a need to transfer large data sets from the cloud, egress fees can further increase costs, and the time required to move data can extend project timelines.

Also, given that AI models require intense compute resources, low network latency can be critical to achieve real-time inference, and shared cloud resources may not provide the level of consistent performance required. Finally, many AI applications handle sensitive information, such as trade secrets or personally identifiable information that falls under strict regulations governing the data's use, security, and location. Ensuring the required level of compliance and security may be difficult in a public cloud, due to the lack of control over the underlying infrastructure.

"Market dynamics are increasing buyer interest in on-premises solutions," says Sumeet Arora, Teradata's chief product officer.

Of course, building out an AI-ready infrastructure on premises is no simple task, either. An on-premises solution gives IT complete control over compliance and security, but these tasks remain challenging, especially when doing custom integrations with multiple tools. Additionally, on-premises solutions need to maintain a complex infrastructure, with the power, speed, and flexibility to support the high demands of AI workloads.

Luckily, the market has matured to the point where tightly integrated, ready-to-run AI stacks are now available, which eliminates complexity while enabling compliance, security, and high performance. A good example of just such a pre-integrated stack is Teradata's AI Factory, which expands Teradata's AI capabilities from the cloud to make them available on premises.

"Teradata remains the clear leader in this environment, with proven foundations in what makes AI meaningful and trustworthy: top-notch speed, predictable cost, and integration with the golden data record," Arora continues. "Teradata AI Factory builds on these strengths in a single solution for organizations using on-prem infrastructure to gain control, meet sovereignty needs, and accelerate AI ROI."

This solution provides seamless integration of hardware and software, removing the need for custom setups and integrations. And, because it's all pre-integrated, users won't have to gain multiple layers of approval for different tool sets.
As a result, organizations can scale AI initiatives faster and reduce operational complexity. Many practitioners prefer on-premises solutions to build native retrieval-augmented generation (RAG) use cases and pipelines. Teradata AI Microservices with NVIDIA delivers native RAG capabilities for ingestion and retrieval, integrating embedding, reranking, and guardrails. Users can query in natural language across all data, delivering faster, more intelligent insights at scale. This comprehensive solution enables scalable and secure AI execution within the enterprise's own datacenter.

While the cloud provides scalability, global access, and on-demand infrastructure for AI workloads, many organizations may prefer on-premises solutions for better cost control, security compliance, and performance consistency. Integrated AI stacks can make on-premises deployment a much simpler task while accelerating time to value.

Learn more about how Teradata's AI Factory can help your organization with on-premises deployment.

[1] Petrie, K., Cloud, On Prem, Hybrid, Oh My! Where AI Adopters Host their Projects and Why, Datalere, April 3, 2025.
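To make the RAG pattern referenced above concrete, here is a minimal, generic sketch of an on-prem ingestion-and-retrieval loop. It is not Teradata's or NVIDIA's API; the embedding function, vector index, and LLM call are hypothetical placeholders that show where embedding, retrieval, reranking, and generation fit.

```python
import numpy as np

# Hypothetical components: swap in your on-prem embedding model,
# vector store, reranker, and LLM endpoint.
def embed(texts):
    return np.random.rand(len(texts), 384)   # placeholder embeddings

class VectorIndex:
    def __init__(self):
        self.vectors, self.docs = [], []
    def add(self, docs):
        self.docs.extend(docs)
        self.vectors = embed(self.docs)
    def search(self, query, k=3):
        q = embed([query])[0]
        scores = self.vectors @ q              # dot-product similarity (placeholder)
        top = np.argsort(scores)[::-1][:k]
        return [self.docs[i] for i in top]

def call_local_llm(prompt):
    # Placeholder for an on-prem LLM inference endpoint.
    return f"[LLM response grounded in retrieved context for: {prompt[:40]}...]"

def answer(query, index):
    context = index.search(query, k=3)         # retrieval (a reranker would refine this)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_local_llm(prompt)

index = VectorIndex()
index.add(["Service manual excerpt A", "Known issue KB-102", "Warranty policy text"])
print(answer("How do I resolve issue KB-102?", index))
```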


Global AIDS Healthcare Foundation gains financial transparency to save lives

Breakthroughs in HIV/AIDS treatments have enabled most HIV-infected patients with good healthcare to live full lives. This leads many people who live in developed countries to assume that AIDS is under control. Not so. HIV/AIDS is decimating marginalized populations in countries where awareness and healthcare are scarce. According to UNAIDS, in 2023, an estimated 1.3 million people contracted HIV, and 630,000 died from acquired immunodeficiency syndrome (AIDS) as a result of an HIV infection.

Fighting the good fight with the right stuff

Always on guard, the AIDS Healthcare Foundation (AHF) advocates for AIDS patients and distributes life-saving HIV medicines that prevent transmission. AHF has played a leading role in HIV/AIDS healthcare since it opened a hospice in 1987 to "fight for the living and care for the dying." Today, the foundation serves over 2.2 million people in 47 countries worldwide by providing the HIV medicines that save lives.

But that's not all. Over the years, AHF has learned that living situations affect access to healthcare and, therefore, controlling AIDS requires a holistic approach, from developing low-income housing to offering pharmaceutical services. AHF's mission is to prioritize people over profit through its "AHF Circle of Care." To do so, AHF offers a range of nonprofit services—medical care, pharmaceutical, and other support programs—all designed to accommodate an HIV patient's journey, regardless of their financial status. AHF generates revenue primarily by billing insurance providers for HIV patient medications. And the more efficiently its finance system runs, the more patients AHF can reach.

Accurate data is essential for an effective and equitable global HIV response

AHF co-founder and president Michael Weinstein is wary of inaccurate statistics that can create a false sense of progress, weakening political will, funding, and momentum for HIV/AIDS prevention. The foundation relies on accurate data on the spread of HIV to inform financial planning that supports global services from pharmacy distribution to healthcare centers. In order to leverage one set of data across 47 countries, AHF decided to move to the cloud. According to Lyle Honig Mojica, CFO, "AHF operates in 47 countries, managing over 250,000 monthly transactions and thousands of bills daily. Having full visibility into our sales cycle and related documentation within one, cloud-based system is crucial."

To reflect the realities of all populations—especially marginalized groups—and ensure an equitable HIV/AIDS response, AHF's cloud-based finance system supports real-time data sharing and analysis. It also helps AHF gauge the effectiveness of advocacy campaigns designed to generate awareness.

Data says: provocative campaigns spark controversy and awareness

Now AHF runs data-driven advocacy campaigns to get the word out. For example, a provocative billboard campaign in Uganda, looking to promote sexual health awareness, asked: "Is your spouse cheating?" As anticipated, this sparked controversy as well as a community discussion that significantly raised public awareness. AHF repurposes successful campaigns like this for use in other regions by tailoring the messaging to unique cultural and local sensitivities. By analyzing real-time KPI data from ongoing awareness campaigns—such as an increase in new patients—AHF can ensure future campaigns are both effective and appropriate.

47 countries share financial transparency from 1 integrated system

47:1 – AHF met its overall objective for a cloud-based finance system that would align operations across regional bureaus worldwide with updated financial management and associated data workflows. AHF's technology partner, Nagarro, played a crucial role in ensuring AHF fully leveraged its new cloud-based ERP solution.

"As SAP digital architects, Nagarro helped AHF understand the benefits of redesigned business processes provided by SAP S/4HANA Cloud Public Edition before implementation, guiding us to avoid replicating old processes," explains Mojica. "They encouraged us to adopt SAP best practices, even when it meant challenging familiar methods."

The new system has transformed patient healthcare and advocacy services. Data processing is now automated with centralized records, and core finance accounting processes payments in minutes, with real-time data access and cost-source transparency into high volumes of pharmaceutical sales transactions.

250K average monthly prescriptions transacted with transparency

As a result, AHF can manage the financial data of more than 250,000 monthly transactions across 47 countries with transparency. By integrating all business lines, AHF enhances workflows, gaining real-time oversight of processed payments with access to financial data—covering sales, costs, and expenses.

"Advocating patient care with real-time data, rather than wasting time looking for vital information, empowers our administrators and operational staff to make quicker decisions," says Mojica.

To provide transparency into financial transactions, the cloud-based system streamlines data encompassing 12 subsystems, 23 domestic company codes, 65 global entities in group reporting, 938 profit centers, and 166 plants. This enables daily insights into accurate tracking despite cost fluctuations and high, global transaction volumes.

Tossing a lifeline to more than 2 million people around the world

With HIV and AIDS-related infections continuing to grow at an alarming rate, optimizing processes globally ensures more patients receive critical care sooner. For example, health information processing varies by location, need, application, and settings; therefore, sharing patient financial data between systems streamlines AHF administration. With enhanced data sharing, AHF operates more efficiently worldwide to double down on its mission to serve the most vulnerable people by promoting comprehensive HIV/AIDS care worldwide.

For their extraordinary accomplishments, AHF was selected as an SAP Innovation Award 2025 winner in the Cloud ERP Champion category. Learn more about the cloud-based deployment from the AHF pitch deck. Watch their World AIDS Day 2024 Global Recap video to see awareness in action.


Your team is losing a workday every week—here's how AI can win it back

A recent survey finds that cross-functional corporate teams waste a full day out of every work week trying to find the information they need, while three-quarters of workers suffer from poor communication that affects the speed and quality of their work.

The impact of these problems is alarming, according to the survey by Atlassian of 200 Fortune 1000 executives and 12,000 knowledge workers. A mere 7% of executives express confidence that they know exactly how the work their company's teams are doing supports the organization's biggest goals. Nearly three-quarters (74%) say poor communication slows both speed and quality. At the same time, 89% of the executives say their organization must move more rapidly than ever to keep up with the competition.

Many issues contribute to the problem, with communication being chief among them. Team members come from different corporate lines of business distributed across geographies and time zones, making coordination difficult. Siloed groups may use different tools to accomplish similar tasks, which can make it difficult to find and share information effectively. Real-time meetings chew up time that could be better spent pursuing goals.

The role of AI in effective collaboration

A practical remedy is a platform that serves as a common knowledge base and collaboration workspace, with a single set of integrated tools to improve access to information. Such a platform consolidates data, helps employees plan work, sets and tracks goals, integrates existing tools, and — importantly — embraces artificial intelligence agents as teammates.

AI agents acting as virtual team members can enhance data by finding relevant insights within it and summarizing information to make it more digestible. They can answer questions via search and chat and make recommendations based on each employee's specific role. The agents can also save time by tackling simple organizational tasks that would otherwise fall to human team members.

"People who strategically collaborate with AI – meaning they're building it across their different workflows – are already saving nearly a full workday every single week," says Molly Sands, head of Atlassian's Teamwork Lab. That is no stretch given the report's finding that executives and knowledge workers alike spend 25% of their work week just searching for the information they need.

Elements of an effective collaboration platform

A consolidated work platform can also mean companies need fewer discrete tools, thereby lowering costs. It should help employees manage tasks while keeping their focus on important goals, including a view that shows which tasks have been assigned to whom and the due dates for completion. AI agents can even assign tasks, generate Jira tickets, and send reminders to keep team members on track (a sketch of what that might look like follows below). The result is better workflows and timely delivery of results.

Versatile communications solutions can save time and keep everyone in the loop while reducing the number of meetings. Team leaders can record videos outlining goals and setting priorities, then leave them for the rest of the team to view during work hours in whatever time zone they reside. When meetings become necessary, AI can take meeting notes and highlight action items.

The bottom line: a collaboration package with tight integration of essential tools, enhanced by AI, provides corporate teams with an effective environment that can reclaim wasted time. And that's a goal worth pursuing.
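As one concrete illustration of the "agents generate Jira tickets" idea above, the sketch below files an issue through Jira Cloud's public REST API. The endpoint and field names follow the documented create-issue call, but the site URL, project key, credentials, and the idea of wiring this to an agent's output are hypothetical placeholders.

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical values: substitute your own Jira Cloud site, project, and API token.
JIRA_SITE = "https://your-domain.atlassian.net"
PROJECT_KEY = "TEAM"
AUTH = HTTPBasicAuth("agent-bot@example.com", "<api-token>")

def create_ticket(summary: str, description: str) -> str:
    """Create a Jira task, e.g., an action item an AI agent pulled from meeting notes."""
    payload = {
        "fields": {
            "project": {"key": PROJECT_KEY},
            "summary": summary,
            "description": description,          # plain text is accepted by the v2 API
            "issuetype": {"name": "Task"},
        }
    }
    resp = requests.post(f"{JIRA_SITE}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]                     # e.g., "TEAM-123"

if __name__ == "__main__":
    key = create_ticket(
        "Follow up on Q3 roadmap questions",
        "Action item captured automatically from the weekly planning recording.",
    )
    print("Created", key)
```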
Learn more about how Atlassian's Teamwork Collection combines the power of tools including Jira, Loom, and Confluence with AI agents that help keep your teams focused and productive. Visit the Teamwork Collection site.


AdventHealth turns on ambient voice technology to tame physician burnout

“We approached the rollout in phases, beginning with the clinical areas that required the most documentation,” says Dr. Razzouk. “We listened closely to feedback from initial users, offered training and support, and grew a physician champion network to encourage adoption.”

How ambient voice technology works during patient visits

Ambient voice technology is designed to operate in the background, capturing conversations during clinical visits, transcribing them, and compiling detailed notes. In practice, this means physicians open the app as they enter the exam room, either through a dedicated device or smartphone. The system passively listens to the conversation using advanced AI algorithms to analyze the audio, transcribe the conversation, and identify relevant information like diagnoses, medications, and treatment options. After the visit, the system auto-generates notes and automatically populates them into AdventHealth’s EHR system, Epic, for review and approval.
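The workflow described above is essentially a capture, transcribe, extract, and draft pipeline. Below is a deliberately simplified sketch of that shape; the transcription, entity extraction, and the note structure are hypothetical placeholders, and real systems hand drafts to the EHR through governed interfaces such as HL7/FHIR, which are not shown here.

```python
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    transcript: str
    diagnoses: list = field(default_factory=list)
    medications: list = field(default_factory=list)
    status: str = "pending_physician_review"   # nothing is filed without review and approval

def transcribe(audio_chunks):
    # Placeholder for a speech-to-text model running on the capture device or backend.
    return " ".join(audio_chunks)

def extract_clinical_entities(transcript):
    # Placeholder for a clinical NLP step; a real system would use trained models,
    # coded terminologies, and confidence scores rather than keyword matching.
    diagnoses = ["hypertension"] if "blood pressure" in transcript.lower() else []
    medications = ["lisinopril"] if "lisinopril" in transcript.lower() else []
    return diagnoses, medications

def build_draft_note(audio_chunks):
    transcript = transcribe(audio_chunks)
    diagnoses, medications = extract_clinical_entities(transcript)
    return DraftNote(transcript, diagnoses, medications)

# The draft would then be queued in the EHR for the physician to review and approve.
note = build_draft_note(["Patient reports elevated blood pressure.",
                         "Continue lisinopril 10 mg daily."])
print(note)
```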


Data sovereignty and AI: Why you need distributed infrastructure

The volume of data that enterprises need to manage continues to grow exponentially. At the same time, regulations around data locality, residency, and sovereignty continue to multiply across jurisdictions worldwide. Companies must be vigilant about keeping up with rapidly evolving national and regional policies around who can access specific data; how it’s collected, processed, and stored; and where it’s accessed from or transferred to. It’s getting more complicated every day.

Effective data governance is essential to ensuring AI transparency and compliance with emerging regulations. This means considering what’s required to access the data and knowing exactly which path the data will follow to its destination. Before businesses can establish data governance policies, they need to understand local laws and how those laws impact where they can generate, collect, and store data. In McKinsey’s findings from their State of AI survey, 70% of respondents said they’ve experienced difficulties with data, including defining processes for data governance.[1]

Further complicating data management is the massive amount of data that companies are sourcing to train their AI models. Not only do they need to ensure their data doesn’t get used by the wrong AI models, but they must also ensure their models use the right data in the right places. In order to meet global data sovereignty laws and regulations, companies must also carefully consider where they’ll store their AI data.

The good news? Distributed infrastructure and a future-proof AI data strategy can help companies navigate and manage the complexities of data sovereignty in an AI-driven world.

Understanding data sovereignty

Data sovereignty refers to making data collected or stored in a specific locality, country, or region subject to the governing entity’s laws and regulations. Many jurisdictions have created and are enforcing rules around how data is accessed, stored, processed, and moved within their borders. Data stored within specific borders is governed by that jurisdiction’s legal framework, regardless of the company’s headquarters location or ownership. For instance, a company based in California that gathers data from individuals or businesses in multiple countries must follow each country’s data sovereignty and localization laws, even though the company is in the U.S.

Some laws set conditions around cross-border transfers, while others prohibit them altogether. For instance, in some jurisdictions, companies need to demonstrate a legal requirement to move the data, retain a local copy of the data for compliance reasons, or both. Other regulations govern whether companies can access data stored in a region, generate insights, and then export those insights to headquarters for further analysis or model training.

Subsets of data sovereignty, such as data localization and residency, relate to laws and regulations that govern aspects of data management. Data residency refers to the physical (geographic) location where a business stores its data. Businesses can select a specific region for regulatory compliance, security, or performance optimization. However, many industries, including finance, healthcare, and government, may be required to store data in specific jurisdictions to comply with local laws. It’s important to note that storing data in a particular country does not necessarily mean it’s governed only by that country’s laws.
Companies may still be subject to foreign legal obligations based on their country of incorporation or contractual agreements. Further, a governing entity can enforce strict data security, access control, and localization requirements, which could include controlling access to data by users or companies based outside its borders. Some laws also grant government agencies access to data without the owner’s consent.

Making data sovereignty compliance an essential part of their AI strategies can help companies incorporate and prioritize continuous monitoring for new or changing laws.

How data sovereignty influences AI infrastructure decisions

Companies must adapt their data management practices for compliance and ensure they have the right AI infrastructure in the right locations. Understanding your entire data estate–what data you own, where it came from, and how it’s structured–can reveal the privacy or regulatory risk associated with that data.

Then there’s the matter of where to store data. While choosing a public cloud provider may seem convenient, it often means relinquishing some level of control, such as knowing exactly where the data is stored. Importantly, companies can’t rely on cloud providers to enforce data sovereignty requirements on their behalf. Knowing the exact geographic location of the infrastructure in question is crucial to ensure it aligns with relevant data sovereignty rules.

Expanding from a single cloud provider or incorporating private infrastructure may make sense to avoid vendor lock-in and data-related costs. Consider what would happen if your cloud provider needed to failover from a cloud in London to another in Amsterdam. Would the network path go directly from the U.K. to the Netherlands, or would it traverse through other countries, introducing additional data sovereignty regulations? If the data you’re transmitting is highly regulated, then it would be especially important to have visibility into the underlying infrastructure. You can typically only get that level of visibility if you own the infrastructure.

While much of the responsibility for complying with data sovereignty regulations falls to the company that owns the data, cloud service providers and storage solution vendors can help. They can be transparent by providing details about where specific data is stored and disclosing how they manage data transfer paths in the case of cloud failovers or other outages.

To enable interconnected, distributed AI infrastructure, it’s essential to establish secure connectivity with the ability to rapidly connect to (or disconnect from) many different services and locations and respond to any changes or additions to the regulatory landscape. Doing so allows companies to access data quickly, transfer data securely, and exchange data seamlessly with ecosystem participants.

It’s crucial to have complete transparency into what your distributed infrastructure looks like and how it’s all connected. You need to be able to attest to how your data is being handled all the way through, from collection to storage to processing to transfer. Understanding and documenting this across the entire value chain will set you up for maximum compliance with data sovereignty regulations. You can’t afford not


An innovation rocketship within reach

Businesses, small and large, have many questions about adopting generative AI, from where to start to more specific concerns about power and cooling. Dell Technologies Vice Chairman and COO Jeff Clarke wants any company considering its path into AI and GenAI to know that the way forward may be clearer than expected.

The AI space has advanced by leaps and bounds in the last year, and it's easy to see how some businesses could fear being left behind. But like Chairman and CEO Michael Dell, who took to the stage at Dell Technologies World in May, Jeff is optimistic about the trends shaping AI adoption. The capabilities AI brings to users are within reach.

"It's a pretty reasonable discipline that everyone in this room is capable of, or has probably already done, inside their organizations," Jeff told an audience of thousands during his keynote at Dell Technologies World in Las Vegas in May. And they can take comfort in the fact that Dell's own GenAI journey is no different than theirs.

In customer meetings, Jeff hears some familiar questions: Where do I start? Where is our data, and is it AI-ready? How do I choose a use case? What's the ROI? Can I afford AI? Do I have the space, power, and cooling? Is this just another IT rabbit hole?

Dell was no different when it began its AI journey two years ago, Jeff said. After getting over the shock of realizing Dell had more than 900 GenAI projects in process with data governance that was not as tight as it should've been – and a lack of strategy and architecture to support it – the company got to work quickly. With the right data, governance, and architecture in place, and a chief AI officer, the company could roll out the strategy and operating framework that became an early form of the Dell AI Factory.

To illustrate the point, Jeff highlighted Dell's effort to extract value from a handful of services organization datasets at one time. This massive quantity of data was spread across many on-prem tools. AI techniques like machine learning and deep learning, along with GenAI combining RAG, prompt engineering, and LLMs, allowed the company to build a support assistant and make the most of that disparate service data, increasing productivity and customer satisfaction.

"This didn't require the latest models, the latest GPUs, nor did it require significant resources," Jeff said. "This GenAI use case is deployed on-prem where the data resides and is protected and secured by our data protection products. We moved GenAI to the data. This data is stored on Dell ECS storage in our standard data center… in a single rack, no additional power, no additional cooling was needed, and the ROI was less than three months."

This support assistant example is one of six primary use cases that have emerged to help businesses capitalize on GenAI. The others are content creation and management, natural language search, design and data creation, code generation, and document automation. Every business, small and large, has those needs, Jeff said. "It's time to get busy. This is the most disruptive technology I've seen in my career. Speed matters. Your competition is moving fast."

CoreWeave Co-Founder and Chief Strategy Officer Brian Venturo joined Jeff on stage for a discussion about what Jeff called "Big AI." CoreWeave's hyperscale capabilities are powered by Dell Technologies, and Jeff took the opportunity to ask Brian about the future of AI.

"There's not going to be a killer app," Brian said. "There's going to be a subtle impact to people's lives. It's not going to be that one application you're interacting with throughout the day. It's going to be nuanced. The market and the world has to understand that we have to invest in the infrastructure."

After sharing his perspective on the coming of agentic AI, Jeff introduced Arthur Lewis, president of Dell's Infrastructure Solutions Group, who said Dell's goal is to help customers turn data into intelligence and complexity into clarity. Arthur introduced the Dell AI Data Platform, including Project Lightning, which is designed for agentic AI workloads. He also announced a partnership with Google to bring Gemini models on-prem exclusively for Dell customers, and a partnership with Cohere to simplify the deployment of agentic AI technology on-prem.

Arthur welcomed Cohere Co-founder and CEO Aidan Gomez to talk about the integration of the Cohere North agentic AI platform with Dell's infrastructure. The Cohere North platform makes it easy for teams to deploy AI agents securely and with strict control over data. Arthur discussed new PowerStore and PowerProtect Data Domain products, as well as Dell's PowerFlex 5.0 and the Dell Private Cloud, which is designed to make deploying and managing private cloud environments easy through validated designs for industry-leading cloud systems.

Sam Burd, president of Dell's Client Solutions Group, also took the stage to encourage customers to imagine what they could do with AI models that put big goals within reach. AI PCs, Sam said, are the tools that make those models work by bringing AI power to customers' data at the edge. Investing in AI PCs now is critical for future success, Sam said. "The first step on this incredible journey is investing in PCs to future-proof your journey," Sam said.

Sam welcomed Rob Johnson, USAA assistant vice president of information security, to discuss how the financial services company is using AI now and how it plans to use it in the future. Speaking of the future, Sam teased the Dell Pro Max Workstation, which will launch later this year with a Qualcomm AI100 PC inference card. Sam said the system is the industry's first enterprise workstation with an enterprise-grade discrete NPU capable of running inference on a 109 billion parameter model.

"The question is no longer whether AI belongs on the endpoint," Sam said. "Instead, it is what will you do with server-grade compute power folded into your laptop. That choice is the new frontier."

Jeff boiled the keynote down into a few closing words: "AI has taken off like a rocket


Why a robust strategy is needed to scale AI and deliver growth

The digital innovation plans of many businesses are starting to stall, with AI experiments yet to blossom into successful enterprise deployments. CIOs need to ensure they have a holistic strategy that underpins their AI plans, taking in data, infrastructure, skills and governance. The failure to do so could leave businesses trailing AI-enabled rivals.

Global investment in AI is set to more than double in 2025, according to Lenovo's CIO Playbook 2025, which is produced in association with IDC.[1] But Foundry's AI Priorities Study shows that 62% of organisations remain stuck at the pilot or researching stage.[2] The next imperative is clear: move beyond isolated pilots to build integrated AI programs that deliver sustained, repeatable value.

Many AI pilots succeed in principle but go nowhere in practice. They prove technical feasibility yet fail to connect with day-to-day operations. The causes are familiar: fragmented data environments, infrastructure that isn't built for AI-scale workloads, limited internal expertise, and unresolved questions around governance, compliance, and security. Until these systemic issues are addressed, most AI value will stay locked in pilot mode.

Four enterprise shifts that enable AI at scale

Scaling AI isn't just a matter of spend. Success depends on structural changes across systems, skills, and strategy. Here are four shifts that release powerful AI potential across the business.

Build AI-ready data foundations

Most enterprises lack the data backbone needed to scale AI. A recent MIT Technology Review Insights report found that 78% of organisations don't have robust data foundations, citing data quality, timeliness, and siloed systems as key challenges.[3] Without accessible, governed, and cleansed data, models can't scale or maintain trust. That's why leading companies are now automating data governance, locking in control and reliability as AI adoption spreads.

Modernise infrastructure for AI workloads

AI workloads demand high-performance compute for training, inference, and real-time decisions. According to the CIO Playbook 2025, 65% of organisations now run AI on hybrid or on-prem infrastructure.[4] Hybrid architectures provide the control, compliance, and agility enterprises need to scale effectively. When infrastructure meets AI's demands, execution becomes far more reliable and strategic.

Close the AI talent and skills gap

Limited in-house expertise remains one of the biggest blockers to achieving AI at scale. Forward-thinking CIOs are investing in upskilling, building cross-functional teams, and bringing in external partners where needed. But tools and talent alone are not enough. Enterprise success depends on strong coordination between IT, data science, and business teams. This alignment turns AI from a technical asset into a powerful strategic capability.

Establish enterprise-grade AI governance

Governance is now a make-or-break factor in scaling enterprise AI. Yet just 20% of enterprises have governance policies in place and are enforcing them, according to the CIO Playbook 2025.[5] These frameworks must tackle data privacy, model explainability, bias, and operational risk head-on. Strong governance not only ensures compliance, it also builds confidence and unlocks responsible and sustainable growth across the organisation.

The path forward for CIOs

AI has now moved well beyond proof-of-concept. But achieving impact at scale across the enterprise takes more than experimentation. It demands a coordinated strategy that brings together the right data, infrastructure, skills, and governance. When these foundations are in place, AI delivers more than automation. It drives measurable ROI, sharper resilience, powerful innovation and a long-term competitive edge.

But there is no need for CIOs navigating this shift to start from scratch. The CIO Playbook 2025, complete with practical frameworks and strategic guidance, offers detailed support to help leaders successfully achieve the transition from pilot to performance at scale. Read the CIO Playbook 2025 now.

[1] Lenovo, CIO Playbook 2025
[2] Foundry, AI Priorities Study
[3] Snowflake in partnership with MIT Technology Review, How solid data strategies are fueling generative AI innovation, October 2024
[4] CIO Playbook 2025
[5] Ibid.


How AI can turn category management into a powerful driver of efficiency

Optimizing procurement and analyzing spending within a business are challenging tasks that can potentially be transformed by modern technology. Traditional category management struggles with complexity, shifting market dynamics, and data silos, making it difficult for businesses to stay agile. As procurement functions seek to accelerate digital transformation, more organizations are modernizing category management with AI.

An AI-powered intelligent category management (ICM) solution addresses the challenges category managers have traditionally faced by providing real-time visibility into the supply chain and a centralized platform for data-driven decision-making. By leveraging automation and predictive analytics with an ICM solution, businesses can take the next step in their digital transformation journeys while streamlining operations, adapting to market trends faster, and optimizing category performance for increased efficiency and profitability.

What is intelligent category management?

Intelligent category management is the next step in the evolution of category management, or the process of shifting from managing individual purchases to strategically overseeing groups of related goods as unified, value-driving business areas. By embedding AI to deliver continuous, insight-driven strategies, ICM helps procurement teams make faster, more informed decisions, anticipate risks, and uncover new savings opportunities — all while adapting to real-time market dynamics.

"ICM intelligently gathers marketing intelligence, monitors category trends, performs real-time data analysis, provides strategy recommendations, and tracks execution — all within a collaborative platform," says Alex Zhong, Global Head of Product Marketing at GEP. "With upcoming agentic AI, ICM will further empower users by autonomously surfacing risks and opportunities, suggesting next-best actions, and even initiating strategic tasks to streamline execution and accelerate value capture."

Three transformative benefits of intelligent category management

Leveraging AI and data-driven insights, ICM can transform procurement by improving collaboration, increasing resilience, and enhancing decision-making.

1. Better alignment between procurement and other business functions

More than half (64%) of procurement leaders believe AI and genAI will transform their roles within five years, according to a recent study from The Hackett Group.[1] ICM will play a key role in enabling this transformation by using AI to consolidate workflows, reduce manual tasks, and surface powerful insights. With fewer repetitive tasks, procurement professionals can spend more time working with stakeholders in other departments — like finance and business units — which increases alignment between procurement and other functions. This improved cross-functional collaboration helps ensure procurement strategies are fully integrated into broader business goals, making it easier to achieve organizational objectives.

2. Stronger business strategies

By providing real-time visibility into supplier disruptions, ICM helps procurement teams determine the best strategies to accomplish business goals — whether it's cost savings, risk mitigation, sustainability, or supplier innovation. "ICM proactively flags risks — such as supplier concentration, pricing volatility, or regional exposure — and recommends mitigation strategies," Zhong explains. "With AI-driven insights and scenario planning, ICM enables procurement teams to monitor risk impacts, adjust strategies rapidly, and align with business continuity goals, making the supply chain more adaptive." (A simplified sketch of this kind of risk flagging appears below.)

3. Smarter actions, faster outcomes with AI recommendations

By automating processes and integrating data from multiple sources — including internal spend, supplier performance, and external market signals — ICM delivers real-time deep analysis that helps procurement teams act faster and more confidently, with AI-driven data enrichment and error correction to ensure accuracy, Zhong says. Decision-making support isn't static, either: ICM helps with both execution and tracking outcomes, enabling teams to take action and modify strategies as needed as new data comes in.

While ICM is already helping improve decision-making today, its value will only grow with the evolution of agentic AI. "Strategies will no longer be static documents but live, adaptive frameworks that evolve in real time based on shifting data," Zhong predicts about next-gen ICM. "Human-AI collaboration will deepen, enabling rapid iteration and faster decision cycles."

Learn more about how intelligent category management can reshape your procurement function.

[1] The Hackett Group, 64% of Procurement Leaders Say AI Will Transform Their Jobs, April 2025
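The supplier-concentration risk flagging mentioned above can be illustrated with a small spend-analysis sketch. This is a generic example, not GEP's implementation: the spend figures, the 60% single-supplier threshold, and the column names are hypothetical.

```python
import pandas as pd

# Hypothetical category spend data; a real ICM platform would pull this
# from ERP, P2P, and external market feeds.
spend = pd.DataFrame({
    "category": ["packaging", "packaging", "logistics", "logistics", "logistics"],
    "supplier": ["SupplierA", "SupplierB", "SupplierC", "SupplierD", "SupplierC"],
    "amount":   [900_000, 100_000, 250_000, 200_000, 300_000],
})

CONCENTRATION_THRESHOLD = 0.60  # flag categories where one supplier holds >60% of spend

by_supplier = spend.groupby(["category", "supplier"])["amount"].sum()
category_total = spend.groupby("category")["amount"].sum()
share = by_supplier.div(category_total, level="category")   # each supplier's share of its category

flags = share[share > CONCENTRATION_THRESHOLD].rename("share").reset_index()
print(flags)   # e.g., packaging / SupplierA at 90% would be flagged for mitigation
```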


US judge issues split decision in antitrust case against SAP

Prepare to pay

“I view this as a net win for SAP,” he continued. “SAP customers that want to use Celonis over the native SAP Signavio solution should be prepared to pay for the privilege. The broader issue of antitrust will likely prove much harder to build a case against SAP based on the plethora of competitive options in the marketplace. It is hard to prove antitrust violations based on price alone for one segment of a large and multi-faceted solution.”

On broader data management issues, Bickley said that SAP was implying that “SAP customers or third-party vendors accessing SAP data layers via the ODP via RFC protocol may be in a state of non-compliance with SAP’s copyright rules. SAP has also invoked obscure licensing rules around the HANA database, informing customers that they must upgrade from the runtime version of HANA to the enterprise version based on how Celonis accesses the database,” Bickley said. “This could result in fees ranging from hundreds of thousands of dollars to millions of dollars for SAP customers, depending on the size of their DB. Lastly, SAP is pulling a page from the cloud hyperscaler playbook by now charging for data egress via their Datasphere solution. Yes, it is exorbitantly costly, but not unlike data egress from the hyperscalers’ clouds themselves. So it is technically feasible for Celonis to access the SAP data layer as they have previously; however, the underlying cost will be exponentially higher for SAP customers.”

Robert Kramer, a VP and principal analyst for Moor Insights & Strategy, said that this SAP case — not unlike many American antitrust cases — needs to convince a judge or jury that the accused vendor has crossed that nebulous line between aggressive competition and illegal behavior.
