CIO

Optimizing patient care at Sanofi through AI

“This year, we’ve taken a step further with the creation of our global hub, which will generate more than 300 highly qualified jobs,” says Pérez. “This new center was created with the aim of leading a digital transformation from Europe with a real impact on global health, accelerating the development of innovative therapies through the responsible use of AI.” From this center, teams work on strategic projects that apply AI across the entire value chain, focusing on areas such as real-world evidence (RWE) generation, clinical analysis and statistical programming, scientific and R&D support, and the development of predictive models for logistics and the supply chain. “Everything we do from this center has a common goal: to offer faster, more accurate, and sustainable responses to global health needs, combining science, technology, and data to transform the future of medicine,” adds Pérez.

Keeping an eye on AI regulation

August 1 marked one year since the EU AI Act came into force, a necessary step, according to Pérez, that provides legal certainty and builds trust in sectors where traceability, transparency, and data quality are fundamental. However, the pace of innovation in healthcare is rapid, so regulatory frameworks must keep up to support this progress. “Future guidelines from the European Medicines Agency (EMA), along with initiatives such as regulated testing environments, can facilitate the validation of innovative AI-based solutions under controlled and safe conditions,” says Pérez. “And, above all, close collaboration between institutions, the private sector, and patient organizations will be key for Europe to lead the digital transformation of healthcare with responsibility and ambition.” source


Why LLMs fail science — and what every CPG executive must know

We live in an era where generative AI can draft complex legal agreements in minutes, design plausible marketing campaigns in seconds and translate between dozens of languages on demand. The leap in capability from early machine learning models to today’s large language models (LLMs) — GPT-4, Claude, Gemini and beyond — has been nothing short of remarkable. It’s no surprise that business leaders are asking: If an AI can write a convincing research paper or simulate a technical conversation, why can’t it run scientific experiments? In some circles, there’s even a whispered narrative that scientists — like travel agents or film projectionists before them — may soon be “disrupted” into irrelevance. As someone who has spent over two decades at the intersection of AI innovation, scientific R&D, and enterprise-scale product development, I can tell you this narrative is both dangerously wrong and strategically misleading. source


Beyond siloed AI: How vertical and horizontal intelligence create the truly smart enterprise

Looking ahead: Agentic workflows and AI-first design

The next frontier is agentic AI — autonomous agents that execute multi-step tasks across systems on behalf of users. Imagine a revenue operations agent that identifies a missed upsell opportunity, drafts a proposal, schedules a meeting, and alerts the account executive. Revenue operations should always seek to equip sales teams with AI-driven insights to proactively manage deal health, mitigate potential risks, and close deals faster. Every enterprise function across legal, procurement, finance, and sales will be supported by a digital twin — powered by AI agents and orchestrated by agentic workflows. These workflows empower agents to act as a force multiplier for enterprise productivity and performance. For example, by developing contract intelligence agents built for the enterprise, Icertis is driving the next era of AI innovation for commercial relationships, accelerating strategic outcomes and maximizing contract value. However, this requires not just smarter models but AI-first process design, robust orchestration layers, accurate, reliable, and quality business-grade data, and tight system integration. source
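To make the pattern concrete, here is a minimal, hypothetical Python sketch of the revenue-operations workflow described above. It is not an Icertis implementation; the account fields, the upsell heuristic, and each step function are illustrative assumptions. In a real agentic system, the steps would call an LLM, a calendar API, and a CRM rather than return canned strings.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Account:
    name: str
    seats_used: int
    seats_licensed: int

def find_upsell(account: Account) -> Optional[str]:
    """Step 1: detect a missed upsell signal (toy heuristic)."""
    if account.seats_used > 0.9 * account.seats_licensed:
        return f"{account.name} is near its seat limit; propose a tier upgrade."
    return None

def draft_proposal(opportunity: str) -> str:
    """Step 2: draft the proposal (an LLM call would go here)."""
    return f"Proposal draft: {opportunity}"

def schedule_meeting(account: Account) -> str:
    """Step 3: book time with the customer (a calendar API call would go here)."""
    return f"Meeting requested with {account.name} for next week."

def alert_account_executive(messages: List[str]) -> None:
    """Step 4: notify the account executive (a CRM or chat integration would go here)."""
    for message in messages:
        print("[AE alert]", message)

def revenue_ops_agent(account: Account) -> None:
    """Orchestrates the multi-step workflow end to end."""
    opportunity = find_upsell(account)
    if opportunity is None:
        return  # nothing to act on
    alert_account_executive([opportunity, draft_proposal(opportunity), schedule_meeting(account)])

revenue_ops_agent(Account("Acme Corp", seats_used=96, seats_licensed=100))
```

The orchestration function is deliberately the only place that knows the order of the steps; that separation is what lets an agentic workflow swap individual steps for smarter models or new system integrations without redesigning the whole process.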


AWS launches ‘sovereign by design’ New Zealand cloud region

AWS’ local partners, meanwhile, include Accenture, CyberCX, Datacom, Deloitte, The Instillery, Lancom, MongoDB and Westcon-Comstor. The region is powered by renewable energy through a long-term agreement with Mercury NZ for electricity generated at its Turitea South wind farm. “This investment in digital infrastructure and Amazon’s commitment to digital skills can accelerate New Zealand technology businesses and help New Zealanders to move into highly skilled, secure, and well-paid technology jobs—which exist right across the economy, from tech companies to various sectors including agriculture, finance, retail, professional services, government, and many more,” NZTech CEO Graeme Muller said. source


The cost of complexity: Why consolidation remains IT’s biggest missed opportunity

The path to complexity

In my experience, complexity rarely arrives all at once. It creeps in through a combination of leadership habits, unclear processes, overlapping tools and skill gaps. This might sound like I’m taking a jab at some technology leaders, but it’s a reality I’ve seen across industries: leaders set in their ways, making purchasing decisions without fully understanding the technical requirements. When leaders approve overlapping solutions, delay decommissioning unused tools or fail to empower technical teams to choose the right platforms, complexity becomes embedded in the organization’s DNA. Somewhere along the way, the mentorship aspect of leadership disappeared. We’ve replaced coaching and enabling with gatekeeping and sign-off authority. The result? Disconnected teams, poor adoption and wasted resources. And when decision-making is divorced from technical reality, complexity grows unchecked. source


How Apptio (an IBM company) transforms enterprise IT planning with portfolio tool

Keith Shaw: Hi everybody, welcome to DEMO, the show where companies come in and show us their latest products and platforms. Today, I’m joined by Rodan Zadeh. He is the Director of Product at Apptio, which is an IBM company. Welcome to the show, Rodan.

Rodan Zadeh: Thank you.

Keith: So, I think initially when we set this up, you were not part of IBM, but in the process of scheduling, IBM purchased you. Tell us a little bit about Apptio, how it relates to the IBM acquisition and integration, and then what you’re going to show us here today.

Rodan: Absolutely. First of all, thank you for having me on the show. Apptio is a recent acquisition by IBM, as you mentioned, and our portfolio of products is focused on technology business management — managing the business of technology. What’s interesting is that it’s a fantastic fit for IBM’s IT automation group. Our goal is to provide customers the ability to build intelligent applications. Before you build something, you want to understand the investment aspect — where is the business value — and be able to manage that as you go. So, the investment part is key, but it’s also about building, deploying, and managing. In the portfolio, you see several IBM products that complement what we’re doing. It’s a perfect fit for IBM’s larger portfolio in IT automation.

Keith: Right. And if someone is interested in just the Apptio piece, they don’t have to invest in the other IBM offerings, correct?

Rodan: Correct. Absolutely. But if they want to, there are complementary solutions available.

Keith: Great. So, what are you showing us today? I believe it’s strategic portfolio management?

Rodan: Yes. This product is typically designed for enterprise program management offices (PMOs), as well as many C-suite executives who are responsible for digital transformation and strategic investment decisions. It helps them understand: Are we investing our money in the right areas? Do we have the right resources to deploy a certain technology? How do we connect business strategy with the technology being built — from the highest level down to the actual implementation?

Keith: So it helps CIOs, IT leaders, and finance teams understand where their technology spending is going, and what kind of impact it’s having on the company?

Rodan: Exactly. It’s about cost visibility — being able to show costs to other parts of the organization through practices like showback or chargeback. It also helps understand resource allocation, both people and infrastructure, and how that work connects to financial impact.

Keith: Okay, so what exactly is strategic portfolio management showing me?

Rodan: It gives a holistic view of what’s taking shape in the technology domain — strategy at the highest level, down to specific initiatives and work being delivered. For example, we integrate data from systems like Workday, SAP, and Jira, so CIOs and CFOs can see a complete picture of investments, people, and outcomes. source
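As an illustration only, not Apptio’s actual data model, the Python sketch below shows the basic shape of a showback rollup like the one described in the interview: labor and vendor costs pulled from different systems are aggregated per team and attached to the initiatives that team delivers. All field names and figures are invented for the example; real Workday, SAP, and Jira integrations would come through their respective APIs or exports.

```python
from collections import defaultdict

# Illustrative records as they might arrive from different source systems
# (field names are made up for the example, not actual Workday/SAP/Jira schemas).
workday_labor = [
    {"team": "Platform", "cost": 120_000},
    {"team": "Data", "cost": 95_000},
]
sap_invoices = [
    {"team": "Platform", "cost": 40_000},
    {"team": "Data", "cost": 22_000},
]
jira_projects = [
    {"team": "Platform", "initiative": "Checkout rewrite"},
    {"team": "Data", "initiative": "Analytics pipeline"},
]

def showback(labor, invoices, projects):
    """Roll labor and vendor costs up to each team and attach them to that team's initiatives."""
    totals = defaultdict(float)
    for row in labor + invoices:
        totals[row["team"]] += row["cost"]
    return [
        {"initiative": p["initiative"], "team": p["team"], "allocated_cost": totals[p["team"]]}
        for p in projects
    ]

for line in showback(workday_labor, sap_invoices, jira_projects):
    print(line)
```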


How Samsung completes your Zero Trust Architecture

Let’s cut to the chase: The average data breach now costs $4.88 million¹—and even with a Zero Trust strategy in place, your risk increases if mobile isn’t part of it. Every mobile device connected to your corporate network is a potential breach point. And with work happening everywhere, traditional perimeter security can’t keep up. It’s time to go beyond Mobile Device Management (MDM) for enterprise security and extend your Zero Trust strategy to mobile.

Zero Trust must include mobile devices

Zero Trust is becoming the standard for enterprise security—and for good reason. The U.S. government is pushing agencies to adopt it. Enterprises are following suit. And the philosophy is simple: Never trust, always verify. Assume breach. Grant least-privilege access. Every access request is checked in real time based on current risk signals. It’s like checking someone’s passport and scanning them for threats, not just assuming they’re safe because they were last week. That works well for laptops and desktops… but what about mobile?

Security Operations Center (SOC) platforms—such as Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR)—are evolving to incorporate Zero Trust Architectures (ZTA), particularly with AI-assisted security features. However, a significant gap remains in the availability of high-quality data from mobile devices. This lack of data hinders accurate analysis of security posture and limits the ability to take necessary actions. This is where Samsung comes in.

Zero Trust-ready Samsung Galaxy devices give you deeper insight

As the maker of Samsung Galaxy devices and the cloud-based Knox Suite, Samsung offers a uniquely integrated approach to mobile security—capturing deep device telemetry to power SOC platforms and support a true ZTA. Samsung Galaxy devices are secured from the chip up, with Samsung Knox built in at the manufacturing stage. This hardware-based foundation extends visibility and control across your security infrastructure, making these devices a trusted part of any Zero Trust strategy.

What’s Knox Suite? It’s Samsung’s all-in-one solution for managing and securing Samsung Galaxy work devices. It also provides the controls and insights needed to support a Zero Trust approach—reducing risk and unlocking cost savings. Even better, thanks to Samsung’s exclusive partnerships, you can prevent, detect, and remediate threats faster, reducing potential risk:

With Microsoft Intune, Samsung Galaxy devices, managed or unmanaged, verify their integrity before connecting to corporate resources.

With Knox Asset Intelligence (a product of Knox Suite), real-time mobile signals flow into SIEM tools like Microsoft Sentinel, so your security team sees mobile threats just like any other alert.

With Samsung Galaxy and Knox Suite, mobile devices are no longer a security gap; they’re part of your ZTA.

Why this matters now

CIOs and IT leaders are under pressure to prevent data loss, not just react to it. That means extending Zero Trust to mobile. A 2024 Samsung survey found that IT decision-makers and workers alike named data protection as their top concern when considering mobile security. Samsung is the first OEM to deliver Zero Trust capabilities directly on mobile, with full device-level visibility, seamless integrations, and cloud-ready control. If Zero Trust is your goal, mobile must be part of the strategy, and Samsung makes that possible.

Ready to learn more about Samsung Knox security? Learn more here!
¹ Cost of a Data Breach Report 2024 | IBM source
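As a rough illustration of the “never trust, always verify” decision loop described above, here is a minimal Python sketch. The signal names, thresholds, and outcomes are assumptions for the example, not Knox, Intune, or Sentinel APIs; in practice these signals would come from hardware attestation and mobile telemetry feeds.

```python
from dataclasses import dataclass

@dataclass
class DeviceSignals:
    """Illustrative posture signals a mobile device might report (field names are assumptions)."""
    os_patched: bool
    attestation_passed: bool      # e.g., hardware-backed integrity check
    jailbroken_or_rooted: bool
    risky_network: bool           # e.g., connected over untrusted Wi-Fi

def access_decision(signals: DeviceSignals, resource_sensitivity: str) -> str:
    """'Never trust, always verify': evaluate current risk signals on every request."""
    if signals.jailbroken_or_rooted or not signals.attestation_passed:
        return "deny"
    if resource_sensitivity == "high" and (signals.risky_network or not signals.os_patched):
        return "step-up-auth"       # require extra verification instead of implicit trust
    return "allow-least-privilege"  # grant only the scope this request needs

print(access_decision(DeviceSignals(True, True, False, True), "high"))   # step-up-auth
print(access_decision(DeviceSignals(True, True, False, False), "high"))  # allow-least-privilege
```

The point of the sketch is that the decision is recomputed per request from current signals; nothing is trusted just because it was trusted last week.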


Easing the pressure on the electrical grid with AI

It’s said that the U.S. electrical grid—a nationwide labyrinth of interconnected power plants, transmission lines, substations, and more—is the largest machine in the world. If that’s the case, this machine is starting to sputter. After decades of flat electricity demand, this infrastructure-heavy ecosystem has entered an era of dramatic complexity: AI-driven data centers are proliferating rapidly, each demanding hundreds of megawatts; the electric vehicle (EV) industry is booming, along with its fast-growing charging infrastructure sector; and all the while, the energy transition continues to confound utilities, which must ‘park’ new energy from renewables in vast storage systems strewn across the country until it can be safely released onto the grid. The upshot: over the next five to 10 years, North America will be at “elevated or high risk” of energy shortfalls, according to the North American Electric Reliability Corp. (NERC).¹

There is a way through the complexities, however. For Hitachi, a company with a heritage in energy and industrial technologies, as well as decades in AI, such challenges require a methodical approach: boiling down the industrial problems into multiple, workable mathematical problems; tackling one challenge at a time, methodically, with AI; and utilizing the unique capabilities and expertise of Hitachi Group companies. This is how Hitachi is easing the mounting pressures on the grid.

Taming computational demand

It’s the kind of approach the company brought to a major regional grid operator in the U.S. that was looking for help reducing the extraordinary length of time needed for the requisite impact studies – the formal reports required before introducing new energy sources onto the grid. It’s also the approach Hitachi takes to build some of the most comprehensive and reputable datasets on energy generation and consumption, from which it can test models to achieve the most reliable outcomes. And while some view industrial challenges with a disruptive mindset, Hitachi knows when to embrace industrial AI as a “co-pilot.” Its technology works alongside systems in which utilities have made significant investments, rather than replacing them. “We will be part of the utility ecosystem as an accelerator,” says Bo Yang, vice president and head of the Energy Solutions Lab at Hitachi America R&D.

According to Yang, disruptive innovation has its place. But there’s also a practical reason for this approach: the computational demands are enormous. Federal Energy Regulatory Commission (FERC) guidelines require exhaustive analysis of how new power generation affects grid stability, not just under current conditions, but across future scenarios and potential system failures. Take the grid project. When power companies apply to connect new generators, the operator’s software must run tens of thousands of simulations to ensure grid stability. This process traditionally took years. Hitachi’s industrial AI technology speeds up the analysis by running multiple calculations simultaneously, a technique known as parallel processing, while the operator’s existing software stays in place to validate complex cases and final results. This hybrid approach reduced analysis times by 80%—cutting total review time from 27 months to under a year—while maintaining the rigorous safety standards the industry demands.

The village approach

Connecting new generators to the grid requires complex collaboration across the energy industry.
Multiple stakeholders—project developers, independent system operators, transmission operators, regulators, and others—each rely on data-driven analysis to make critical decisions. The entire “village” must reach consensus before new generation can connect to the grid. Success in this environment requires both deep industry knowledge and advanced AI capabilities. The Hitachi team working on the grid operator’s project includes power engineers with decades of experience who understand the mathematical and operational foundations of power system analysis. This expertise enables them to pinpoint which parts of the process can be accelerated by AI and which require traditional approaches. “AI specialists often have strong algorithmic skills, but they may not be focused on the right industry problems,” Yang says. “Meanwhile, legacy analytical tools—some dating back decades—have limitations that new AI methods can overcome when applied thoughtfully.”

Breaking down complex problems

Industrial challenges often involve highly complex, interconnected processes that can be broken down into smaller mathematical problems. In power systems, for example, this means separating the analysis into distinct tasks, each focused on a different aspect of grid stability and performance. Each component presents opportunities for AI acceleration while maintaining the physical constraints and safety requirements that govern power system operations. Industrial AI doesn’t need to reinvent the physics of power systems. It needs to accelerate the computational analysis that utilities already understand.

This systematic approach extends beyond individual projects. The same industrial AI framework developed for power grid applications applies to other physics-based analytical processes across industries—from rail transportation systems where safety is paramount, to manufacturing operations requiring precise control, to mobility solutions demanding real-time optimization. “The key is understanding the specific problem you’re solving and then determining which AI technique to apply,” Yang says. This problem-first methodology suggests how the same framework might work across different industries.

Industrial AI is different

Utilities carry enormous responsibility for infrastructure that powers entire regions. This creates a fundamental challenge for AI adoption: algorithms alone aren’t enough. Effective implementation requires deep understanding of operational constraints, regulatory requirements, and the physics of power systems, knowledge that takes years to accumulate and validate. This reality makes industrial AI fundamentally different. It’s not about disrupting existing systems. It’s about enhancing them with the precision that critical infrastructure demands. The most revolutionary AI is often the most methodical—not because it lacks ambition, but because it simply can’t afford to fail.

_____________________

For more information about Hitachi’s industrial AI work, visit: AI Resource Center – Hitachi Digital

¹ Urgent Need for Resources Over 10-Year Horizon as Electricity Demand Growth Accelerates: NERC | American Public Power Association source
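To show the parallel-processing idea in its simplest form, here is a generic Python sketch, not Hitachi’s software and not a real grid model: independent stability scenarios are fanned out across CPU cores, and only the scenarios flagged as problematic are handed back to the validated legacy tool for detailed study. The simulation body and the stability threshold are placeholders.

```python
import math
from concurrent.futures import ProcessPoolExecutor

def stability_run(scenario_id: int) -> tuple:
    """Stand-in for one grid-stability simulation; the real physics would live here."""
    # Toy computation so the example actually runs and exercises the CPU.
    x = sum(math.sin(i * scenario_id) for i in range(50_000))
    return scenario_id, abs(x) < 1_000  # pretend this threshold means "stable"

def screen_scenarios(n_scenarios: int) -> list:
    """Run all scenarios in parallel and return the ones that need detailed review."""
    flagged = []
    with ProcessPoolExecutor() as pool:           # one worker per CPU core by default
        for scenario_id, stable in pool.map(stability_run, range(n_scenarios)):
            if not stable:
                flagged.append(scenario_id)       # hand these to the validated legacy tool
    return flagged

if __name__ == "__main__":
    print(f"{len(screen_scenarios(200))} scenarios flagged for detailed study")
```

The hybrid pattern mirrors the article’s point: the parallel screening pass absorbs the bulk of the computational load, while the trusted existing software still validates the complex cases and the final results.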


Redefining the edge: Setting new standards for AI infrastructure

The rapid advancement of AI is transforming industries. Today’s businesses need decision-making speed, operational resilience, and personalized experiences to maintain a competitive advantage. They also need to meet changing consumer and enterprise expectations for acceptable performance. Proximity-based AI infrastructure at the edge is essential for meeting these rising standards.

The AI landscape is evolving, from training AI models in centralized locations to a more distributed paradigm that includes workload inference at the edge. This is due in part to the rise of agentic AI inference, which makes decisions at the edge and relies on low latency. Newer AI models are more specialized and smaller, making training possible at the edge. Businesses are using these specialized, domain-specific models because they’re a better fit for specific industries, functions, and proprietary datasets. All these factors make the edge a more strategic compute location, given its proximity to where data is generated, stored, and processed.

Where is your primary edge?

Businesses need to rethink where their edge is. In fact, there are multiple edges where a business might choose to perform AI inference. The metro edge is gaining traction as a crucial location for AI inference workloads because it offers the best balance of low latency, data privacy, and cost efficiency. Choosing the metro edge involves moving inference data to a different facility, such as a colocation data center, within the same concentrated geographical area as the data sources. This approach aggregates inference workloads in a single location with a typical latency of less than 10 milliseconds and removes the need for businesses to operate private infrastructure within their own facilities.

Application types also influence the identification of primary edge locations, as the tolerance for latency varies. Machine-facing applications typically require high speed and low latency. With human-facing applications, a few extra milliseconds of delay wouldn’t even register. A radiologist searching for medical records doesn’t need an immediate response, but a self-driving car trying to avoid pedestrians does. Industry-specific requirements, such as where data is generated and processed and how data sources are accessed, also determine the location of a company’s edge(s). Businesses may need to meet edge latency requirements for a variety of functions:

Logistics companies: Track deliveries, manage delivery routing, streamline warehouse operations, and monitor and ensure cargo security.

Content delivery companies: Provide real-time, customized recommendations and ensure low-latency streaming, lag-free gaming, and immersive AR/VR experiences.

Smart cities: Use real-time video analytics and sensor data processing for traffic management, crime detection, and emergency response.

Healthcare organizations: Ensure continuous monitoring, seamless remote robotic surgeries, and rapid data analysis for faster diagnostics and personalized treatment.

Use cases at the edge will continue to evolve and grow, requiring businesses to design for both today and tomorrow.

Data placement choices impact cost and compliance

Where businesses choose to run AI workloads and store data affects egress fees and cost structures (OPEX versus CAPEX). It also impacts performance, latency, and network infrastructure requirements. The volume of data generated daily at the edge can quickly add up.
For instance:

Smart factory: 1 petabyte

Airplane: 4 terabytes

Autonomous car: 3 terabytes

Moving that data to a centralized location for AI inference significantly impacts latency, cost, and bandwidth, making edge the obvious choice. Additionally, data protection and privacy laws are now in effect in 144 countries, with more on the way. Some regulations are complex and require careful consideration of where to store data. For instance, data collected in Germany can’t be stored on an AWS cloud in the U.S.; it must stay in Germany. It’s one reason you need a distributed infrastructure for AI.

Raising the bar on AI innovation at the edge

Let’s explore two examples of companies innovating with AI at the edge and deploying AI infrastructure for real-time collaboration and high-speed connectivity with customers and partners.

Nanyang Biologics, a biotech startup specializing in AI-driven drug discovery, leveraged Equinix Fabric® software-defined interconnection for edge computing to enable real-time collaboration with research partners. Doing so increased speed and accuracy at a lower cost.

Alembic is a marketing intelligence SaaS company that provides customers with data-driven insights into their marketing activities, enabling them to optimize spending and improve sales funnel movement. It positioned an AI infrastructure stack closer to customers to enable high-performance inference at the edge. Alembic also leveraged Equinix’s ecosystem of network service providers (NSPs) and Equinix Fabric for high-speed connectivity to its customers and partners, ultimately delivering a higher-performance product.

Other industries innovating with AI at the edge include:

Financial services for trading strategy and fraud detection.

Media and entertainment for digital content creation and game development.

Autonomous vehicles for pedestrian and traffic sign detection, and lane tracking.

Robotics for manufacturing, construction, and navigation.

4 steps to prioritize proximity at the edge

Transitioning to a proximity-first infrastructure strategy requires a deliberate, integrated approach. We recommend working your way through these steps to create your roadmap.

First, audit your current AI infrastructure. You’ll want to map data flows and identify performance bottlenecks caused by latency.

Next, define your edge requirements: Assess the demands of real-time applications, including latency, bandwidth, and compliance. You’ll also want to review your company’s requirements for high-performance connectivity, compute, and data storage.

Now you’re ready to design a flexible, multicloud and multi-provider data strategy that enables you to distribute workloads intelligently, create the right balance between edge and centralized infrastructures, and deploy country-specific or region-specific infrastructure for compliance.

Finally, leverage interconnected, neutral AI partner ecosystems to access the vast range of partners required to support AI workloads.

Implement your AI infrastructure at the edge

Distributed infrastructure helps companies accelerate AI innovation and future-proof their AI strategies. While we’ve focused on the edge in this blog, your AI infrastructure strategy needs to include a mix of cloud and edge infrastructure and a robust network to securely connect all your resources. Equinix AI data centers are strategically located in the world’s most connected markets to help enterprises future-proof their operations.
Our global footprint spans 270+ interconnected colocation data centers in 76 metros worldwide, ensuring access to all the major cloud
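As a simple illustration of the placement trade-offs discussed above, the Python sketch below picks a tier for a workload based on its latency budget and data-residency constraint. The tier names, latency figures, and residency rule are assumptions for the example, not Equinix products or measured values.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: int        # latency tolerance of the application
    data_country: str          # where the data is generated
    residency_required: bool   # must the data stay in that country?

# Rough, assumed round-trip latencies for the example; real figures vary by network path.
TYPICAL_LATENCY_MS = {"on_prem": 2, "metro_edge": 10, "regional_cloud": 40}

def place_workload(w: Workload, cloud_region_country: str) -> str:
    """Pick the most centralized tier that satisfies both residency and latency constraints."""
    if w.residency_required and cloud_region_country != w.data_country:
        # e.g., data collected in Germany cannot leave Germany
        allowed = {"on_prem", "metro_edge"}
    else:
        allowed = set(TYPICAL_LATENCY_MS)
    for tier in ("regional_cloud", "metro_edge", "on_prem"):
        if tier in allowed and TYPICAL_LATENCY_MS[tier] <= w.max_latency_ms:
            return tier
    return "on_prem"  # fall back to the lowest-latency option

print(place_workload(Workload("fraud-scoring", 15, "DE", True), "US"))   # metro_edge
print(place_workload(Workload("content-recs", 50, "US", False), "US"))   # regional_cloud
```

Even in this toy form, the ordering of checks reflects the article’s argument: residency narrows the options first, and the latency budget then decides how close to the data source the workload must run.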
