Are enterprises ready to adopt AI at scale?

Whether it’s a financial services firm looking to build a personalized virtual assistant or an insurance company in need of ML models capable of identifying potential fraud, artificial intelligence (AI) is primed to transform nearly every industry. In fact, a recent Cloudera survey found that 88% of IT leaders said their organization is currently using AI in some way. AI’s ability to automate repetitive tasks leads to significant time savings on processes related to content creation, data analysis, and customer experience, freeing employees to work on more complex, creative issues.

But adoption isn’t always straightforward. The path to achieving AI at scale is paved with myriad challenges, among them data quality and availability, deployment, and integration with existing systems. To overcome those challenges and successfully scale AI enterprise-wide, organizations must create a modern data architecture that leverages a mix of technologies, capabilities, and approaches, including data lakehouses, data fabric, and data mesh.

Barriers to AI at scale

Despite so many organizations investing in AI, the value derived from those solutions has so far been limited. The factors behind this vary and aren’t confined to purely technical limitations. There’s also an element of employee buy-in that can cause AI adoption to lag, or even stall out altogether. Cloudera’s survey revealed that 39% of IT leaders who have already implemented AI in some way said that only some or almost none of their employees currently use any kind of AI tools. So even where projects are being implemented widely, in more than one-third of cases employees simply aren’t using them.

Another challenge stems from the existing architecture within these organizations. They may implement AI, but the data architecture they currently have is not equipped to scale with the huge volumes of data that power AI and analytics. This requires greater flexibility in systems to better manage data storage and to ensure quality is maintained as data is fed into new AI models.

As data is moved between environments, fed into ML models, or leveraged in advanced analytics, security and compliance considerations are top of mind for many. In fact, 74% of surveyed leaders identified security and compliance risks surrounding AI as one of the biggest barriers to adoption. These IT leaders need a data architecture that can both support rapid AI scaling and prepare users for an evolving regulatory landscape. That challenge is particularly front and center in financial services with the arrival of new regulations and policies like the Digital Operational Resilience Act (DORA), which puts strict ICT risk management and security guidelines in place for firms in the European Union. Rapidly evolving regulatory requirements mean organizations need total control of, and visibility into, their data, which requires a modern approach to data architecture.

Building a strong, modern foundation

But what goes into a modern data architecture? While every platform is different, there are three key elements organizations should look for: data lakehouses, data mesh, and data fabric. Each represents a modern approach to data management that can help organizations adhere to security requirements, break through barriers like data silos, and deliver stronger outcomes from enterprise-wide AI adoption.
Before we go further, let’s quickly define what we mean by each of these terms. A data mesh is a set of best practices for managing data in a decentralized organization, allowing for easy sharing of data products and a self-service approach to data management. A data fabric is a series of cooperating technologies that help create a unified view of data from disparate systems and services across the organization. Then there’s the data lakehouse: an analytics system that allows data to be processed, analyzed, and stored in both structured and unstructured forms.

With AI models demanding vast amounts of structured and unstructured data for training, data lakehouses offer a highly flexible approach that is ideally suited to supporting them at scale. A data mesh delivers greater ownership and governance to the IT team members who work closest to the data in question. Data fabric unifies the data architecture, making data seamlessly connected and accessible through a single layer of abstraction. Those benefits are widely understood: 67% of IT leaders surveyed by Cloudera noted that data lakehouses reduce the complexity of data pipelines. Similarly, both data mesh and data fabric have gained significant attention among IT leaders in recent years, with 54% and 48% of respondents, respectively, stating they planned to have those components in place by the end of 2024.

Whatever the end goal of an organization’s AI adoption, its success can be traced back to the foundational elements of IT and data architecture that support it. And the results for those who embrace a modern data architecture speak for themselves. For example, Cloudera customer OCBC Bank leveraged Cloudera machine learning and a powerful data lakehouse to develop personalized recommendations and insights that can be pushed to customers through the bank’s mobile app. This was made possible by the hybrid data platform OCBC Bank utilized, enabling the bank to fast-track AI deployment and deliver a major return on investment.

With a strong foundation of modern data architecture, IT leaders can move AI initiatives forward, scale them over time, and generate more value for their business. To learn more about how enterprises can prepare their environments for AI, click here.
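
To make the lakehouse idea concrete, here is a minimal sketch assuming PySpark with the open-source Delta Lake table format (one common way to build a lakehouse, not tied to any vendor named above); the paths and column names are illustrative:

```python
# Minimal lakehouse sketch: one storage layer serving both raw ingest and SQL analytics.
# Assumes the pyspark and delta-spark packages are installed; paths and column
# names are illustrative.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Ingest semi-structured JSON events as-is into an ACID table.
events = spark.read.json("/data/raw/events/")  # illustrative path
events.write.format("delta").mode("append").save("/data/lakehouse/events")

# The same files are immediately queryable with SQL for analytics or AI feature prep.
spark.sql("CREATE TABLE IF NOT EXISTS events USING DELTA LOCATION '/data/lakehouse/events'")
spark.sql("SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type").show()
```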

How Celanese makes people central to its digital transformation

When we decided to transform, we wanted to see a big change in our operating model over the next four years, not the next month or quarter. This required a major mindset change for our team, so we put people at the center of the strategy. Considering the magnitude of change from inorganic acquisitions coupled with digital transformations, our CEO asked both our CHRO and me to drive the culture change within the organization. While I’ve been driving change throughout my career, this was my first time serving as an appointed change leader for a large global company. I had to learn a lot in a short amount of time.

We decided that change, agility, and value would be key to putting people at the center of the transformation. One of the first things we did was build an enterprise change management group, which our CHRO and I decided to put in IT, since we were driving so much of the transformation. With agility, we make bold decisions quickly and pivot when needed. This allows us to move fast and create sustainable momentum. Our digital program is about driving value, not implementing fancy technology like robots, twins, and drones. We focus on driving revenue, productivity, yield, reliability, and safety, and we measure progress through monthly operational KPIs.

AI & the enterprise: protect your data, protect your enterprise value

Data exfiltration in an AI world

It is undeniable that the value of your enterprise data has risen with the growth of large language models and AI-driven analytics. This has made data even more of a target for bad actors and increased the damage resulting from malicious or accidental exposures. Sadly, this is the new reality for CISOs, with data exfiltration creating unprecedented risks. Stolen datasets can now be used to train competitor AI models. And with powerful AI techniques that extract deep details from stolen datasets, even small data losses can have seismic impacts.

Human error in data loss

Human error remains a critical weak link in data loss. For example, employees might inadvertently broadcast corporate secrets by inputting sensitive company information or source code into public-facing AI models and chatbots. Unfortunately, these human errors can lead to catastrophic data breaches that no policy or procedure can entirely prevent. Training and policy are critical, but mistakes can still occur, and no amount of training can change the behavior of a malicious insider.

Traditional data loss prevention (DLP) solutions have been around for decades, but their adoption and effectiveness have been mixed. However, the new data theft risks of the AI era may finally push DLP into the spotlight. Modern DLP solutions are enhanced with AI capabilities and offer more automated, context-aware protection. They can better understand data patterns, user behaviors, and potential exfiltration scenarios. This evolution makes DLP more effective and less intrusive, potentially overcoming historical adoption barriers, although deployment complexity may still present a hurdle.
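
As a minimal illustration of the pattern-matching layer that underpins even basic DLP (AI-enhanced products layer behavioral and contextual signals on top of this), here is a hypothetical Python sketch that flags obvious secrets before text leaves a controlled boundary; the patterns and names are illustrative, not drawn from any product:

```python
import re

# Illustrative detectors for common sensitive-data shapes; real DLP engines use
# far richer pattern libraries plus ML-based context and user-behavior signals.
PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# Example: block a prompt before it reaches a public chatbot.
prompt = "Debug this: client = Client(api_key='sk-abcdef1234567890XYZ')"
hits = scan_outbound(prompt)
if hits:
    print(f"Blocked: possible {', '.join(hits)} in outbound text")
```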

AI & the enterprise: protect your data, protect your enterprise value Read More »

SAP sustainability tracking rollout focuses on data consistency, outlier detection

The hard-science elements of environmental sustainability programs make them among the most complex and challenging compliance areas for CIOs. In May 2023, SAP announced that it was pursuing such a sustainability program, and on Friday it finally rolled it out, in the sense that the program “has finished beta testing and is now generally available,” according to SAP spokesperson Hanna Heine.

“SAP Sustainability Data Exchange helps facilitate standardized carbon data exchange between partners along the supply chain, supporting organizations to move from estimates to actuals in their upstream emission data,” SAP said in a statement. “The application allows users to share emissions data to help implement their net zero strategy and take climate action by identifying products or processes with high potential for CO2 reduction, avoiding double emissions counting, and optimizing footprints with actual supplier data. It helps drive scalability, standardization, and trust in carbon data exchange across the supply chain.”

Addressing data accuracy issues

The program gathers its data in two ways: directly, by monitoring sustainability-relevant data within the SAP ERP systems used by suppliers and other partners, and indirectly, via questions answered by those same companies.

Building Sevita’s first enterprise data platform

Sevita is dedicated to providing adults, children, and their families with innovative services and support designed to lead to growth and independence despite physical, intellectual, or behavioral challenges. With person-centered care, the company works to foster independence, improve quality of life, and promote overall well-being for the individuals it serves. As such, data on labor, occupancy, and engagement is extremely meaningful. Here, CIO Patrick Piccininno provides a roadmap of his journey from data with no integration to meaningful dashboards, insights, and a data-literate culture.

You’re building an enterprise data platform for the first time in Sevita’s history. What’s driving this investment?

When I joined in July 2022, the company had spent the prior 24 months completing more than 20 acquisitions, and the IT team was busy bringing all these new systems online. Our legacy architecture consisted of multiple standalone, on-prem data marts intended to integrate transactional data from roughly 30 electronic health record systems to deliver a reporting capability. But because of that infrastructure, employees spent hours on manual data analysis and spreadsheet jockeying. We had plenty of reporting, but very little data insight, and no real semblance of a data strategy.

Google prepares Jarvis to fight the AI ‘computer use’ war

Google’s efforts to use AI underpinned by LLMs to automate user tasks are very similar to the “computer use” ability released by Anthropic last week, which experts believe could revolutionize the automation market once rolled out as a finished product, given how much work is now done on computers. Anthropic’s “computer use” ability, in turn, enables developers to instruct Claude 3.5 Sonnet, through the Anthropic API, to read and interpret what’s on the display, type text, move the cursor, click buttons, and switch between windows or applications, much as today’s robotic process automation (RPA) tools can be instructed, far more laboriously, to do.

While Jarvis seems to be aimed at consumers, the technology could also be used across enterprises, given that many development activities, workflow and automation management tools, CRM, ERP, and the like are accessed over the browser via web-based clients or interfaces.
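
For orientation, the snippet below sketches what a computer-use request looks like through the Anthropic Python SDK, based on the computer-use beta as publicly documented in late 2024; the model string, tool type, and beta flag may have changed since, and the task text is purely illustrative:

```python
# Sketch of a computer-use request via the Anthropic Python SDK (beta, late 2024).
# The agent loop that executes returned actions (screenshots, clicks, keystrokes)
# is the developer's responsibility and is omitted here.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",      # built-in computer-use tool (beta)
        "name": "computer",
        "display_width_px": 1024,
        "display_height_px": 768,
    }],
    messages=[{"role": "user",
               "content": "Open the monthly report and export it as a PDF."}],
    betas=["computer-use-2024-10-22"],
)
print(response.content)  # contains tool_use blocks describing UI actions to perform
```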

Why CIOs must lead the charge on ESG

New EA governance structures

Governance principles regarding how to measure architectural designs for ESG and responsible AI, and whom EA engages to report on governance decisions, must also change. During an architecture review, architects and engineers should consider the sustainability impacts, both positive and negative, of each option. The EA competency needs to become proficient in understanding the sustainable IT standards taxonomy and in how to effectively assess, research, and vet technology and solution options for their ability to achieve positive business outcomes while meeting sustainability goals.

Capturing the pros and cons of the recommended options, along with decision records of what was voted on and agreed to and by whom, will be critical for traceability. Historically, these records tracked metrics on cost reduction, technology currency, service reuse, security compliance, and adherence to data principles. New key metrics to capture include:

- Alignment with strategic goals. Measures how architecture initiatives support business transformation goals, sustainability targets, and key performance indicators (KPIs).
- Compliance and governance
  - Regulatory compliance. Ensures that architecture designs comply with industry standards and regulations, including GDPR, HIPAA, and ESG frameworks. This should include new ISO standards.
- Technology alignment and rationalization
  - Technology debt reduction. Measures progress in reducing outdated or redundant technology systems.
  - Standardization and integration. Assesses adherence to technology standards, including cloud migration, integration with legacy systems, and enterprise-wide IT solutions.
- Cost and resource optimization
  - Cost efficiency. Tracks cost savings from architecture initiatives, such as through optimization of IT infrastructure, cloud usage, or vendor management.
  - Resource utilization. Measures the efficient use of infrastructure, personnel, and technology to maximize output and minimize waste.
- Innovation and future-readiness
  - Scalability, flexibility, and accessibility. Assesses the adaptability of the architecture to future business needs and technology evolution.
- Sustainability metrics
  - Sustainable IT. Tracks initiatives tied to energy efficiency, carbon footprint reduction, and green IT practices.
  - ESG compliance. Ensures architecture projects align with the ESG goals set by the organization.

These metrics ensure that an architecture review board maintains oversight of critical architectural decisions, aligning technology with business goals, cost efficiency, and regulatory standards while also enabling sustainable, future-ready IT environments. In the past, decision records went directly to the CIO, CDO, and CISO, as well as the business partner and executives owning the line of business. Now, at least quarterly if not monthly, updates would flow to the traditional IT leadership but also include the chief compliance officer, chief data officer, chief risk officer, chief sustainability officer, and potentially other C-level executives with ESG oversight.
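
One way to keep such decision records traceable and metric-bearing is to give them a structured schema. Below is a hypothetical Python sketch; the class, field names, and figures are illustrative assumptions, not an established EA standard or anything prescribed above:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical architecture decision record extended with ESG fields;
# the schema and all values below are illustrative.
@dataclass
class ArchitectureDecisionRecord:
    title: str
    decided_on: date
    approvers: list[str]                 # who voted and agreed, for traceability
    options_considered: dict[str, str]   # option -> summary of pros and cons
    chosen_option: str
    metrics: dict[str, str] = field(default_factory=dict)

record = ArchitectureDecisionRecord(
    title="Migrate reporting workloads to a managed cloud warehouse",
    decided_on=date(2024, 11, 1),
    approvers=["CIO", "Chief Sustainability Officer", "LOB executive"],
    options_considered={
        "managed cloud warehouse": "lower carbon per query; vendor lock-in risk",
        "on-prem refresh": "full control; higher energy footprint",
    },
    chosen_option="managed cloud warehouse",
    metrics={
        "technology debt reduction": "retires 3 legacy data marts",
        "sustainable IT": "est. 40% lower energy use vs. on-prem",  # illustrative figure
        "ESG compliance": "aligned with corporate net-zero target",
    },
)
print(record.chosen_option, "->", record.metrics["sustainable IT"])
```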

Helping the public aboard the AI bandwagon

- offer suggestions on how the call could be done better
- analyze projects and processes to suggest improvements
- create personas that we will ask business questions of, eventually leading to organizational digital twins

These are just a few examples of how AI, rather than automating employees out of existence, can enhance employees and businesses. Lastly, businesses need to prioritize different elements of the AI experience for users versus customers. For customers, choice is crucial. Businesses should offer human-like AI to help customers, but customers should also have the choice of direct interaction with a person. For users, accuracy is paramount. AI doesn’t need to be human-like, especially when it is supplementing employees’ work, but it needs to be accurate, so users get enhancements to their abilities rather than more work and a poor experience.

Lead with design, acceptance follows

It probably comes as no surprise that the Horsey Horseless, the car with the wooden horse head on the hood, was never a success. Quick fixes are not enough to promote public acceptance of new technology. Instead, we need to focus on developing clear use cases that offer real value for people while providing a seamless, great experience.

CIOs face mounting pressure as AI costs and complexities threaten enterprise value

“Every enterprise must assess the return on investment (ROI) before launching any new initiative, including AI projects,” said Abhishek Gupta, CIO of India’s leading satellite broadcaster DishTV. “It’s essential to evaluate all AI initiatives using the same criteria. Once a specific business use case for AI is identified, a thorough cost estimation should be conducted and compared against the anticipated business outcomes to ensure alignment and value.”

Without a precise understanding of how AI expenses scale, companies risk underestimating costs by as much as 1,000%, making financial missteps that could cripple broader technology initiatives, Gartner said. “As a CIO, you need to understand your AI bill,” LeHong stressed. “You must understand the cost components and pricing model options, and you need to know how to reduce these costs and negotiate with vendors. CIOs should create proofs of concept that test how costs will scale, not just how the technology works.”
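
To see why understanding the cost components matters, here is a back-of-the-envelope Python sketch of token-based cost scaling; the prices, token counts, and request volumes are made-up placeholders, not any vendor’s rates:

```python
# Hypothetical token-cost scaling model for an LLM-backed feature.
# All prices and volumes are illustrative placeholders, not vendor rates.
PRICE_PER_M_INPUT = 3.00    # USD per million input tokens (assumed)
PRICE_PER_M_OUTPUT = 15.00  # USD per million output tokens (assumed)

def monthly_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimated monthly spend for a request volume and tokens per request."""
    return requests * (in_tokens * PRICE_PER_M_INPUT +
                       out_tokens * PRICE_PER_M_OUTPUT) / 1_000_000

# A pilot at 10k requests/month looks cheap; production at 5M requests does not,
# and richer prompts (more context per request) scale the bill multiplicatively.
for requests, in_tok, out_tok in [(10_000, 1_000, 300),
                                  (5_000_000, 1_000, 300),
                                  (5_000_000, 8_000, 300)]:
    print(f"{requests:>9,} req x {in_tok:>5} in-tokens -> "
          f"${monthly_cost(requests, in_tok, out_tok):>10,.2f}/month")
```

Under these assumed rates, the same feature costs about $75 a month in the pilot and over $140,000 a month at production volume with larger prompts, which is exactly the kind of scaling a proof of concept should surface before launch.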
