CIO

The keys to selecting a platform for end-to-end observability

DevOps and security teams managing today’s multicloud architectures and cloud-native applications are facing an avalanche of data. On average, organizations use 10 different tools to monitor applications, infrastructure, and user experiences across these environments. Such fragmented approaches fall short of giving teams the insights they need to run IT and site reliability engineering operations effectively. Indeed, around 85% of technology leaders believe their problems are compounded by the number of tools, platforms, dashboards, and applications they rely on to manage multicloud environments.

Part of the problem is that technologies like cloud computing, microservices, and containerization have added layers of complexity, making it significantly more challenging to monitor and secure applications efficiently. At the same time, the number of individual observability and security tools has grown, resulting in visibility gaps, siloed data, and weakened cross-team collaboration. Moreover, teams are constantly contending with continuously evolving cyberthreats to data both on premises and in the cloud.

Clearly, continuing to depend on siloed systems, disjointed monitoring tools, and manual analytics is no longer sustainable. To address this, 79% of organizations are currently using or planning to adopt a unified platform for observability and security data within the next 12 months. But before an organization makes the leap to a unified observability platform, it’s important to examine three essential qualities.

Find and prevent application performance risks

A major challenge for DevOps and security teams is responding to outages or poor application performance fast enough to maintain normal service. These teams also struggle to distinguish important information from false alarms, especially when hundreds or thousands of notifications pour in at once.
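As a toy illustration of the alert-noise problem, the sketch below flags latency samples that deviate sharply from recent history. This is a deliberately simplified example of statistical anomaly detection, not any vendor’s algorithm; the window size, threshold, and sample values are arbitrary assumptions.

```python
# Toy rolling z-score anomaly detector of the kind an AIOps pipeline
# might run over latency metrics. Illustrative only.
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady ~100 ms latency, then a spike at index 12
latencies = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 102, 400]
print(detect_anomalies(latencies))  # → [12]
```

A real platform would of course learn baselines per metric and correlate across signals, but even this sketch shows why automated filtering matters: only the genuine outlier surfaces, not the routine jitter around it.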
Identifying the notifications that truly matter and communicating them to the relevant teams is exactly what a modern observability platform with automation and artificial intelligence should do. Ideally, an observability solution should streamline and simplify technology stacks, enabling organizations to replace multiple tools with a single platform. With AIOps, anomalies can be detected automatically, backed by root-cause analysis and remediation support. It should also be possible to analyze data in context to proactively address events, optimize performance, and remediate issues in real time.

To predict events before they happen, causality graphs are used in conjunction with sequence analysis to determine how chains of dependent application or infrastructure incidents might lead to slowdowns, failures, or outages. This enables proactive changes such as resource autoscaling, traffic shifting, or preventative rollbacks of bad code deployments. Therefore, it’s important to look for an AI-based observability solution that not only predicts and prevents issues but also gives teams the visibility to take preemptive action before problems escalate into outages, along with ongoing insight into service-level fulfillment.

See into cloud blind spots

Versatile, feature-rich cloud computing environments such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform have been a game-changer, enabling DevOps teams to deliver greater capabilities on a wider scale. However, the drive to innovate faster and transition to cloud-native application architectures generates more than just complexity; it creates significant new risk. Expansive multicloud environments generate disparate data sets and views, making it difficult to see the big picture. Data often lacks context, hampering attempts to analyze full-stack, dependent services across domains and throughout software lifecycles.
Furthermore, 89% of CISOs say microservices, containers, and Kubernetes have also caused application security blind spots. One reason is that application teams commonly deploy services on different clouds to take advantage of the features that best match their use case or familiarity. One study found that 93% of companies have a multicloud strategy that lets them use the best qualities of each cloud provider for different situations.

In addition to the challenges of managing multicloud environments, DevOps and security teams find it difficult to maintain visibility into cloud-native architectures as Kubernetes becomes the dominant platform for modern applications. Kubernetes architectures enable organizations to quickly and easily scale services to new users and drive efficiency gains through dynamic resource provisioning. Yet this same dynamic quality is why 76% of technology leaders find it more difficult to maintain visibility into these architectures than into traditional technology stacks.

To fulfill DevOps and security teams’ need for multicloud insights, observability platforms should enable native data streaming from the major cloud providers for a real-time cloud monitoring experience. It also helps to have access to OpenTelemetry, a collection of APIs, SDKs, and tools for instrumenting applications so they export metrics, logs, and traces for analysis. Some observability vendors also provide an agent or client that automates collection and contextualizes telemetry and entity data. With these features working in tandem, teams can perform automated, intelligent root-cause analysis in multicloud and hybrid environments. Further, this approach allows organizations to drive cloud architectural improvements through insights into IT underutilization and dependencies.
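To make the idea of dependency-aware root-cause analysis concrete, here is a deliberately tiny sketch (my own illustration, not any platform’s actual method): given a map of which services call which, the failing services whose own dependencies are all healthy are the root-cause candidates. The service names and topology are hypothetical.

```python
# Toy root-cause candidate search over a service dependency graph.
# Hypothetical topology and service names, for illustration only.
def root_causes(depends_on, failing):
    """depends_on maps service -> list of services it calls.
    A failing service is a root-cause candidate if none of the
    services it depends on are also failing."""
    return sorted(
        s for s in failing
        if not any(d in failing for d in depends_on.get(s, []))
    )

# frontend calls api; api calls db and cache
graph = {"frontend": ["api"], "api": ["db", "cache"]}

# frontend and api are failing only because db is failing beneath them
print(root_causes(graph, {"frontend", "api", "db"}))  # → ['db']
```

Real platforms enrich this picture with timing, topology discovered at runtime, and contextual telemetry, but the principle is the same: suppress the downstream symptoms and point responders at the upstream cause.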
That’s why it’s critical to find an observability platform that can handle the scale and complexity of modern cloud-native workloads while providing continuous insights into the performance and reliability of containerized applications and serverless functions, regardless of where they are deployed.

Get to the root cause of issues

Most AI today uses machine learning models, such as neural networks, that find correlations and make predictions based on them. Correlation-based predictions are essentially informed guesses about the likelihood of outcomes, which limits these models’ capacity to explain why certain outputs occurred or to make reliable decisions in new situations. AI that relies on large language models (LLMs) is also known to generate answers that may include outdated, vulnerable, or inefficient patterns. According to one survey, 98% of technology leaders said they are concerned that generative AI could be susceptible to unintentional bias, error, and misinformation.

This growing awareness of the limitations of correlation-based AI is driving increased interest and research into causal AI, which aims to determine the precise underlying mechanisms behind events and outcomes. Increasingly, causal AI use cases are enabling organizations to identify the root cause of problems, facilitate remediation, and drive intelligent automation. AI systems that can explain the reasons…


Why we said no to AI chatbots

It’s also where I coined a mantra, “No instructions!”, as a means of truly embracing a frictionless experience for employees using IT services. Why should they need instructions for our services when billions of people buy smartphones that don’t come with instructions? Yes, if someone wants to do something beyond the basics, they may require help and documentation, but that’s no longer basic support; it’s a corner case.

The results that ultimately ruled out AI chatbots came from my time at a private equity-backed company I joined in 2019, where I ultimately led IT Employee Productivity services, spanning a large portion of the IT infrastructure organization. Within weeks of joining, I discovered our L1, L2, and L3 teams were overwhelmed by support tickets. I remember asking my most senior O365 engineer/architect how he spent his time. His reply shocked me: 80% of his time was spent on L1 and L2 tickets! Coupled with multiple major in-flight initiatives, this was not a tenable scenario.

As a result, I realized the situation required two simultaneous moves, jumping from reactive firefighting to a strategic, product-aligned model in one coordinated shift. The first involved aligning teams to product/service ownership, giving them A–Z accountability as described by the RACI model. The second was getting the right people doing the right work by optimizing the support of our services. For the sake of this article, I will focus on the latter: a shift-left initiative that, in essence, moves tickets from L3 to L2 and from L2 to L1.


Join our live discussion – CIO Essential Insights: IT Leadership in the Age of AI

How AI is changing the CIO role

We will discuss how the advent of AI is changing the CIO role, the conversations CIOs are having with line of business, the CEO, and the board, and the big challenges CIOs are facing right now. How do CIOs need to change their leadership styles to meet those challenges? (See also: Rethinking and realigning IT for the AI era.)

The CIO as a business leader

We will share research that shows how LOB leaders view their CIO colleagues. Respondents were asked which persona best describes their own CIO. Options included: the risk assessor, primarily focused on issues of risk; the consultant, who provides advice and guidance to the business when asked; the voice of reason, who highlights issues and challenges when considering new technology and initiatives; and the business leader, who proactively brings ideas and solutions to move the business forward. More than half of the LOB execs surveyed said their CIOs are in the business leader category. Is that good news or bad news? With the CIO role being decades old at this point, should all LOB execs see the CIO as a business leader? We will discuss what makes a CIO a leader of the business.


Episode 2: Slalom

Overview

Over the past few years, much of the conversation around AI and generative AI has focused on productivity, which is understandable considering the potential of these tools to accelerate workflows and automate defined tasks. However, that’s far from the full story. In episode 2 of The Art of the Possible podcast, Jennie Wong, Ph.D., Global Industry Director for Education at Slalom, and Patrick Frontiera, Higher Education Strategy Leader of IT and Campus Operations at AWS, explore an exciting, less-discussed use case: hyper-personalization.


Episode 1: Caylent

Overview

Generative AI has the potential to redefine productivity, create novel applications, and reinvent the customer experience. But without a strategic approach, you could not only miss out on the promise of this powerful tool but also drain time, energy, and resources away from other mission-critical initiatives across your organization. To that end, Kristen Backeberg, Director of Global ISV Partner Marketing at AWS, and Val Henderson, President and CRO at Caylent, recently sat down to discuss perhaps the most important consideration around adoption: how to tailor your generative AI strategy around clear goals that can drive your organization forward.


How to win allies and influence boards

Karen Higgins-Carter, a former CIO at Gilbane and Webster Bank, among others, also sits on two boards. A common mistake she sees is CIOs applying AI reactively, and as a means to independently shape company strategy. “But success in the age of AI will come from leading adaptive change across the organization,” she says. “The most effective CIOs are those who partner broadly and visibly, showing their boards that their strengths lie not just in technical expertise, but in orchestrating enterprise-wide transformations.”

Running a marathon

Steve Randich, CIO of FINRA, has been presenting to boards since he first became a CIO in 1996. A lot has changed since then, of course. “In particular, board members have become much more engaged, critical, and demanding,” Randich says. “And, from my perspective, more challenging.” FINRA has long been lauded as a pioneer in public-cloud adoption, but the organization’s board needed a lot of convincing.


The AI disruption: From global business to your breakfast table

Artificial intelligence is no longer just a boardroom buzzword. It is fundamentally transforming global business, workforce dynamics, and regulatory landscapes, and its ripple effects now reach the breakfast table. AI’s impact is no longer confined to executives or technologists; it touches every consumer, worker, and policymaker.

As a data professional, I see firsthand how challenging it is to keep pace with these changes. AI’s effects are far-reaching, spanning employment, education, wealth, health, and daily living. Yet understanding the full scope of these impacts is daunting. In an age when information is curated by algorithms, the true breadth of AI’s influence is often obscured.

Consider this: the rising cost of eggs in the U.S. is, in part, an AI story. While this may seem far-fetched, the connection is real, and understanding it is crucial for business leaders as you prioritize AI investments and validate which ones will deliver the most positive outcomes for your business, such as profitability, sustainability, consumer impact, and efficiency.
