CIO

A blueprint for scaling 1:1 personalization in modern retail

In A scalable framework for digital transformation in retail, I outlined three pillars for growth: mobile, personalization and experiential commerce. Of these, personalization is often the most difficult to execute at scale, yet it’s also the most powerful driver of long-term customer loyalty and brand differentiation. Too often, personalization is defined narrowly as serving targeted messages to pre-defined customer groups. While segmented personalization improves on one-size-fits-all marketing, it still treats customers as part of a crowd. Another common misconception is equating personalization with product customization, such as allowing customers to choose colors, engravings, or product configurations. While customization can enhance engagement, it requires the customer to take the initiative. Personalization, on the other hand, adapts the experience automatically based on who the customer is and what they need in the moment. True 1:1 personalization goes further: dynamically adapting every touchpoint to a customer’s unique real-time signals, from browsing behavior to micro-interactions, creating a journey that is theirs alone. For me, personalization is the ultimate act of customer obsession, a way to make every interaction feel personal, intentional and aligned with the brand’s values. source
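To make the contrast with segment-based targeting concrete, here is a minimal, hypothetical sketch of 1:1 ranking: instead of serving a segment's campaign to everyone in a cohort, each content block is scored against one visitor's live signals. All signal names, content blocks, and weights below are invented for illustration, not taken from any retailer's system:

```python
# Hypothetical real-time signals for one visitor; a production system
# would stream these from clickstream and session events.
signals = {
    "viewed_category": "running-shoes",
    "sessions_this_week": 3,
    "cart_abandoned": True,
}

# Candidate content blocks the storefront could show right now.
candidates = [
    {"id": "hero-sale",     "category": "running-shoes", "boost_on_abandon": True},
    {"id": "new-arrivals",  "category": "outerwear",     "boost_on_abandon": False},
    {"id": "loyalty-offer", "category": "running-shoes", "boost_on_abandon": False},
]

def score(block: dict, signals: dict) -> float:
    """Score one content block against this visitor's live signals."""
    s = 0.0
    if block["category"] == signals["viewed_category"]:
        s += 2.0                                 # match live browsing intent
    if signals["cart_abandoned"] and block["boost_on_abandon"]:
        s += 3.0                                 # try to recover the abandoned cart
    s += 0.1 * signals["sessions_this_week"]     # mild engagement-frequency signal
    return s

best = max(candidates, key=lambda b: score(b, signals))
print(best["id"])  # the hero block for *this* visitor, right now
```

The point of the sketch is that the ranking re-runs on every interaction: change one signal (say, the abandoned cart is recovered) and a different block wins, with no customer-initiated customization required.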


5 critical questions every organization should ask before selecting an AI-Security Posture Management solution

In the era of rapidly advancing artificial intelligence (AI) and cloud technologies, organizations are increasingly implementing security measures to protect sensitive data and ensure regulatory compliance. Among these measures, AI Security Posture Management (AI-SPM) solutions have gained traction to secure AI pipelines, sensitive data assets, and the overall AI ecosystem. These solutions help organizations identify risks, control security policies, and protect data and algorithms critical to their operations. However, not all AI-SPM tools are created equal. When evaluating potential solutions, organizations often struggle to pinpoint which questions to ask to make an informed decision. To help you navigate this complex space, here are five critical questions every organization should ask when selecting an AI-SPM solution:

#1: Does the solution offer comprehensive visibility and control over AI and associated data risk?

With the proliferation of AI models across enterprises, maintaining visibility and control over AI models, datasets, and infrastructure is essential to mitigate risks related to compliance, unauthorized use, and data exposure. This ensures a clear understanding of what needs to be protected; any gaps in visibility or control can leave organizations exposed to security breaches or compliance violations. An AI-SPM solution must be capable of seamless AI model discovery, creating a centralized inventory for complete visibility into deployed models and associated resources. This helps organizations monitor model usage, ensure policy compliance, and proactively address potential security vulnerabilities. By maintaining a detailed overview of models across environments, businesses can mitigate risks, protect sensitive data, and optimize AI operations.

#2: Can the solution identify and remediate AI-specific risks in the context of enterprise data?

The integration of AI into business processes introduces new, unique security challenges beyond traditional IT systems. For example: Are your AI models vulnerable to adversarial attacks and exposure? Are AI training datasets sufficiently anonymized to prevent leakage of personal or proprietary information? Are you monitoring for bias or tampering in predictive models? An effective AI-SPM solution must tackle risks that are specific to AI systems. For instance, it should protect training data used in machine learning workflows, ensure that datasets remain compliant under privacy regulations, and identify anomalies or malicious activities that might compromise AI model integrity. Make sure to ask whether the solution includes built-in features to secure every stage of your AI lifecycle, from data ingestion to deployment.

#3: Does the solution align with regulatory compliance requirements?

Regulatory compliance is a top concern for businesses worldwide, given the growing complexity of data protection laws and frameworks such as the General Data Protection Regulation (GDPR), the NIST AI Risk Management Framework, the Health Insurance Portability and Accountability Act (HIPAA), and more. AI systems magnify this challenge by rapidly processing sensitive data in ways that can increase the risk of accidental breaches or non-compliance. When evaluating an AI-SPM solution, ensure that it automatically maps your data and AI workflows to governance and compliance requirements. It should be capable of detecting non-compliant data and providing robust reporting features to enable audit readiness. Additionally, features like automated policy enforcement and real-time compliance monitoring are critical to keeping up with regulatory changes and preventing hefty fines or reputational damage.

#4: How well does the solution scale in dynamic cloud-native and multi-cloud architectures?

Modern cloud-native infrastructures are dynamic, with workloads scaling up or down depending on demand. In multi-cloud environments, this flexibility brings a challenge: maintaining consistent security policies across different providers (e.g., AWS, Azure, Google Cloud) and services. Adding AI and ML tools to the mix introduces even more variability. An AI-SPM solution needs to be designed for scalability. Ask whether the solution can handle dynamic environments, continuously adapt to changes in your AI pipelines, and manage security in distributed cloud infrastructures. The best tools offer centralized policy management while ensuring that each asset, regardless of its location or state, adheres to your organization's security requirements.

#5: Will the solution integrate with our existing security tools and workflows?

A common mistake organizations make when adopting new technologies is failing to consider how well those technologies will integrate with their existing systems. AI-SPM is no exception. Without seamless integration, organizations may face operational disruptions, data silos, or gaps in their security posture. Before selecting an AI-SPM solution, verify whether it integrates with your existing data security tools such as DSPM or DLP, identity governance platforms, or DevOps toolchains. Equally important is the solution's ability to integrate with AI/ML platforms like Amazon Bedrock or Azure AI. Strong integration ensures consistency and allows your security, DevOps, and AI teams to collaborate effectively.

Key takeaway: Make AI security proactive, not reactive

Remember, AI-SPM is not just about protecting data; it's about safeguarding the future of your business. As AI continues to reshape industries, having the proper tools and technologies in place will empower organizations to innovate confidently while staying ahead of emerging threats. Learn how Zscaler can help address AI and data security with a comprehensive AI-powered DSPM solution. Schedule a custom 1:1 demo today. source
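The centralized inventory and policy checks described under question #1 can be illustrated with a toy sketch. The asset fields, policy rules, and approved-cloud list below are hypothetical, not features of any particular AI-SPM product:

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One entry in a hypothetical centralized AI-asset inventory."""
    name: str
    cloud: str                               # e.g. "aws", "azure", "gcp"
    training_data_tags: set = field(default_factory=set)
    encrypted: bool = False

# Hypothetical policy: models trained on PII must be encrypted,
# and assets may only run in approved clouds.
APPROVED_CLOUDS = {"aws", "azure"}

def policy_violations(asset: AIAsset) -> list[str]:
    """Return human-readable policy violations for one asset."""
    issues = []
    if "pii" in asset.training_data_tags and not asset.encrypted:
        issues.append("PII training data without encryption")
    if asset.cloud not in APPROVED_CLOUDS:
        issues.append(f"unapproved cloud: {asset.cloud}")
    return issues

inventory = [
    AIAsset("churn-model", "aws", {"pii"}, encrypted=True),
    AIAsset("demo-llm", "gcp", {"public"}),
]
for asset in inventory:
    print(asset.name, policy_violations(asset))
```

Real AI-SPM products discover assets automatically and enforce far richer policies; the sketch only shows the shape of the inventory-plus-policy loop the article describes.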


Why actionable observability is the new competitive edge

In today’s digital-first, cloud-centric world, observability is not limited to collecting metrics, logs and traces; those insights must become actionable, directly driving business results. Actionable observability empowers organizations to quickly detect and resolve issues, optimize user experiences and align IT operations with broader business goals. In this blog, we will explore how to make observability actionable for business success, using real-world examples and concrete numbers.

What does it mean to make observability actionable? Traditional monitoring often answers “what happened” but not “why” or “how it impacts the business.” Actionable observability means: Connecting technical data to business objectives such as revenue growth, user retention and operational efficiency. Real-time insights that enable fast, well-informed decision-making. Prioritizing issues based on business impact. Facilitating cross-functional collaboration, where IT, product and business teams share an integrated understanding. Continuous improvement by closing the feedback loop from data to action.

3 key pillars of actionable observability: Metrics: Quantitative data like latency, error rates and request volume. Logs: Detailed event records to diagnose and audit. Traces: End-to-end request flows that reveal bottlenecks and failures. This observability data is then analyzed with AI/ML to generate alerts, predict outages and surface root causes relevant to business KPIs. source
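The three pillars can be made concrete with a toy example: one request emits a metric, a log line, and a trace span that share a trace ID, so that when an alert fires, the related logs and spans can be pulled together. All field names here are illustrative, not any vendor's schema:

```python
import time
import uuid

def new_trace_id() -> str:
    return uuid.uuid4().hex

def handle_request(path: str, telemetry: dict) -> str:
    """Handle one request, emitting all three signal types linked by a trace_id."""
    trace_id = new_trace_id()
    start = time.perf_counter()
    # ... application work would happen here ...
    latency_ms = (time.perf_counter() - start) * 1000
    telemetry["metrics"].append({"name": "request_latency_ms", "value": latency_ms, "path": path})
    telemetry["logs"].append({"trace_id": trace_id, "level": "INFO", "msg": f"handled {path}"})
    telemetry["traces"].append({"trace_id": trace_id, "span": "handle_request", "path": path})
    return trace_id

telemetry = {"metrics": [], "logs": [], "traces": []}
tid = handle_request("/checkout", telemetry)

# Correlation step: given the trace that paged you, pull every related log line.
related_logs = [log for log in telemetry["logs"] if log["trace_id"] == tid]
print(related_logs)
```

The shared `trace_id` is what turns three separate data streams into one answerable question ("why was /checkout slow?"), which is the precondition for the AI/ML analysis the article describes.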


Coinbase CEO: Developers who don’t use AI will be fired

He also revealed that AI adoption is expanding beyond engineering to include design, planning, and finance departments, and that AI is even contributing as a participant in internal decision-making processes. Armstrong stated, “We’re testing the limits of it — when can it actually start to be the decision-maker on some things and do better than humans,” implying that AI is moving beyond being an assistant to eventually transform the very way businesses operate.

John Collison: What are other ways in which Coinbase is crypto-pilled, AI-pilled, works differently from a company founded 10 or 20 years prior?

Brian Armstrong: Like a lot of companies, we’re leaning as hard as we can into AI.

Collison: What does that mean, concretely?

Armstrong: We’re doing a lot of the best practices. We made a big push to get every engineer on Cursor and Copilot. Then the next question was, ‘Well, are they actually going to use it?’ Because a lot of them onboarded when I—

Collison: Was it you or Tobi [Lütke, member of Coinbase board] who mandated it?

Armstrong: I mandated it. Yeah.

Collison: You required people to have a call with you who—

Armstrong: That’s true. I did do that.

Collison: You required people to justify to you, the CEO, if they weren’t using AI code.

Armstrong: That’s true. Originally—

Collison: Sorry, maybe I’m not meant to tell that story?

Armstrong: No, I don’t mind. It’s actually a good story. Originally they were coming back and saying, ‘All right, over the next quarter, two quarters, we’re going to get to 50% adoption.’ I said, ‘You’re telling me— why can’t every engineer just onboard by the end of the week?’ So I kind of went rogue. I posted in the all-in Slack channel.

Collison: Just a light dusting of founder mode.

Armstrong: I said, ‘AI’s important. We need you all to learn it and at least onboard it. You don’t have to use it every day yet until we do some training, but at least onboard by the end of the week. And if not, I’m hosting a meeting on Saturday with everybody who hasn’t done it, and I’d like to meet with you to understand why.’ Now, a few people were on vacation. There was a list of— Anyway, I jumped on this call on Saturday and there were a couple people that had not done it. Some of them had a good reason because they were just getting back from some trip or something, and some of them didn’t, and they got fired.

Collison: Wow.

Armstrong: Some people really didn’t like it, by the way, that heavy-handed approach, but I think it did set some clarity at least that we need to lean into this and learn about it.

Collison: What’s your experience of AI coding been so far? It’s clear that it is very helpful to have AI helping you write code. It’s not clear how you run an AI-coded code base and what the best way to do it is.

Armstrong: I agree. I think we’re still figuring that out too. One thing we started doing is every month we host what we call an AI Speed Run, where one of the engineers volunteers that month to run a training for how they’re using it. We try to cherry-pick the people, the teams that are doing it the best. We’re doing about 33% of code written by AI now. We have a goal to get to 50% by the end of the quarter. Let’s see if we get there. You probably can go too far with it. You don’t want people vibe coding these systems moving money. We’ve really encouraged people to really — you have to code review it and have the appropriate checks in place on that with humans in the loop. But some of the front-end piece, etc., you can iterate faster. We want to make sure it’s used not just in the engineering teams. It really should be any team. Design is using it heavily. Product managers. I think FP&A could even be using this as, ‘Ingest all the data and tell me what you forecast the revenue to be.’ We’re getting to a world where, even as CEO, by the way, I use it a lot. We use a decision-making process called RAPIDS and everyone writes their input. We have a row now for AI that writes its input in as one of the people that help make decisions. We’re testing the limits of it — when can it actually start to be the decision-maker on some things and do better than humans.
source


Why are tech firms offering the feds such deep discounts?

Deals may also help ‘non-government’ firms

He added that ServiceNow also serves about 85% of the Fortune 500, but would not confirm whether enterprise customers get the same levels of discount. “We work individually with each new customer, as well as our existing customers at the time of renewal, on the most effective pricing structure for their needs,” he said. Adam Mansfield, who leads Microsoft and ServiceNow advisory practices at consulting firm UpperEdge, said both organizations are “using aggressive discounts to remove friction in sales cycles, win government market share, and accelerate AI adoption in a highly visible sector. For non-government organizations, this signals that both vendors are willing to trade margin for strategic positioning, and buyers can use that as leverage in negotiations.” But especially with ServiceNow, he said, “the consumption-based licensing tied to AI poses hidden risks for customers. Once use commences, usage often exceeds thresholds and limits, driving spend higher than expected. Enterprises must properly negotiate strong guardrails up front to avoid the cost overruns that these vendors are banking on.” source


ITSM buyer’s guide: Top 21 IT service management tools

EasyVista

The chores of discovering, tracking, and monitoring a big enterprise of people and machines are unified under the EasyVista approach. The goal is to integrate ITSM with IT operations management, because many challenges straddle the line between the two. The EV Discovery tool, for instance, will map out a network so that any trouble tickets are connected to a central reference. In some cases, AI and predictive analytics offer users projections about how and when a problem might be fixed. Integrations and codeless configuration can speed adoption.

Freshworks Freshservice

The goal for Freshservice is to help each team “deliver delight” to users, according to Freshworks. The ticket-based system is one part of a larger group of tools for managing IT operations and assets. The Freshservice ticketing system is designed to be “omnichannel,” which means tickets can be created and handled via phone, email, text, or other messaging platforms. It’s integrated with various discussion boards (Slack, Teams, etc.) so a problem can be discussed, assigned, or perhaps even deflected to a standard set of documentation. The tool’s AI engine, known as “Freddy,” can help automate workflows and speed resolution by answering some questions, raising alerts for some tickets and guiding everyone to the right resources.

HaloITSM

The focus of HaloITSM is organizing the assets under the control of IT and tracking each problem as it evolves. Artificial intelligence plays an important role in organizing the department’s knowledge and using it to orchestrate problem-solving. In the best cases, users can fix their own issues through the self-service portal. Customizable workflows ensure the AI can generate good recommendations and help the IT team close tickets faster. Halo also markets the same platform to many other industries, such as education, healthcare, and financial services, so you can expect the foundation to be very general. source


Epicor adds AI agent to automate RFQs, speed supplier communications

ERP software provider Epicor has added a new AI agent — Epicor Prism Business Communications — to its Prism generative AI service, designed to help enterprises’ supply chain divisions automate request for quote (RFQ) workflows and accelerate supplier communications. Prism, released last year, is a network of vertical AI agents built specifically for the supply chain industries and integrated inside the company’s Industry ERP Cloud. The new AI agent, according to Epicor, will automate the RFQ process by reading supplier emails, pulling out pricing, lead times, and part details, and then pushing that data to a user to make more informed decisions via a conversational interface. source
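The extraction step described above, pulling pricing, lead times, and part details out of a supplier email, can be sketched in miniature. Epicor's agent presumably uses generative AI for this; the regex version below is only a toy illustration of the same input/output shape, and the email fields and field names are invented:

```python
import re

# Hypothetical supplier reply to an RFQ.
email_body = """
Part: BRKT-1042
Unit price: $12.40
Lead time: 15 business days
"""

def extract_quote(text: str) -> dict:
    """Pull structured quote fields out of a free-text supplier email."""
    patterns = {
        "part": r"Part:\s*(\S+)",
        "unit_price_usd": r"Unit price:\s*\$([\d.]+)",
        "lead_time_days": r"Lead time:\s*(\d+)",
    }
    quote = {}
    for field_name, pattern in patterns.items():
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            quote[field_name] = match.group(1)
    return quote

print(extract_quote(email_body))
# {'part': 'BRKT-1042', 'unit_price_usd': '12.40', 'lead_time_days': '15'}
```

The value of the agent, as Epicor describes it, is turning that extracted structure into a comparison a buyer can act on through a conversational interface, rather than the extraction itself.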


How LogicMonitor uses AI to eliminate alert fatigue and streamline IT monitoring

Keith: I’m assuming green is good? David: Yes—green is good, yellow is caution, and red means something needs attention. We have a “group by” feature that allows dynamic grouping—by provider, resource type, and more. It updates the display in real time. When we click on a resource, we see its current state, alert history, and relevant metadata. Sometimes, understanding all the underlying infrastructure isn’t necessary. In service-based architectures, you may just need to know whether a key component is down. We help surface that at the right level. Let’s look at the application view. This breaks down all our apps—HA Proxy, time-series databases, etc. If I’m paged about an issue with Zookeeper, I can drill down to see which nodes are healthy and which ones are in an error state. We also show trend projections—what we expect to happen based on historical data. You can compare 24-hour and 7-day views to assess whether it’s a one-off issue or part of a larger pattern. You can then analyze whether the problem is localized or if other resources are impacted. Our forensic session feature simplifies log analysis. It highlights important log keywords—so you don’t need to manually search or build complex queries. If Zookeeper has no leader, we’ll highlight that in red so it’s immediately visible. From there, you can build dashboards for whatever matters—like AI workloads. We show GPU utilization, LLM input/output token metrics, vector database requests—all in one place. So you don’t need separate tools for each domain. Finally, let’s talk about Edwin AI. Edwin AI focuses on event intelligence and generative AI assistance. Remember all those alerts from earlier? Edwin correlates them into a single, actionable insight. We might take three seemingly separate alerts and merge them into one incident, say, on a virtual machine in Azure. We’ll show you the insight, when it was triggered, the underlying alert types (SNMP, uptime, web check, ping loss), and how they’re connected. 
We even offer GenAI-generated summaries—human-readable descriptions of the issue, potential root causes, and recommended remediation. We’re also working on AI agent functionality—like a chat assistant that lets you ask follow-up questions. You can say, “Tell me more about this log,” or “Explain this metric,” and the assistant will help you troubleshoot faster. source
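The correlation step described above, merging several seemingly separate alerts on one resource into a single incident, can be sketched with a simple rule: group alerts that share a resource and fire within a short time window. The alert data, window size, and grouping rule are all illustrative assumptions, not how Edwin AI actually works internally:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical raw alerts (ts = seconds since some epoch).
alerts = [
    {"resource": "vm-azure-01", "type": "ping loss", "ts": 100},
    {"resource": "vm-azure-01", "type": "snmp down", "ts": 105},
    {"resource": "vm-azure-01", "type": "web check", "ts": 110},
    {"resource": "db-aws-02",   "type": "high cpu",  "ts": 500},
]

WINDOW = 60  # seconds: alerts on the same resource within this gap merge

def correlate(alerts: list) -> list:
    """Merge alerts on the same resource that fire close together into incidents."""
    incidents = []
    ordered = sorted(alerts, key=itemgetter("resource", "ts"))
    for resource, group in groupby(ordered, key=itemgetter("resource")):
        current = None
        for alert in group:
            if current and alert["ts"] - current["last_ts"] <= WINDOW:
                current["alerts"].append(alert["type"])   # extend the open incident
                current["last_ts"] = alert["ts"]
            else:
                current = {"resource": resource, "alerts": [alert["type"]], "last_ts": alert["ts"]}
                incidents.append(current)                 # open a new incident
    return incidents

for incident in correlate(alerts):
    print(incident["resource"], incident["alerts"])
```

Here the three Azure VM alerts collapse into one incident while the database alert stays separate, which is exactly the "three alerts, one actionable insight" behavior the demo describes; production systems add topology and ML-based correlation on top of time and resource proximity.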


Effective risk reporting to the board: Bridging technology and business

Nearly a decade ago, I tried and failed to convince the board of a company in midwestern Ohio of the need to invest in new threat intelligence tools, despite evidence of data egressing from the network to a likely state-sponsored attacker. Like many security leaders, I was not speaking the same language as the board. Every day, mission-critical projects are halted and new investments are vetoed because directors are not properly briefed on cyber risk or the massive costs of inaction. One of the key challenges in risk management is converting relevant technological risk data into a format and language commonly understood by the business.

The challenge of translating technological risk

The disconnect often stems from differing languages and priorities. Technical teams typically focus on vulnerabilities, threat vectors, and system failures, while boards are concerned with risk and enterprise-wide impacts, such as financial losses, reputational damage, or regulatory non-compliance. Bridging this gap requires translating technology risks into business terms in the context of strategic goals. Effective risk reporting to the board presents risk data in a concise, non-technical format and prioritizes exposures based on their potential impact on those strategic objectives, such as revenue, customer trust, or compliance. Critically, the reporting delivers measurable insights that enable informed decision-making, such as resource allocation or strategic adjustments. Understanding the structure and composition of the board, its place in the organization, its regulation, and the terminology it uses allows us to map our requests to their expectations more effectively.

Key risk elements of reporting

There are five key elements of a board-level report: Guiding elements: Includes a level-setting on the current risk appetite of the organization, designed to garner agreement on the expected state and identify major inhibitors to achieving it.
Threats: Who is targeting the organization, and what are their capabilities? This should set out that capable and determined adversaries threaten the board’s strategic objectives. Assets: Define the most prized assets—the crown jewels—and tie those to the board’s objectives. Risk mapping: Use a framework such as Basel II to map material risks to strategic objectives. The board frequently adheres to Basel standards and will be familiar with the process. The ask: Set out which resources are required and why. One option is to use a Loss Exceedance Curve—a graph that shows the probability of financial losses exceeding specific amounts, which helps organizations prioritize and quantify risks.

A framework for risk management

With a clear understanding of effective risk reporting principles, organizations can use the Three Lines of Defense framework to structure their assessment processes systematically. The model is well-suited for systematically identifying, assessing, and reporting risk across the organization: First line of defense: Operational teams responsible for identifying and managing risks in day-to-day activities. Second line of defense: Risk management and compliance functions that provide oversight and ensure adherence to policies. Third line of defense: Internal audits that independently assess the effectiveness of risk management processes. In addition to assessment and reporting, the Three Lines of Defense framework also sets clear accountability at each level.

Recontextualizing threats for board communication

If there are no threats to exploit vulnerabilities, then the risk associated with those vulnerabilities is negligible, and the board will be unlikely to fund risk management initiatives. Collection and analysis of data on current and emerging threats is necessary. Threat actor types: The unique nature of your foe—which may change over time—should be shared in non-technical terms.
State-sponsored hackers and serious criminals pose a greater threat, requiring a more immediate response than, say, hacktivists. Threat frequency: Industry-specific research showing how frequently attacks occur and the most likely attack types. Consider how frequently an attacker comes in contact with key assets, especially in the case of an insider threat. Threat capability: Based on threat intelligence, what is the capacity of attackers to negatively impact the board’s strategic objectives? Example losses: Understanding how peers are defending against the same or similar threats can be an important benchmark, especially when attacks result in financial losses. Crown jewels: For the sake of brevity and impact, include the data crown jewels the cyber team is trying to defend alongside the threats.

Advanced reporting techniques

To enhance the precision of risk reporting, organizations can adopt advanced methodologies like Basel II and Monte Carlo simulations. These approaches provide a structured way to quantify risks, tie their potential impact directly to strategic business outcomes, and make credible requests for resources. Basel II is a framework for measuring and managing business risks, particularly in financial institutions. It works like a filing cabinet that categorizes similar risks into business-aligned ‘drawers’. Under Basel II, mapping cyber risk to strategic objectives may look something like this: Strategic business objective – “Increase the number of customers using two or more products to 40% by FY27.” Risk objective #1 – External fraud. Risk objective #2 – Systems security. Risk objective #3 – Credential stuffing (threat) with no lockout policy (exposure) on an EHR server (asset). Monte Carlo simulations reveal the ‘most likely cost of inaction’ by modeling many possible scenarios through repeated random sampling, providing a probabilistic view of potential outcomes.
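The Monte Carlo approach and the Loss Exceedance Curve mentioned earlier can be sketched in a few lines. All inputs here (event probability, loss distribution and its parameters) are illustrative assumptions for demonstration, not figures from any real risk assessment:

```python
import random

random.seed(42)  # deterministic for reproducibility

# Hypothetical model: a material breach occurs with probability P_EVENT per
# year; if it occurs, the loss is lognormally distributed.
P_EVENT = 0.4
N_TRIALS = 100_000

def simulate_annual_loss() -> float:
    """One simulated year: either no loss, or a lognormally distributed loss."""
    if random.random() > P_EVENT:
        return 0.0
    return random.lognormvariate(mu=15.0, sigma=1.0)  # median loss ~ $3.3M

losses = [simulate_annual_loss() for _ in range(N_TRIALS)]

def prob_loss_exceeds(threshold: float) -> float:
    """One point on the Loss Exceedance Curve."""
    return sum(loss > threshold for loss in losses) / len(losses)

print(f"P(annual loss > $10M) = {prob_loss_exceeds(10_000_000):.2%}")
```

Evaluating `prob_loss_exceeds` over a range of thresholds traces out the full Loss Exceedance Curve, which is what turns "we have a vulnerability" into the kind of probabilistic dollar figure a board can weigh against the cost of mitigation.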
By combining these methodologies, organizations can present the board with data-driven insights that support strategic decision-making. For example, a Monte Carlo simulation might reveal that a specific vulnerability has a 30% chance of causing a $10 million loss, enabling the board to prioritize mitigation efforts. Together, Basel II and Monte Carlo simulations provide a structured, data-driven view of cybersecurity risks in terms that support strategic decisions.

Achieving more effective board communication

A detailed understanding of the board construct is essential for aligning risk reporting with board expectations. It can be helpful, for example, to understand if the board’s audit and risk committee is standing or ad-hoc, and which directors serve on it. The Enterprise Risk Management team should report data directly to this committee, which usually involves: Structured reporting: Use standardized formats, such as dashboards or executive summaries, to present key risk metrics. Contextual analysis: Frame risks in terms of their impact on strategic objectives, using language that resonates with the board’s standards, be


The partnership enabling Pegasystems to maximize open-source potential

Pega Infinity is Pegasystems’ low-code application development platform designed for workflow automation, customer engagement, AI decisioning, and RPA. But for it to work properly, it requires a wide range of open-source software and services. In the past, the company, based in Waltham, Massachusetts, embedded all of these services directly in Infinity. “But because everything was embedded inside the software, each time we wanted to scale up a specific service, we were essentially scaling up the entire platform,” explains Ramzi Souri, Pega VP of cloud technologies. To keep it all running, internal teams were constantly managing different open-source data infrastructure components, including Apache Cassandra, Apache Kafka, and OpenSearch, which meant they didn’t have time to focus their energy on moving the business forward. Recognizing this flawed strategy, the team decided to break everything up and offer independent services on the platform. “We had two options,” says Souri. “Either we needed to build everything ourselves and hire service teams and software development teams to focus purely on spinning up the different services, or we could look for a third-party vendor that met our security guidelines, operational style, and our deployment needs. We’re currently deployed on AWS and GCP, so this third party also had to make it possible for the service to run across multiple clouds and be able to offer the same level of service across all clouds to future-proof the solution.” source
