
Restrict, ignore, embrace: The shadow IT trilemma

You can choose to penalize those who do not follow the rules. You can choose to ignore the trodden paths. Or you can create additional crosswalks exactly where people have chosen to disregard the rules, since those routes prove to be the most comfortable and "user-tested." This trilemma is more commonplace than some might think. Now, let's apply it to the question of what to do with shadow IT, the options being:

Restrict (the traditional approach). Block tools that weren't authorized, enforce company-wide policies and monitor compliance. Short-term gains are nearly guaranteed, but so are long-term losses in trust and morale. In the end, the likelihood of workarounds emerging is as high as the Burj Khalifa. Imagine a dev company blocking access to Claude, citing potential code leaks. Developers might migrate to ChatGPT, Gemini or Copilot, or, even worse, start using their personal PCs. Again: paths of lesser resistance. Restrictions may make sense in government or military contexts, where the risk of a leak could have national consequences. But when a private company tries to apply those same restrictions, it becomes overkill. You lose agility for a hypothetical risk that might never even materialize.

Ignore (the passive approach). Turning a blind eye avoids conflict but compounds risk. Given how prevalent shadow IT solutions are, ignoring them completely runs a high risk of company or customer data ending up where it shouldn't, and the potential fallout would undoubtedly be more damaging than addressing the issue head-on. Ignore it, and you're ghosting your smartest people and the innovations in plain sight.

Embrace (the adaptive approach). Identify why tools gain traction, then integrate them safely. For instance, if a logistics company notices drivers using Waze instead of approved routing software, it can partner with Waze to develop a custom enterprise version with shipment-tracking features. Good for efficiency and good for morale. In fact, at Trevolution, teams are given the freedom to explore and choose their own AI agents; we don't have a centralized decision around what developers must use. Everyone is given the freedom to experiment and to test their own stack. Then we host workshops to cross-pollinate the best practices. From there, innovation happens during team meetings.

Building better pathways

Monitoring software can detect unsanctioned tools, and IT leaders can then evaluate their impact without necessarily sacrificing innovation. Zero Trust architecture also helps. Instead of banning external apps outright, you can simply limit their access to sensitive systems.

Essentially, I don't view shadow IT as a problem to solve; instead, it's a signal to interpret, which could (and should) serve as a wake-up call. To this day, many organizations rely on IT teams to find, research and test new IT tools that could become the company's standard. But what if solutions came from the bottom up, instead of the top-down norm? What if organizations rethought the tools they sanction based on what employees (i.e., actual users) find comfortable, easy to use and, at the end of the day, useful for their work and the output they produce? Listen to the feedback!
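To make the "limit rather than ban" idea above a bit more concrete, here is a minimal sketch of a scoped-access check; the app names, data classes and policy structure are hypothetical illustrations, not a reference to any particular Zero Trust product.

```python
# Hypothetical Zero Trust-style policy: unsanctioned apps are allowed,
# but only against non-sensitive data classes and with monitoring enabled.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    app: str            # e.g. "claude", "waze"
    data_class: str     # e.g. "public", "internal", "customer_pii"
    user: str

SANCTIONED_APPS = {"approved_routing", "corporate_llm"}
SENSITIVE_CLASSES = {"customer_pii", "source_code", "financials"}

def evaluate(request: AccessRequest) -> str:
    """Return 'allow', 'allow_with_monitoring', or 'deny'."""
    if request.app in SANCTIONED_APPS:
        return "allow"
    if request.data_class in SENSITIVE_CLASSES:
        return "deny"               # shadow tools never touch sensitive data
    return "allow_with_monitoring"  # embrace the tool, but keep visibility

print(evaluate(AccessRequest(app="waze", data_class="public", user="driver42")))
```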


Business survival requires an IT org that stands apart

Do the people who work with the IT capabilities you provide believe you are authentically committed to their success and well-being? Does the IT organization view complaints as gifts? Is the IT mindset: complain once, "Let me fix it"; complain twice, "Shame on me"; complain three times, "I should be replaced"?

History is full of stories of bad things happening when the people in charge lose sight of the interests of the communities being served. French King Louis XVI is a classic case in point: he lost his head to the guillotine because he didn't understand the plight of the common man. CIOs need to dismantle the components of stakeholder workplace and marketplace experiences and understand how everyday phenomena are impacted by IT.

Envisioning the future

One of the hardest things about the CIO job is that you must be five-nines perfect in the present and borderline prescient about the future. In a perfect world, your "today" IT would be better than, and different from, your competitors', and your vision of the future would be differentially more compelling and achievable.


Real-time data: The foundation for autonomous AI

Today's consumer fraud detection systems do more than just catch unusual spending sprees. Modern AI agents correlate transactions with real-time data, such as device fingerprints and geolocation patterns, to block fraud in milliseconds. Similar multi-agent systems are now revolutionizing manufacturing, health care, and other industries, with AI agents coordinating across functions to optimize operations in real time.

Building these agentic systems requires more than bolting real-time analytics onto batch processing systems. And as competition drives the move to AI-mediated business logic, organizations must treat their data operations like living organisms, where components continuously learn and adapt. When any link in this chain fails, the entire system can spiral into costly inefficiency or missed opportunities.

Architectural requirements

Real-time AI systems demand a constant flow of fresh data to power decision-making (and execution) capabilities. This calls for a shift from batch pipelines to streaming-first architectures: systems that treat data as a series of events. Such systems must simultaneously handle massive data ingestion while serving real-time queries – a fundamental shift from traditional batch-oriented systems that might update customer insights nightly or weekly. Many organizations adopt zero-ETL patterns using change data capture, and replace time-based orchestration with event-triggered workflows, enabling AI agents to initiate business processes as conditions change.

In this event-driven architecture, it's not just throughput that matters. Latency – the delay between when data is generated and when it drives a decision – can become a limiting factor. Lost time is lost money.

The traditional approach of maintaining separate systems for databases, streaming, and analytics creates bottlenecks that AI cannot afford. Modern platforms must unify these functions, treating data as a continuous flow of events rather than static tables. This enables AI agents to maintain context across operations, learn from live data streams, and initiate actions without waiting for data to move between siloed systems.

Reducing latency drives the need for specific architectural investments. CIOs building for real-time AI should prioritize several foundational technologies (listed below) that enable low-latency, agent-driven operations at scale.

Architectural capabilities that support low-latency, agentic AI systems:

Streaming data platform: continuously processes data as it is generated, enabling immediate response to business events.
Event-driven architecture: automatically triggers actions based on real-time signals, powering dynamic, automated decision flows.
Edge processing: runs AI or analytics close to where data is created, reducing lag in time-sensitive environments like retail or IoT.
Unified OLTP/OLAP system: combines transactions and analytics in one platform, eliminating delays from moving data between systems.
Real-time data sync (zero-ETL): detects and streams changes as they happen in source systems, keeping models and analytics fresh without traditional ETL pipelines.
Observability tools: monitor how data and AI systems are behaving in real time, ensuring reliability, trust, and fast troubleshooting.
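As a minimal illustration of the event-driven, zero-ETL pattern described above, the sketch below consumes change events from a stream and triggers an agent action as conditions change. The event schema, decision rule and `route_to_agent` helper are hypothetical placeholders, not a specific vendor API.

```python
# Sketch: event-triggered workflow over a change-data-capture (CDC) stream.
# The event schema, threshold, and agent hand-off are illustrative only.
import json
from typing import Dict, Iterator

def cdc_stream() -> Iterator[Dict]:
    """Stand-in for a CDC/stream consumer (e.g., reading change events
    from a log or message broker). Here we simply replay a few events."""
    events = [
        {"table": "transactions", "op": "insert",
         "row": {"account": "A-17", "amount": 9400, "country": "NZ"}},
        {"table": "transactions", "op": "insert",
         "row": {"account": "A-17", "amount": 120, "country": "DE"}},
    ]
    yield from events

def looks_risky(row: Dict) -> bool:
    """Toy decision rule standing in for a real-time model or AI agent."""
    return row["amount"] > 5000

def route_to_agent(event: Dict) -> None:
    """Placeholder for kicking off an agent workflow (block, review, notify)."""
    print("agent review triggered:", json.dumps(event))

# Event-driven loop: decisions fire as data arrives, not on a nightly batch.
for event in cdc_stream():
    if event["table"] == "transactions" and looks_risky(event["row"]):
        route_to_agent(event)
```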
Scaling and operationalizing real-time AI

Real-time AI projects often perform well in pilots, but complexity spikes when systems are exposed to the real world. Data inconsistencies, duplication issues, model drift, and coordination breakdowns commonly arise when pipelines operate independently. Agents can lose context without a shared, real-time view of the data, leading to conflicting or redundant actions.

Scaling isn't just a technical lift, either. Teams and systems alike need shared context and unified architecture to maintain performance and ensure agent-driven decisions remain trustworthy.

Many teams fall into the trap of treating real-time functionality as a dashboard upgrade. But we build real-time systems to drive action, not just surface insights. CIOs who focus only on data throughput risk missing broader challenges, such as addressing feedback loops, reworking business logic, and creating full-system observability. Without those, organizations might get faster data but not faster outcomes.

This organizational shift demands new ways of measuring success. Traditional metrics like database query performance or model accuracy, while still important, don't capture the health of a real-time AI system. Organizations must now track metrics like data freshness, inference latency, and model drift, measuring how quickly AI models degrade as real-world conditions change. These measures directly impact business outcomes: a stale model or millisecond delay can mean missed opportunities or lost customers.
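One way to make those health metrics tangible is a small tracking sketch like the one below; the metric names, formulas and thresholds are illustrative assumptions for a single use case, not an industry standard.

```python
# Sketch: tracking real-time AI health metrics (illustrative thresholds).
import time

def data_freshness_seconds(event_timestamp: float) -> float:
    """Age of the newest data behind a decision: now minus event time."""
    return time.time() - event_timestamp

def inference_latency_ms(start: float, end: float) -> float:
    """Delay between receiving an event and acting on it."""
    return (end - start) * 1000.0

def drift_score(live_accuracy: float, baseline_accuracy: float) -> float:
    """Crude drift proxy: how far live accuracy has fallen from the baseline."""
    return max(0.0, baseline_accuracy - live_accuracy)

# Illustrative alert rules a team might wire into observability tooling.
FRESHNESS_SLO_S = 5.0    # data older than 5 seconds is "stale" for this use case
LATENCY_SLO_MS = 200.0   # decisions should land within 200 milliseconds
DRIFT_ALERT = 0.05       # retrain when accuracy drops 5 points vs. baseline

def check(freshness_s: float, latency_ms: float, drift: float) -> list:
    alerts = []
    if freshness_s > FRESHNESS_SLO_S:
        alerts.append("stale data feeding decisions")
    if latency_ms > LATENCY_SLO_MS:
        alerts.append("inference latency above SLO")
    if drift > DRIFT_ALERT:
        alerts.append("model drift: schedule retraining")
    return alerts

print(check(freshness_s=7.2, latency_ms=150.0, drift=0.08))
```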
Governance and visibility

When AI systems must make autonomous decisions in milliseconds, however, traditional governance approaches can fall short. Real-time business fundamentals will hinge on live visibility into what data feeds which decisions and why — especially when multiple AI agents share context and learn from each other. This requires both real-time monitoring capabilities and explainable AI (XAI) tools that can trace decisions back to their underlying logic and data. These capabilities should be built in from the start, as they are fundamental platform features that are difficult to add later.

Furthermore, the decision quality of live AI agents will degrade unless data quality is constantly maintained. This requires specialists who understand both technical requirements and business impact. Keeping AI agents governable, explainable, and well-fed with reliable data at the speed of real-time operations is both a technical and an organizational challenge. But these fundamentals are core to adapting to the AI era.

Looking ahead

In the next 12-18 months, fueled by evolving technology and competitive pressure, businesses will transform their AI operations. Instead of single-purpose AI agents that merely react to events, we'll see networks of specialized AI agents working together. For example, in retail, inventory management agents will collaborate with pricing agents and marketing agents to optimize stock levels, adjust prices, and trigger promotions in real time. In financial services, risk assessment agents will work alongside market analysis agents and customer service agents to provide personalized investment advice while maintaining regulatory compliance.

The key to preparing for this evolution is laying strong, AI-ready foundations. Organizations should focus on event-driven systems that support AI agents and fast decision loops, leveraging emerging standards like the Model Context Protocol (MCP) for connecting agents to enterprise data and Agent2Agent (A2A) for enabling collaborative workflows.

While the technology landscape is moving quickly, success depends on getting the basics right: modular, flexible architectures, strong data foundations, and aligned teams ready to evolve with the technology. Enterprise platforms like Google Cloud's


Quantum machine learning (QML) is closer than you think: Why business leaders should start paying attention now

The enterprise technology landscape is witnessing a remarkable shift. While most discussions around quantum computing focus on distant breakthroughs and theoretical applications, a quiet revolution is happening at the intersection of quantum systems and machine learning. Quantum machine learning (QML) is transitioning from academic curiosity to a practical business tool, and the timeline for enterprise adoption may be shorter than many anticipate.

The quantum advantage: Beyond classical limitations

To appreciate how QML is evolving, and how it might reshape business technology, it is important to first understand how it differs from current forms of computing. Traditional computers process information in binary states, using ones and zeros. Quantum computers, however, operate on quantum bits (qubits) that can exist in multiple states simultaneously through a phenomenon called superposition. This fundamental difference enables quantum systems to process complex, interdependent variables at scales and speeds that classical machines simply cannot match.

While current quantum hardware still faces significant limitations — including error rates, decoherence, and the need for extreme cooling — consistent progress in quantum simulation and optimization is confirming the technology's transformative potential. The key insight is that quantum systems don't need to be perfect to be useful; they need to be better than classical alternatives for specific problem sets.

Why QML matters: Unlocking new performance frontiers

The rapid growth of AI has played a key role in unlocking the potential of QML because it has created a foundation for the technology to be integrated into existing models. QML represents a hybrid approach that combines quantum circuits with classical machine learning models to unlock performance improvements in targeted, data-intensive domains. This isn't about replacing classical AI wholesale; it's about identifying specific use cases where quantum advantages can be leveraged within existing enterprise AI workflows.

Early-stage experimentation across industries is already demonstrating measurable improvements:

Accelerated training: Complex models that typically require extensive computational resources can be trained more efficiently using quantum-enhanced algorithms, reducing both time-to-insight and energy consumption.
High-dimensional data handling: Quantum systems excel at processing datasets with many variables and sparse data points, scenarios where classical methods often struggle or require significant preprocessing.
Enhanced accuracy with limited data: QML can achieve greater model accuracy with smaller sample sizes, particularly valuable in regulated industries or specialized domains where data is scarce or expensive to obtain.

The timeline is shortening: From theory to practice

One of the most compelling aspects of QML is how well its inherently probabilistic nature aligns with modern generative AI and uncertainty modeling. Just as classical computing advanced despite early hardware imperfections, current-generation quantum systems are producing measurable results in narrow but high-value use cases. The progression mirrors the early days of cloud computing or AI: initial skepticism gave way to pilot projects, which demonstrated clear value in specific applications, ultimately leading to widespread enterprise adoption.
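To make the hybrid quantum-classical idea concrete, here is a toy sketch of a variational quantum circuit whose parameters are tuned by a classical optimizer, written with the open-source PennyLane library running on a simulator. The data point, circuit layout and cost function are made-up illustrations, not the approach of any company mentioned here.

```python
# Toy hybrid quantum-classical loop: a 2-qubit variational circuit
# evaluated on a simulator, with its weights tuned classically.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x, weights):
    # Encode a classical feature into qubit rotations (angle encoding).
    qml.RY(x, wires=0)
    qml.RY(x, wires=1)
    # Trainable layer: parameterized rotations plus entanglement.
    qml.RX(weights[0], wires=0)
    qml.RX(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def cost(weights):
    # Made-up objective: push the circuit output toward +1 for x = 0.3.
    return (circuit(0.3, weights) - 1.0) ** 2

weights = np.array([0.1, 0.2], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.4)
for _ in range(50):
    weights = opt.step(cost, weights)  # classical update of quantum parameters

print("trained weights:", weights, "output:", circuit(0.3, weights))
```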
Today's quantum systems may be imperfect, but they're becoming increasingly consistent in delivering advantages for well-defined problem sets.

What enterprises can do today: Practical entry points

Organizations don't need to wait for quantum hardware perfection to begin exploring value. Several practical entry points offer immediate opportunities for experimentation and learning:

Risk scenario simulation: Financial services and insurance companies can use quantum systems to simulate rare or complex risk scenarios that are computationally intensive for classical systems. This includes stress testing portfolios under extreme market conditions or modeling catastrophic insurance events.
Enhanced forecasting: Quantum-inspired sampling techniques can improve forecasting accuracy and sensitivity analysis, particularly for supply chain optimization, demand planning, and resource allocation.
Synthetic data generation: In heavily regulated industries or data-scarce environments, QML can generate high-quality synthetic datasets that preserve statistical properties while ensuring compliance with privacy regulations.
Anomaly detection: Quantum systems excel at identifying subtle patterns and anomalies in complex datasets, particularly valuable for fraud detection, cybersecurity, and quality control applications.
Specialized industry applications: Early adopters are finding success in claims forecasting, patient risk stratification, drug efficacy modeling, and portfolio optimization — areas where the quantum advantage directly translates to business value.

Building quantum readiness: Strategic considerations

For enterprise leaders considering QML adoption, the focus should be on building organizational readiness rather than waiting for perfect technology. This means investing in quantum literacy across technical teams, identifying use cases where quantum advantages align with business priorities, and developing partnerships with quantum computing providers and research institutions.

The talent dimension is particularly critical. Organizations that begin developing quantum expertise today will have significant advantages as the ecosystem matures, whether they pursue projects by training existing data scientists or recruiting quantum-aware talent. This isn't just about understanding quantum mechanics; it's about recognizing how quantum capabilities can be integrated into existing AI and data science workflows.

The enterprise imperative: Early movers' advantage

QML is no longer confined to research laboratories. It's becoming a tool with real strategic potential, offering competitive advantages for organizations willing to invest in early-stage experimentation. The companies that begin building quantum capabilities today — starting with awareness, progressing to experimentation, and developing internal expertise — will be best positioned to capitalize on the technology as it continues to mature.

The question isn't whether QML will impact enterprise AI, but rather when and how. Organizations that treat quantum computing as a distant future technology risk being left behind by competitors who recognize its emerging practical value. The time for quantum awareness and preparation is now. As we've learned from previous technology transitions, the companies that lead aren't always those with the most resources; they're the ones that recognize inflection points earliest and act decisively.
For QML, that inflection point is approaching faster than most expect.

Learn more about EXL's data and AI capabilities here.

Anand "Andy" Logani is executive vice president and chief digital and AI officer at EXL, a global data and AI company.


ServiceNow: Latest news and insights

IDC Link: ServiceNow 1Q24 results announce 'AI platform' era

May 6, 2024: ServiceNow's 1Q24 earnings call introduced the term "AI platform" as a positioning statement, notes IDC's Stephen Elliot, who sees compelling possibilities for the company and customers alike. "Early indications and discussions from Now Assist customers highlight positive cost savings and staff productivity," he writes.

Generative AI takes center stage in latest ServiceNow release

March 20, 2024: ServiceNow's latest platform release, dubbed Washington DC, moves the cloud-based IT management and operations software company sharply in the direction of generative AI, with new features designed to help companies working with that technology.

IDC Market Note: ServiceNow entwines partnerships with genAI development

March 13, 2024: IDC's Snow Tempest examines ServiceNow's latest announcements, which reveal an intertwined strategy of generative AI development and industry partnerships aimed at targeted industries and use cases. "These partnerships offer ServiceNow a strategic adoption opportunity for organizations considering work with generative AI."


4 things that make an AI strategy work in the short and long term

Across these companies, the common thread is practical implementation. Most AI gains came from embedding tools like Microsoft Copilot, GitHub Copilot, and OpenAI APIs into existing workflows. Aviad Almagor, VP of technology innovation at tech company Trimble, also notes that more than 90% of Trimble engineers use GitHub Copilot. The ROI, he says, is evident in shorter development cycles and reduced friction in HR and customer service. Moreover, Trimble has introduced AI into its transportation management system, where AI agents optimize freight procurement by dynamically matching shippers and carriers.

These examples show that value creation from AI doesn't require massive investment in bespoke platforms. Often, the best results come from building on proven, scalable technologies and integrating them thoughtfully into existing systems.

Build a culture that encourages AI fluency

Technology may be the essential element, but culture is the catalyst. Successful AI programs are supported by organizational habits that promote experimentation, internal visibility, and cross-functional collaboration. A culture of curiosity and iteration is just as critical as a strong technology stack.


The one decision that sets agentic AI leaders apart

In nearly every sport, just a few numbers define success. Football, tennis, cycling, golf, running, Formula 1—it's all the same. Score more goals, break serve, win the sprint, hit fewer shots, run faster, dominate the laps. When we talked with more than 2,000 executives from the world's largest companies—representing over $48 trillion in combined GDP—we discovered a single defining metric that separates winners in agentic AI from the rest.

Enterprises that prioritize sovereignty over their AI and data—treating it as mission-critical—are over 70% more likely to see exceptional ROI from agentic AI investments. That number isn't theoretical. It shows up consistently across regions as diverse as the U.S., Japan, the UAE, and Europe. And it's a daily occurrence, because agentic AI now touches every part of a company's interactions, 24 hours a day, seven days a week, 365 days a year.

Think about that impact. A 70% advantage at that scale is like driving three seconds faster than every car, every lap, in a 60-lap race. The compounding effect is impossible to ignore.

The most successful organizations in our 2025 study have already made the pivot. They've prioritized a sovereign AI and data platform approach to ensure compliance, maximize data utility, and scale GenAI across the enterprise. The surveyed group of C-suite and senior leaders (whose enterprises have 500+ employees) considered 400 variables, including beliefs, investments, ROI, agentic AI areas of investment, and more. And they're not dabbling. They're going deep with their commitments. Consider these four distinctions:

Breadth: The leaders have scaled agentic AI across an average of 11 business functions. The laggards? Just four.
Depth: Leaders embed AI into their operations 2.5x more deeply than others, moving faster from pilots to production.
Belief + urgency: These leaders don't just experiment; they act on the belief that sovereignty, data ownership, and open infrastructure are essential for AI success.
Returns: For every dollar invested, top performers see a 5:1 return, compared to 2:1 in the next tier.

It sounds obvious, but when we correlate the performance of every one of these organizations' agentic AI activities (across 15 agentic business functions) with their economic performance and the nature of their strategy and vision, we get this simple answer: believe in the mission criticality of sovereign AI and data, believe in and execute on the plan to be your own AI and data platform, then go mainstream in as many agentic AI areas as possible.

In fewer than 1,000 days, 90% of enterprises will face the real impact of data and AI gravity. The question is whether they'll have made the structural decisions to handle it or be overwhelmed by it. These early leaders show us that succeeding with agentic AI doesn't start at the code level. It starts with three core commitments:

Breaking data silos
Securing model access without compromise
Building a platform strategy that makes AI work for you, not the other way around

If you want to be in the leader "power zone" for GenAI, believe in and deliver on the necessary sovereignty of your AI and data, wherever, whenever, and however you need it.

The marriage of AI and data will be a $17 trillion economy by 2028. If that were a nation, it would rival the world's third-largest economy. Your enterprise will almost certainly participate through the development and deployment of intelligent, AI-infused applications. We're already seeing this in motion.
From autonomous systems in logistics to OSS/BSS platforms in telecoms and copilots in customer service, the next wave of enterprise investment is heading toward agentic, embedded AI. Just look at NVIDIA, Salesforce, and others who are reporting extraordinary economic returns from real-world, intelligent applications.

Here's what unites leaders:

Build for sovereignty, speed, and compliance: The most successful enterprises aren't experimenting at the edges—they're embedding AI into the core of their business, fast. They're treating sovereignty as non-negotiable, building their own AI and data platforms to ensure security, compliance, and control from day one. Those deeply committed to sovereignty as mission-critical are realizing 12.5x greater ROI than those still in pilot mode.
Empower teams through accessible AI development: Leading organizations are investing in AI factories with low-code and no-code environments, enabling subject matter experts across functions to build, test, and deploy use cases securely and efficiently.
Standardize on proven, open technology: 81% of these enterprise leaders told us that an open-source strategic data infrastructure is their future. Focus on solutions that offer enterprise-grade performance, governance, and cross-environment deployment capabilities on top of open architecture.
Make AI and data factories sustainable: What once required hundreds of millions in investments is now achievable at a fraction of the cost. With $4,000 NVIDIA deskside production units, organizations can stand up sovereign AI and data factories across departments.

Success starts with a simple, nontechnical decision: Create a mission-critical focus to become your own sovereign AI and data platform. Then, put agentic AI directly in the hands of those who need it most in your organization. It's the exact strategy that 70% of leading enterprises follow to win day in, day out, across the globe.

To learn more about EnterpriseDB's sovereign AI and data platform, visit here.


Reimagining the Spotify model for the human-AI enterprise

In a large SaaS enterprise, liquid workflows underpin the tech ops command centre. AI agents triage 90% of incoming tickets using past incident data, system telemetry and log analytics. High-complexity or ambiguous issues are flagged to human engineers with fully prepped case histories.

How it evolves the organization

The shift moves the engineering culture from reactive firefighting to resilience design. Issue resolution becomes a source of learning, not just closure. Teams spend more time fortifying architecture and less time swamped by alerts.

Persona interplay:

Site reliability engineers (SREs) are looped in only when AI confidence thresholds are exceeded.
Engineering managers use incident patterns to realign capacity and reduce tech debt.
AI orchestration agents proactively reroute workloads to minimize disruptions.

Result: MTTR is halved, engineering morale improves and uptime becomes a board-level strength. The system scales as the business grows without ballooning headcount: incident tickets are automatically triaged by AI based on severity and historical resolution, low-complexity tickets are auto-resolved or escalated to bots, and humans handle only the top 20% of edge cases.
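A minimal sketch of the confidence-gated triage described above; the threshold, ticket fields and helper functions are hypothetical illustrations, not the enterprise's actual implementation.

```python
# Sketch: confidence-gated ticket triage. The AI resolves what it is sure
# about and hands ambiguous cases to an engineer with prepared context.
CONFIDENCE_THRESHOLD = 0.85   # illustrative cut-off

def classify(ticket: dict) -> tuple:
    """Stand-in for an AI agent scoring a ticket against past incidents,
    telemetry and logs. Returns a proposed resolution and a confidence."""
    if "disk full" in ticket["summary"].lower():
        return "runbook: expand volume", 0.97
    return "needs investigation", 0.40

def triage(ticket: dict) -> str:
    resolution, confidence = classify(ticket)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-resolved: {resolution}"
    # Low confidence: escalate with the case history already assembled.
    return f"escalated to SRE with prepped context ({resolution})"

print(triage({"id": 1, "summary": "Disk full on prod-db-3"}))
print(triage({"id": 2, "summary": "Intermittent 502s after deploy"}))
```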
4. Agentic chapters & guilds: Co-learning networks

Why this matters

Upskilling has to evolve. In the Human-AI enterprise, learning isn't episodic — it's ambient. Guilds and chapters don't just grow people, they train AI agents alongside them in the flow of work.

How it might work in practice

As teams build patterns or frameworks, those assets are captured and reinforced through chapter-reviewed standards. AI copilots are trained on this evolving body of knowledge, continuously nudging users with the latest techniques and auto-flagging outdated practices.

What you actually get out of it

Your org becomes self-improving. New joiners get onboarded faster. Engineers stop writing legacy code. And AI agents start becoming smarter contributors, not just passive assistants.

Example use case: Software engineering guild

At a global fintech company, the backend chapter documents secure GraphQL API standards. These are turned into living guidelines inside AI copilots used by all engineers. The copilots don't just autocomplete — they enforce real-time compliance with chapter-reviewed standards.

How it evolves the organization

The organization builds living documentation embedded into the developer workflow. Knowledge becomes executable and shareable. Engineers get better, faster, and AI agents level up alongside them.

Persona interplay:

Chapter leads push validated practices into shared copilot models.
New engineers onboard in days — not weeks — guided by AI nudges.
Senior engineers contribute scalable mentorship via shared AI patterns.

Result: Code review time is cut by 40%, defect density drops and onboarding time is reduced by 60%. AI copilots evolve as the engineering body of knowledge expands: as the backend chapter curates best practices around GraphQL APIs, copilots trained on this input help new engineers generate compliant code in their IDEs and flag legacy patterns.

5. Embedded governance via agentic councils

Why this matters

As AI becomes pervasive, governance can't be reactive. Agentic councils bring compliance into the design layer — constantly auditing, alerting and guiding both human and AI behavior in real time.

How it might work in practice

Agentic councils blend human ethics leads, risk officers and real-time AI monitors that flag anomalies in decision logic, user fairness or policy alignment. They provide dashboards showing drift, override patterns and trust scores by agent.

What you actually get out of it

You reduce risk before it escalates. You operationalize trust. And you can confidently scale AI without triggering compliance bottlenecks.

Example use case: Financial underwriting

At a top-tier bank, loan approvals are streamlined through an embedded agentic council. AI agents provide risk scores and approvals, which are then reviewed against fairness dashboards. Human analysts are triggered only when demographic variance is detected or override patterns spike.

How it evolves the organization

The underwriting model becomes dynamic, explainable and governed in real time. Regulatory confidence soars while operational friction drops.

Persona interplay:

Compliance officers get real-time drift and override metrics.
AI owners see model health and retraining windows.
Business executives approve policies backed by traceable fairness logic.

Result: Loan approval timelines shrink by 25%, model bias is mitigated proactively and governance becomes a competitive differentiator in an increasingly AI-sceptical industry. In this setup, agentic governance reviews AI recommendations on loan approvals, flagging edge cases with demographic variance, and a dashboard alerts executives to retrain models monthly, enabling bias mitigation without regulatory intervention.

Spotify Model 2.0: Comparative summary (how each element shifts from Spotify 1.0 to the Human-AI enterprise)

Squads: human-only agile teams become composite teams (humans + AI agents).
Tribes: product-aligned team clusters become cognitive mesh tribes with shared AI memory.
Chapters & guilds: skills and learning communities become co-learning networks with AI agents.
Workflows: agile sprints and Kanban become AI-orchestrated liquid workflows.
Governance: retrospectives and human councils become embedded agentic governance with audits.

How enterprises can get started

Implementing the Spotify 2.0 model isn't about a big-bang rollout — it's about designing a controlled evolution. This is not a plug-and-play framework; it's a transformation journey that requires education, experimentation and continuous reinforcement.

Step 1: Start with one adaptive business unit

Identify a forward-leaning team or business unit with high digital maturity and readiness for experimentation. Use this group as your first composite squad — ideally one working on product innovation, digital experience or internal automation. Assign an AI capability lead and embed cross-functional roles including AI engineers, product owners and user champions.

Step 2: Educate and align

Before deploying agents, run executive and squad-level workshops to introduce the Human-AI partnership principles. Use hands-on demos of AI copilots (e.g., summarization, coding assist, orchestration AI) to ground the vision in something tangible. Establish a shared understanding of "what good looks like" and where judgment vs. automation applies.

Step 3: Prototype use cases and metrics

Select 2–3 specific test cases within the pilot squad. For each, define before/after metrics such as throughput, user satisfaction, SLA compliance or human-AI co-efficiency.

Step 4: Instrument for measurement and feedback

Deploy real-time instrumentation to track both the qualitative and quantitative impact of Human-AI collaboration.
This includes dashboards for:

Task ownership distribution between humans and AI agents
Override frequency and rationale
Time-to-decision or
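As a small sketch of Step 4's instrumentation, the snippet below tallies a few of those collaboration signals from hypothetical events; the field names and schema are illustrative, not a standard.

```python
# Sketch: counters a pilot squad might track for human-AI collaboration.
# Field names ("owner", "override") are illustrative, not a standard schema.
from collections import Counter

events = [
    {"task": "T-101", "owner": "ai_agent", "override": False, "decision_s": 4},
    {"task": "T-102", "owner": "human",    "override": True,  "decision_s": 260},
    {"task": "T-103", "owner": "ai_agent", "override": False, "decision_s": 6},
]

ownership = Counter(e["owner"] for e in events)          # who handled what
overrides = sum(e["override"] for e in events)           # human corrections
avg_decision_s = sum(e["decision_s"] for e in events) / len(events)

print("task ownership:", dict(ownership))
print("override rate:", overrides / len(events))
print("avg time-to-decision (s):", round(avg_decision_s, 1))
```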


CIOs see AI prompting new IT hiring, even as boards push for job cuts

While AI may enable some workforce reductions, DiLorenzo doesn't see huge AI-driven cuts in IT budgets and teams in the near future. Even as many organizations infuse AI into software development and tech support teams, most enterprise leaders DiLorenzo speaks with see a slowdown in hiring new developers and IT support staff, rather than layoffs for current staff, he says.

"The encouragement that I give is that people who know AI will replace those that don't," he adds. "Whether it's a functional or a technical job, it's an important skill set to have. If a software developer is not using AI development tools, it's going to be a challenge."

Job cuts already happening

But in the weeks leading up to the release of the Deloitte report, Meta, Salesforce, Microsoft, Dell, and Intel collectively announced more than 24,000 job cuts related to AI, and Amazon CEO Andy Jassy predicted future job losses at his own company — and across the work world — as AI creates efficiency gains. It's unclear how many of the layoffs were in IT departments, but about 800 of 2,000 jobs lost at Microsoft in May were software engineers.


Ron Insana on why CIOs can’t wait for certainty in an unpredictable economy

Faced with inflation, global unrest, and constant tech disruption, CIOs are under pressure to stay innovative while also spending wisely. After all, IT budgets are tighter, talent is harder to find and hold on to, and economic signals today don't always point in a clear direction. In these unpredictable times, CIOs need to think more like strategists than ever before.

But, according to financial analyst and author Ron Insana, tech leaders can't wait around for perfect clarity. They need to read early economic warnings and lead with confidence, even when everything feels up in the air.

You probably know Ron Insana from CNBC, where he's been analyzing markets and making sense of economic fluctuations for 40 years. He's also an AI entrepreneur, which gives him a unique angle on how emerging technology aligns with big-picture business realities.
