CIO

Agentic AI: Balancing autonomy and accountability

Generative artificial intelligence (genAI) has been the dominant force in AI innovation, helping organizations work faster and smarter, with heightened creativity. The next wave of agentic AI raises the stakes, promising autonomous, multistep workflows and independent decision-making. Yet organizations must strike the right balance between automation and accountability to capitalize on new work patterns at enterprise scale.

A natural evolution of AI, agentic AI has gained traction this past year as a means of advancing operational efficiencies, trimming costs, and removing friction from customer and employee experiences. But as genAI use cases proliferate, enterprises face challenges in integrating with existing systems and tools and in introducing autonomous action. In fact, despite upwards of $30 billion poured into genAI investments, 95% of organizations say they have yet to see any measurable profit-and-loss value, according to a recent MIT report. The disconnect has led to rising interest in combining AI technologies to transform complex workflows and achieve desired business outcomes.

“Fully autonomous and LLM [large language model]-only-based AI agents fall short, because for the enterprise, you need more than just autonomy,” said Marinela Profi, global AI and genAI market strategy lead at SAS, in a recent Foundry webinar. “To achieve that decisioning component, we are starting to combine LLMs with tools, memory, and probabilistic components like traditional AI.”

Three pillars of accountability

Organizations are embracing AI systems’ ability to provide feedback and recommendations, but they are not yet comfortable handing the systems full autonomy to make decisions and initiate actions without some level of human oversight. “Autonomy is great, but too much autonomy — especially in enterprise settings without oversight — can lead to unintended decisions, compliance issues, value violations, and brand damage,” Profi said.
“Autonomy must be balanced with accountability, which means enterprises must know why an agent made a decision.”

Before identifying or deploying agentic AI use cases, organizations need to establish mechanisms that align with three tenets of accountability:

- Explanation of why a particular decision is made
- Proper governance and traceability
- Human intervention for audits or overrides as needed

Human-in-the-loop is also a critical factor in designing agentic AI applications. When application designers are automating a handful of tasks, system logs are often enough to explain any variances or corrections. But as complexity rises, human interaction becomes an essential part of workflow design, explains Eduardo Kassner, chief data and AI officer for the high-tech sector at Microsoft. “You’re doing it for quality, but what you really are doing is increasing usability because people trust the system more,” Kassner says.

Another factor to consider is the build-versus-buy equation. Vendors are incorporating agents into their software, and many are offering prebuilt AI agents to simplify and streamline deployment. Although these off-the-shelf tools can jump-start implementation, some custom development is necessary, given the specificity of tasks; the complexity of data management; and security, compliance, and sovereignty requirements, Kassner says.

As organizations move forward with agentic AI, the following criteria should be considered to ensure success:

- Reliability and accuracy
- Privacy
- Security, compliance, and sovereignty requirements
- Performance benchmarks
- Cost management

Data access, governance, and management will be an ongoing challenge — and, if done right, a marker of success. “The key takeaway is: Don’t just automate or generate,” Profi said. “Orchestrate decisions with intelligence and trust. That is the real power and promise of agentic AI.”
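The three accountability tenets described here (an explanation for each decision, governance and traceability, and human intervention for overrides) can be illustrated as a simple decision gate. This is a hedged sketch, not SAS's or any vendor's implementation; the `Decision` and `AuditedGate` names and the confidence threshold are assumptions invented for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    action: str
    rationale: str     # explanation: why the agent chose this action
    confidence: float  # 0.0-1.0, supplied by the agent

@dataclass
class AuditedGate:
    """Approves agent decisions, escalating low-confidence ones to a human."""
    threshold: float = 0.9
    audit_log: list = field(default_factory=list)

    def review(self, decision: Decision, human_approver=None) -> bool:
        needs_human = decision.confidence < self.threshold
        approved = True
        if needs_human:
            # Human intervention: block unless an approver signs off.
            approved = bool(human_approver and human_approver(decision))
        # Traceability: every decision is recorded with its rationale.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": decision.action,
            "rationale": decision.rationale,
            "confidence": decision.confidence,
            "escalated": needs_human,
            "approved": approved,
        })
        return approved

gate = AuditedGate(threshold=0.9)
auto = gate.review(Decision("refund_order", "policy match: damaged item", 0.97))
held = gate.review(Decision("close_account", "inactivity heuristic", 0.55))
```

Here the high-confidence refund proceeds automatically while the low-confidence account closure is blocked for human review, and both land in the audit log either way.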


Policy drive puts Indian enterprise software on a collision course with global giants

“While the transition away from global players may not happen overnight, this initiative lays the groundwork for Indian product companies to gradually gain greater mindshare and market share, both within the country and globally,” said Sharath Srinivasamurthy, research vice president at IDC.

Subramanian pointed out that initiatives like the Digital MSME Scheme, which subsidizes IT adoption for smaller enterprises, and recognition for GST-compliant (India’s nationwide indirect tax system) solutions are giving local ERP vendors a stronger foothold against global rivals.

Weighing value and reliability

Zoho believes a shift is already underway. “We do anticipate a gradual but meaningful shift among Indian enterprises and government agencies as they re-evaluate their reliance on global providers,” Singh said, adding that the three key drivers are value, faster time-to-value, and local support with long-term affordability.

Subramanian said that local ERP solutions could offer competitive pricing compared to global providers, making them attractive to Indian enterprises. “Indian-built ERPs are more likely to be tailored to local regulations, labor laws, and industry-specific compliance requirements.”


AI in healthcare: Why CRM alone isn’t enough

3. Collaboration

IT can’t do this alone. Some of our best AI outcomes came when compliance officers, frontline users and clinical leads co-designed workflows and challenged assumptions. In one case, a nurse navigator pointed out that the model’s recommendations conflicted with how providers structured patient follow-ups. By bringing her into the design process, we adjusted the algorithm and the workflow together, resulting in faster adoption and more trust in the system. Cross-functional teams are not optional — they’re mission-critical.

4. Continuous learning

Once deployed, AI must evolve. Monitor for model drift, feedback loops and unintended bias. Think of it as a digital organism, not a static tool. To support transparency and auditability, tools like Google’s What-If Tool allow teams to test how changes in input data affect predictions, helping to uncover potential bias before deployment. In practice, this means setting up monitoring dashboards, retraining cycles and governance reviews. On one project, we detected drift within six months as prescribing patterns shifted post-COVID. By retraining quickly, we avoided inaccurate prioritization that could have derailed trust in the system.

If you’re in a CIO or digital leadership role and planning to scale AI across patient engagement or healthcare operations, I’d offer the following guidance based on lessons I’ve learned (sometimes the hard way):
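The drift detection described under “Continuous learning” can be illustrated with one common statistical check, the Population Stability Index (PSI). This is a minimal sketch on synthetic data, not the monitoring stack the author used; the 0.1 and 0.25 thresholds are conventional rules of thumb, and the Gaussian samples stand in for real model scores.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and live data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            x = min(max(x, lo), hi)  # clip into the baseline range
            counts[min(int((x - lo) / (hi - lo) * bins), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # floor avoids log(0)
    return sum((a - e) * math.log(a / e)
               for e, a in zip(fractions(expected), fractions(actual)))

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]    # scores at deployment
stable   = [random.gauss(0, 1) for _ in range(5000)]    # same distribution
shifted  = [random.gauss(0.8, 1) for _ in range(5000)]  # behaviour has changed
```

In a monitoring dashboard, `psi(baseline, live_scores)` crossing the 0.25 line would be the trigger for a retraining cycle like the one described above.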


From naval officer to tech executive: Lessons in reinventing leadership

Engines dead. Beaufort force-five winds. A patrol ship drifting in open water. Twenty men staring at me — waiting for direction, waiting for calm, waiting for a decision. In those thirty minutes, while we fought to restart the engines, I felt the weight of accountability in its purest form. Fear was on every face, including mine. My first job wasn’t to solve the problem — it was to steady myself so that I could steady them. Only then could we focus on the solution instead of the danger surrounding us. Years later, as I led technology teams, I realized how familiar that feeling was. A system crash, a cyber incident, a project spiraling off-course — the team looks to the CIO the same way my crew looked at me on that ship. The data is incomplete, the risks are real and the clock is merciless. And just like at sea, calm is contagious. If the leader panics, the team collapses. My career has taken me from commanding a naval vessel to writing code as a junior engineer, to founding a company and eventually to executive roles leading global technology initiatives. Every stage forced me to reinvent myself, often painfully, always urgently. Reinvention isn’t optional — it’s the core skill that has kept me relevant across two completely different worlds.


Demand for junior developers softens as AI takes over

“Four years ago, I was that junior developer writing boilerplate CRUD code, proud of every clean PR I merged,” he says. “Today? I watch new grads struggle to land their first job, not because they’re unskilled, but because companies ask, ‘Why hire a junior for $90K when GitHub Copilot costs $10?’” Still, Agrawal sees a future role for developers who can work with AI. The best software engineers won’t be the fastest coders, but instead, they will be those who know when to distrust AI, he says. “At my company, my role has shifted from just coding to validating AI output, checking for edge cases, security risks, and logic gaps that AI can’t catch,” he says. “I’m not trying to out-code AI. I’m making myself essential by leading it with judgment.”


7 hard-earned lessons of bad IT manager hires

Here are seven lessons other tech leaders have learned the hard way when hiring for IT management roles.

Don’t wait until it’s too late

By the time Ani Mishra, engineering manager at DoorDash, realized he needed to hire an IT manager, it was already too late. He was managing too many people, overwhelmed by meetings, and starting to lean on senior engineers for help. “I thought, okay, I need to bring in a manager,” he says. “But the team was growing super-fast. It kept growing until I had 20 direct reports and I still did not have a manager.” Managing that many people is crushing. “It’s hard to keep track of what they’re all working on or how to set them up for success,” Mishra says. “I saw signs of dysfunction. People felt directionless and were getting blocked. Some brilliant engineers were taking on manager tasks because I was in back-to-back meetings and firefighting all the time. Productivity lowered because my top performers were doing things not natural to them.”


What makes Korean Air’s IT fly

Small steps, big gains

The three-year project started small and gradually scaled up. In practice, this meant building a minimum viable product first, using incremental wins to build confidence. Korean Air’s first step was a staff meal management app. By digitizing what had been a paper voucher system and rolling it out to more than 20,000 employees, the airline gained real-world experience in operating cloud systems, while simultaneously developing skills in SaaS and cloud technologies. Korean Air has now gone beyond cloud adoption to pursue application modernization, customer data integration, AI projects, and predictive platform development. And this year, it launched technologies such as an AI Contact Center and a passenger data analytics platform. “At the start, we weren’t directly preparing for the AI era,” Choi says. “But because we already built a solid cloud foundation, we were able to continuously expand into a variety of technology projects afterward. For Korean Air, the cloud migration wasn’t just an infrastructure replacement, it became a critical inflection point that created new growth drivers, and even reshaped our organizational culture.”

From relying on to growing through outsourcing

Today under Choi, Korean Air’s IT strategy department oversees the airline’s entire digital transformation, covering infrastructure technologies such as cloud, networks, and devices, as well as applications, data, AI, and ML. Around 150 employees are organized into nine teams, executing dozens of projects of varying scale each year. Over time, both the size and goals of this organization have evolved.


Before the next AI bet, confront the data reality

Artificial intelligence (AI) has evolved from a buzzword to a boardroom priority. But for most enterprises, scaling AI is proving harder than expected. Not because of a lack of ideas or ambition, but because of one silent saboteur: unready data infrastructure.

In conversations with customers across industries, I see a pattern repeat itself. AI pilots work beautifully in isolation. But when it comes to deploying them at enterprise scale, the roadblocks emerge: poor data quality, fragmented storage, inconsistent governance, and brittle pipelines that buckle under the pressure of AI workloads.

Our recent Omdia research, The State of Modern Data Platforms, confirms this. While 82% of organisations have either implemented or are implementing open-standards data platforms, a majority still struggle with accessing and integrating data across cloud, edge, and legacy systems. More alarmingly, only 30% have AI-augmented workflows in production, despite AI being a strategic priority.

Scaling AI needs more than GPUs

You can no longer treat data as a back-end IT concern. It’s the strategic foundation that determines whether your AI efforts scale or stall. AI at scale demands:

- Unified access to structured, semi-structured, and unstructured data
- Trusted, governed data pipelines that eliminate bias and risk
- High-performance architecture that supports real-time inference
- MLOps and DataOps frameworks that enable experimentation and agility

And most importantly, it demands a mindset shift: from building one-off AI use cases to building AI-ready data infrastructure that can support continuous, organisation-wide intelligence.

What’s breaking today’s AI ambitions?

In our latest IDC spotlight report on AI-ready data, we found that 20% of AI projects in Asia-Pacific fail due to data-related challenges. That includes data trust issues, poor lineage, inconsistent access controls, and outdated integration methods. Customers we speak to surface three recurring problems:

- Legacy data estates that weren’t built for AI workloads or vector formats
- Siloed teams and toolchains that lead to redundancy and rework
- Governance gaps that increase regulatory risk and kill AI velocity

The result? Slower time to insight. Higher costs. And a growing disconnect between AI ambition and AI execution.

The GenAI shift: More data, more problems?

Generative AI (GenAI) brings a new layer of complexity. Unlike traditional AI, GenAI models demand vast, high-quality, contextual data, and compute systems that can support RAG (retrieval-augmented generation), embedding stores, and streaming pipelines. Most enterprises aren’t ready. Why? Because they’re still wrestling with foundational issues: where their data lives, how it moves, who governs it, and how it connects to the AI layer.

This is where the AI-ready data value chain becomes not just important, but foundational. As outlined in our latest IDC report, the value chain spans every stage of the data lifecycle, from strategic acquisition and cleansing, to contextual enrichment, to model training, deployment, and continuous feedback loops. It’s not just about moving data; it’s about activating it with trust, structure, and governance built in.

This value chain also encompasses supporting activities like data engineering, data control plane governance, metadata management, and domain-specific annotation, which ensure AI models are trained on relevant, high-quality, and unbiased datasets. It brings together diverse roles across the enterprise: CISOs ensuring data security, CDOs aligning data with business priorities, and data scientists tuning AI models for contextual outcomes. Without this backbone, GenAI becomes an expensive experiment. With it, enterprises can scale AI with control, confidence, and measurable value.

What leading enterprises are doing differently

The most successful organisations we work with are doing five things right:

- Consolidating platforms to reduce fragmentation across cloud, edge, and on-prem
- Embedding governance by design: encryption, lineage, masking, consent, privacy
- Building for flexibility: open-source, containerised, multi-cloud deployments
- Operationalising AI pipelines with robust MLOps frameworks
- Partnering for scale rather than building everything in-house

As our Omdia research found, only 12% of companies want to build their own platform; 52% prefer working with trusted partners that bring agility, compliance, and innovation together. Platforms like our own Vayu Data Platform embody this shift. Designed with AI workloads in mind, it brings together secure-by-design architecture, cloud-to-edge flexibility, and lifecycle automation for data ingestion, governance, and AI operationalisation. It’s this kind of architectural readiness that’s enabling our customers to move from isolated pilots to scaled, production-grade AI.

If your data is siloed, your pipelines are manual, and your governance is patchy, your infrastructure isn’t ready for AI at scale. The good news is that you don’t need to start from scratch. You just need to start with intent:

- Reimagine your data architecture.
- Invest in AI-ready platforms that unify data and accelerate intelligence.
- Promote a culture where data isn’t just collected; it’s activated.
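The article's GenAI passage mentions RAG and embedding stores. As a hedged illustration of the retrieval half of that pattern, the sketch below uses toy bag-of-words vectors in place of a real embedding model; the sample documents and the `embed`/`retrieve` names are invented for the example.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# The "embedding store": documents indexed alongside their vectors.
documents = [
    "data governance policy for patient records",
    "quarterly revenue report for the airline division",
    "encryption and masking standards for cloud storage",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    """Retrieval step of RAG: return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved context would then be prepended to the LLM prompt (the "G" in RAG).
context = retrieve("how do we govern patient data?")[0]
prompt = f"Answer using only this context: {context}"
```

Swapping the toy vectors for real embeddings and the list for a vector database is what the "embedding stores" in the article refer to; the retrieval logic itself stays the same shape.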


Proposed H-1B changes could redefine global talent acquisition

David Foote, chief analyst and research officer at Foote Partners, a firm that focuses on what it describes as “the people (versus vendor) side of managing technology,” said that what is being suggested in the notice is “an attempt to be fairer, but salary is not a proxy for skill level. It never has been. Right now, a senior cybersecurity analyst in San Jose, that job is averaging almost $180,000 a year. That job in Grand Rapids, Michigan is about $108,000 a year.”

Biggest challenge? How to make it work

He said that within the current system, “the largest numbers of H-1B visas are in California, Texas, and Virginia. Why? Well, because in California there are a lot of tech companies, Texas, a lot of tech companies, and in Virginia, because you’ve got this whole area around Washington DC, which is just full of tech hires. It’s very easy to see that it’ll continue to benefit those areas.”

Aside from the proposed changes “definitely being a disadvantage for startups and nonprofits and academia because there’s a lot of hiring in those areas,” Foote said, the proposal from the DHS “also adds complexity, and I think litigation risk as well.”
