
What CIOs must do before the next cyberattack

“No matter how good the technology is, we’ve heard of misconfigured services on web cloud services. The challenge companies face is often keeping things secure — even though they may be secure once — keeping them secure, ensuring that the processes are maintained, that the people are well-trained and that the technology is up to date.”

Q: How should enterprise leaders position their organizations to counter the next wave of cyber threats?

Lohrmann: “It starts with a really good understanding of your current environment — what we call the ‘as is’ environment — your current infrastructure. Then, knowing where things are going, having a good understanding of advances in artificial intelligence and advances in autonomous technologies.

“For governments, and certainly in finance, what are the attacks that are being done today? Connect those dots and look at the attacks that are likely to happen in the future.”


Winning over IT value skeptics: CIO job No. 1

The IT supplier equation

Step Two of winning over IT value skeptics involves IT agency — that is, the capacity of an individual to act independently and make their own free choices. An individual is thought to have agency when they feel they are the one in control, rather than being controlled by external forces or circumstances.

In an ideal world, IT agency features an economic actor — a CEO, COO, or CMO — setting forth a measurable objective or outcome. Optimally, the IT organization then comes up with a set of strategies, programs, and budgets detailing how best to achieve that objective or outcome using the available technology portfolio.

I am not certain that anyone on the demand side of the IT value exercise feels they are in control. I am specifically concerned that we may have ceded too much authority to IT suppliers. In July 2025, there were nine firms with market capitalizations of $1 trillion or more. Eight of them are tech companies. The conclusion: tech suppliers, particularly AI tech suppliers, are booming. The question IT value skeptics should be asking is: how much value is being captured by the demand side of the technology economy?


Keeping humans in the AI loop

Two agents may collaborate by sharing information, but those communications can be monitored and controlled. This is similar to the way some companies, financial firms, for example, have controls in place to prevent collusion and corruption.

Human on the loop

Once all of an AI agent’s actions and communications are logged and monitored, a human can go from being in the loop to on the loop. “If you try to put a human in the loop on a 50-step process, the human isn’t going to look at everything,” says McGowan. “So what am I evaluating across that lifecycle of 50 tasks to make sure I’m comfortable with the outcomes?”

A company might want to know that the steps were completed, done accurately, and so on. That means logging what the agent does, tracking the sequential steps it performed, and comparing its behavior to what was expected of it. For example, if a human user asks the AI to send an email and the AI sends five, that would be suspicious behavior, he says.

Accurate logging is a critical part of the oversight process. “I want a log of what the agent does, and I want the log to be immutable, so the agent won’t modify it,” he adds. Then, to evaluate those logs, a company could use a quality-assurance AI agent, or traditional analytics. “It’s not possible for humans to check everything,” says UT’s Thuraisingham. “So we need these checkers to be automated. That’s the only solution we have.”
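To make that logging-and-checking discipline concrete, here is a minimal sketch (my illustration, not code from the article; all names are hypothetical). It approximates the immutability McGowan asks for by hash-chaining entries, so tampering can be detected, and it includes an automated checker for the kind of anomaly he describes, such as one email requested but five sent:

```python
import hashlib
import json
import time

class AgentAuditLog:
    """Append-only log of agent actions; each entry is hash-chained to the
    previous one, so any after-the-fact modification breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, agent_id: str, action: str, detail: dict) -> None:
        entry = {"ts": time.time(), "agent": agent_id,
                 "action": action, "detail": detail, "prev": self._last_hash}
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the hash chain; False means an entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != e["hash"]:
                return False
        return True

def flag_anomalies(log: AgentAuditLog, expected: dict) -> dict:
    """Automated 'on the loop' check: flag actions performed more often
    than the user requested (e.g., 1 email asked for, 5 sent)."""
    counts = {}
    for e in log.entries:
        counts[e["action"]] = counts.get(e["action"], 0) + 1
    return {a: n for a, n in counts.items() if n > expected.get(a, 0)}

log = AgentAuditLog()
for i in range(5):
    log.record("agent-1", "send_email", {"to": f"user{i}@example.com"})
print(log.verify())                              # True: chain intact
print(flag_anomalies(log, {"send_email": 1}))    # {'send_email': 5}
```

In production the log would live in a write-once store outside the agent’s reach; the hash chain only makes tampering evident, it does not prevent it. The flagged output is exactly the kind of input a QA agent or analytics job would consume.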


Predictive vs. generative AI: Which one is right for your business?

This creates real limitations in enterprise settings. For example, if an HR professional asks a generative AI to write a job description, the result may sound polished, but likely wouldn’t reflect the company’s unique requirements, team dynamics or mission. Most existing HR tools already have templated job descriptions built in, so using a chatbot adds little value and could even introduce risk if the output reflects biased or irrelevant information pulled from unknown sources.

Even worse, when HR teams start using generative AI for more sensitive areas, such as performance evaluations, promotions or layoffs, it becomes dangerous. These systems are not built to make such judgments and can’t reliably or fairly support high-stakes decisions. How would you feel if your career path depended on the results of a chatbot query, not knowing if any other research or due diligence was done on your behalf?

The case for predictive AI in HR

Predictive AI, on the other hand, performs a fundamentally different task. Rather than generating human-like language, it identifies patterns in historical data to make informed forecasts about future behavior or outcomes. For example, predictive AI can analyze employee data to assess which candidates are most likely to succeed in a specific role, improving hiring accuracy and retention.
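As a concrete illustration of that forecasting pattern, here is a minimal sketch (mine, not a tool from the article; the features, training data, and candidate values are all hypothetical). A model is fit on historical hiring outcomes and scores a new candidate with a probability rather than generated prose:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [years_experience, skills_match_score,
# structured_interview_score]; label 1 = the hire succeeded in the role.
X = np.array([[2, 0.8, 3.5], [7, 0.6, 4.0], [1, 0.9, 2.5], [5, 0.7, 4.5],
              [3, 0.4, 3.0], [8, 0.9, 4.8], [0, 0.5, 2.0], [4, 0.8, 3.8]])
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])

# Fit on past outcomes; a real deployment needs far more data plus
# regular bias and fairness audits before touching hiring decisions.
model = LogisticRegression().fit(X, y)

# Score a new candidate: the output is a probability, not generated text.
candidate = np.array([[3, 0.85, 4.2]])
print(f"P(success in role) = {model.predict_proba(candidate)[0, 1]:.2f}")
```

Note the contrast with the chatbot scenario above: the output is a number grounded in the company’s own history, and the logic behind it can be inspected and audited.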


The data-driven digital journey defining Boehringer Ingelheim

This technology makes it easier to obtain maximum value from the data and solves the problems of previous generations, adds Sanz. “The strategy includes an actionable data governance model, which helps ensure data quality and consistency for different use cases globally,” he says.

Rosell adds that Boehringer has always understood that data management is essential for knowledge, optimization of resources, and decision-making. With cloud computing and the adoption of Snowflake, the organization has identified a tremendous opportunity to advance its ambition to eliminate redundancies, dispersion, and silos of data, and to promote greater synergy and more transparent governance.

In addition, Snowflake has a solid track record working with pharma companies, which guarantees its ability to adapt to Boehringer’s specific needs and allows teams to focus on generating value without complex technical distractions. A key feature, Català adds, is its ability to decouple from a specific cloud provider, granting the company strategic freedom and avoiding future dependencies. “Its SaaS model resulted in a significant reduction in costs associated with maintenance and administration, freeing up resources to invest in innovation,” he says.


Rethinking environment management: How flawed architecture begins with property files

In large organizations, environments go way beyond just dev, test, QA and prod; they typically exist as parallel streams of work, as staggered release trains and as complex branching structures. In my experience, maintaining legacy systems while also operating newer transactional platforms requires multi-year, multi-track programs with different business lines. In these architectures, environmental configuration settings are typically stored in property files in source control, organized according to the related branching strategies.

Property files were introduced as a simple, convenient configuration mechanism. They have since become brittle, unscalable artifacts that put teams into what I would call “configuration hell.” The tight coupling of environmental configuration settings with deployment decisions and branching strategies can become a messy tangle where each small configuration change introduces a chain of risk and liability into lower environments and in-flight projects.

The need to redefine environment configuration

In today’s cloud-native environments, the expectation of zero downtime directly conflicts with legacy practices centered on property files. This rigid coupling introduces operational overhead and stretches recovery time objectives (RTO), as services must pause for updates. Worse, manual misconfigurations can undermine data integrity and inflate recovery point objectives (RPO), risking incomplete rollbacks or state corruption.
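As a minimal sketch of the alternative (my illustration; the keys and defaults are hypothetical), the settings below are resolved at runtime from the injected environment, typically fed by a central configuration service, so a single immutable artifact can be promoted from dev to QA to prod with no branch-specific property files and no rebuilds:

```python
import os

# Safe fallbacks only; real values are injected per environment at runtime
# (e.g., by the orchestrator from a central configuration service).
DEFAULTS = {
    "DB_POOL_SIZE": "10",
    "FEATURE_NEW_CHECKOUT": "false",
}

def get_config(key: str) -> str:
    """Resolve a setting at runtime: injected environment first, then the
    checked-in default. The artifact itself never changes per environment,
    which removes the config-to-branch coupling described above."""
    return os.environ.get(key, DEFAULTS[key])

pool_size = int(get_config("DB_POOL_SIZE"))
new_checkout_on = get_config("FEATURE_NEW_CHECKOUT") == "true"
print(pool_size, new_checkout_on)
```

The same move scales up to dedicated config services and feature-flag platforms; the essential change is that configuration travels with the environment rather than with the branch.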


How AI-driven middleware is rewiring cloud integration for the enterprise

The shift starts with how you think about middleware’s role. In the traditional model, it routes, transforms and delivers messages based on predefined rules. In the AI-driven model, it becomes an active decision-maker. Instead of just following static paths, it’s constantly evaluating the state of the system, predicting potential bottlenecks and adjusting flows in real time. Here’s how I’ve seen AI fundamentally change the architecture:

- From monitoring to foresight. Feeding real-time telemetry into trained ML models lets middleware forecast failures before they happen.
- From static routing to adaptive orchestration. Decision engines learn optimal paths based on historical performance and current load.
- From manual exception handling to automated self-healing. Middleware can retry, reroute or quarantine issues automatically.

One of the most eye-opening deployments I worked on was for retail inventory synchronization. Traditionally, inventory updates ran at fixed intervals and followed the same processing path every time. By adding a predictive model to the middleware layer, we could detect when certain product categories were at risk of overselling during flash sales and dynamically re-prioritize updates for those SKUs (a simplified sketch of this pattern follows below). That single change reduced oversell incidents by nearly a third during peak periods.

What makes this approach powerful is that it doesn’t replace your existing integration platforms; it enhances them. Whether you’re running Kafka, MuleSoft, Talend or TIBCO, the AI layer sits alongside your existing infrastructure, learning from it and acting on its behalf. Over time, it stops being a “bolt-on” and becomes part of the middleware’s DNA.
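Here is that re-prioritization pattern as a minimal, self-contained sketch (mine, not the deployment’s actual code; the risk function stands in for a trained model and every name is hypothetical):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Update:
    priority: float              # lower value = processed sooner
    sku: str = field(compare=False)
    payload: dict = field(compare=False)

def oversell_risk(payload: dict) -> float:
    """Stand-in for a trained model: score risk from live telemetry such
    as recent sell-through and remaining stock. A real system would call
    an ML model trained on historical flash-sale behavior."""
    velocity = payload["units_sold_last_5m"]
    stock = max(payload["stock_on_hand"], 1)
    return min(velocity / stock, 1.0)

def enqueue(queue: list, sku: str, payload: dict) -> None:
    # High-risk SKUs jump the queue: priority = 1 - risk.
    heapq.heappush(queue, Update(1.0 - oversell_risk(payload), sku, payload))

queue: list[Update] = []
enqueue(queue, "SKU-123", {"units_sold_last_5m": 40, "stock_on_hand": 50})
enqueue(queue, "SKU-456", {"units_sold_last_5m": 2, "stock_on_hand": 500})
while queue:
    u = heapq.heappop(queue)
    print(f"processing {u.sku} (priority {u.priority:.2f})")
```

The fast-selling, low-stock SKU is processed first, even though both updates arrived in the same batch, which is the behavior that cut oversell incidents in the deployment described above.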


Why regulations can outlive their usefulness

In the fast-paced world of cybersecurity, regulations often feel like a paradox. On one hand, they’re critical guardrails for a secure digital environment; on the other, they occasionally act like old locks on new doors: useful in theory but increasingly obsolete in practice. The trajectory of regulatory relevance raises a fascinating question: when will certain regulations outlive their usefulness in a rapidly evolving field such as cybersecurity?

The story of the “castle’s firewall”

Imagine a medieval castle tasked with defending itself against swarms of invading armies. The queen has installed an unbreachable stone wall to fortify her defenses, a seemingly perfect security measure for its time. For centuries, the wall protects the castle, until the invaders begin deploying cannons. The once-unbreachable wall now crumbles under its own inflexibility, unable to adapt to new methods of attack. Instead of scrapping the inadequate defenses and innovating, the queen doubles down: thicker walls, deeper moats. But the result remains the same. Ultimately, the castle falls, not because the principle of defense was flawed, but because its reliance on outdated tools and methods led to stagnation.

Cybersecurity regulations share striking similarities with that castle wall. Designed in the wake of major breaches or as a knee-jerk response to new trends, regulations are often built to withstand yesterday’s attacks rather than tomorrow’s threats. They provide a vital baseline of protection, but only if they evolve with the threats they aim to mitigate. Otherwise, they risk becoming liabilities, holding organizations back from agile responses to new challenges.

Surprise in the numbers: The costs of stagnation

To truly understand how regulations can overstay their welcome, consider the exponential rise of cybercrime. While organizations scramble to implement new technologies such as Zero Trust Architecture and AI-driven threat detection, it’s surprising how often outdated regulations thwart these adaptations. Take, for instance, compliance mandates in certain industries like finance or healthcare that prescribe on-premises data storage as a way to satisfy data residency and privacy requirements. Such regulations, designed in an era when cloud solutions were seen as unreliable, fail to account for modern advances in encryption and cloud security. Companies adhering to these mandates face ballooning costs for maintaining increasingly obsolete infrastructure, all while malicious actors exploit vulnerabilities in those legacy systems. The irony? These regulations once existed to ensure tighter data protection, yet now they serve as barriers to adopting more secure solutions.

When does a regulation expire?

Understanding when regulations have outlived their usefulness requires reflecting on their core purpose: Are they effectively protecting people, organizations, and assets against existing threats? Or are they safeguarding against a bygone era’s problems while inadvertently creating new vulnerabilities? The key characteristics that signal regulatory expiration are:

- Stifled innovation: regulations that block the adoption of cutting-edge tools or techniques.
- Inflexibility in the face of new threats: defenders forced into a position that keeps them a step behind malicious actors.
- Misalignment with industry standards: a failure to reflect technological innovation creates compliance headaches while doing little to minimize risk.
Evolving regulations, not discarding them

The answer to whether regulations will one day outlive their usefulness is not about scrapping them entirely, especially in cybersecurity, where guardrails are indispensable. Instead, it’s about ensuring that regulations mirror the dynamic nature of threats, technologies, and solutions in the market. Governments, regulators, and industry leaders must collaborate to create frameworks that are nimble and proactive, rather than reactive, fossilized remnants of past environments. The “castle’s firewall” of our modern age doesn’t need thicker walls; it needs adaptive, transparent defenses that recognize the cannonballs of cybercrime barreling toward them. If cybersecurity regulations don’t keep pace with the tempo of change, their fate is all but sealed: irrelevance. In the end, the usefulness of regulations depends on their continuous evolution.

A note to the CISO

Regulators and auditors have a difficult job: they must define regulations based on industry-wide requirements (a lengthy process), which are necessarily generalized, and must then measure individual organizations against them. Meanwhile, it is not uncommon for security teams to treat audits as checkbox exercises and a disruption to operations. Yet an audit is also an opportunity for closer collaboration and education. Engage in the regulation review process to share practical, best-practice suggestions. And remember that the power of a compensating control to meet a requirement is not always understood by auditors and may require an explanation of how it is applied.

To learn more about Zscaler, visit here.


An action plan to keep organizations safe with artificial intelligence
Government directives aren’t enough to ensure security with AI

The White House recently published an “AI Action Plan,” full of recommended policy actions aimed at making effective use of artificial intelligence in industry and government. It comes on the heels of the EU AI Act, a comprehensive legal framework that regulates various facets of AI within the European Union, with enforcement of most provisions slated to begin in August 2026.

From a security perspective, organizations of all stripes would do well to remember that security has always involved a shared responsibility model. Cloud providers make that abundantly clear in their agreements, but it applies across the board, for on-premises systems as well. Each vendor and end-user organization must take responsibility for some aspects of security. It’s no different with AI.

So, while security and IT professionals can and should pay attention to government laws and directives, they should also be aware that whatever any government produces will, by its nature, lag the reality on the ground, often by years. Such directives are based on yesterday’s threats and technology. It’s difficult to think of a technology that has evolved faster than AI is moving right now. New developments arise seemingly by the day, and with them, new security threats. Following are some words of advice to help you keep up.

Pay attention, question everything, put up guardrails

First, pay attention: to emerging laws like the EU AI Act and whatever may come from the U.S. AI Action Plan, but also to your users and to AI technology itself. Dig deep into how your employees are actually employing AI, the challenges they’re having, and the opportunities it offers. Consider what dangers it may present if things go awry, or what bad actors, whether internal or external, may try to inject. Keep up to speed with how AI is evolving. Yes, that may be a full-time job in itself, but if you stay tuned in and connected, you can pick up on the big developments.

Next, question everything. Insist on explainability in all AI applications. Only by understanding how AI works can you begin to root out bias, privacy violations, and other misuses of data. You also need to ensure your AI is resistant to attacks, including data poisoning, by insisting on quality data standards, and to protect against unacceptable risk, such as by insisting on human judgment when warranted.

You’ll also need guardrails around your AI applications, especially as agentic AI begins to take hold. If AI systems are going to be trusted to make decisions on their own, you must treat them like any other user, subject to appropriate access controls. In short, zero trust applies to AI applications just as it does to other users (a minimal sketch of the idea follows below).

Collaborate to keep up

If all this sounds like a lot of work, know you’re not alone. Collaborate with your peers. Join industry user groups to stay informed and learn best practices. Collaborate, too, with industry groups like the InfraGard National Members Alliance (INMA), the private-sector component of the FBI’s InfraGard program. INMA is focused on educational programs, training events, and information-sharing initiatives.

While there’s no question AI presents numerous security challenges, it’s not as if we haven’t seen this before. Many will recall the angst over the EU General Data Protection Regulation and concern over how difficult it would be to comply with. GDPR did force change, but organizations weathered the storm, and we’ve since seen U.S. states adopt many of the same tenets. Expect the same with AI, but don’t wait for government to force your hand.
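On that zero-trust point, here is a minimal sketch (my illustration, not guidance from the article or Palo Alto Networks; the agent identities, tools, and policy are hypothetical) of gating every agent tool call behind an explicit, deny-by-default allow-list, with denials logged for review:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrail")

# Least-privilege policy: which tools each agent identity may invoke.
POLICY = {
    "support-agent": {"search_kb", "draft_reply"},
    "billing-agent": {"read_invoice"},
}

def invoke_tool(agent_id: str, tool: str, run, *args, **kwargs):
    """Gate a tool call behind the agent's access policy (deny by default),
    exactly as you would for a human user's credentials."""
    allowed = POLICY.get(agent_id, set())
    if tool not in allowed:
        log.warning("DENIED: %s attempted %s", agent_id, tool)
        raise PermissionError(f"{agent_id} may not call {tool}")
    log.info("ALLOWED: %s -> %s", agent_id, tool)
    return run(*args, **kwargs)

# Usage: the support agent may draft replies; the billing agent may not,
# no matter what its underlying model decides to attempt.
invoke_tool("support-agent", "draft_reply", lambda: "Draft created")
# invoke_tool("billing-agent", "draft_reply", lambda: "")  # PermissionError
```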
Read more of the latest thinking about the biggest IT topics of the day at the Palo Alto Networks Perspectives page.


Chef provides a powerful, automated way to ensure compliance in hybrid cloud environments

With some 90% of organizations now relying on the cloud for application development and deployment, questions arise about how best to address challenges including compliance, audit preparation, and security.[i] The answer is to choose platforms that can take advantage of tools native to both the application development and cloud platforms to deliver efficiency and speed.

The app dev and deployment challenges organizations face include:

- Reducing organizational cyber risk from using infrastructure that spans multiple clouds and on-premises environments
- A lack of consistent policy enforcement across hybrid environments
- Difficulty enforcing governance policies, accurately tracking compliance, and effectively dealing with audits

In the face of these challenges, it can be difficult to maintain the speed and agility that DevSecOps is intended to deliver. But automated tools can address these challenges and ensure compliance, while delivering important benefits.

Yes, Chef, please do automate

Chef, the systems management and cloud infrastructure automation platform, has long helped organizations automate compliance tasks associated with on-premises systems and is now poised to take on the cloud. Chef is a configuration management tool that helps companies automate the configuration of hundreds or thousands of computers. It enables users to run periodic checks to ensure all systems remain in the desired, compliant state based on whatever standards they need to meet, including CIS Benchmarks, DISA STIGs, PCI DSS, or custom rules. Should any deviation occur, the automated routine can alert an administrator to the issue.

Users can configure routines in several ways, including with Chef cookbooks that detail the steps to perform, or by sending shell commands directly to Linux systems or PowerShell commands to Windows systems. Additionally, Chef InSpec enables testing and auditing of applications and infrastructure to ensure they align with the desired state.

Continuous compliance checks, AWS-ready

Progress Software, which owns Chef, has some 750 different cloud configurations and resources for Amazon Web Services (AWS) alone. “You can validate that any AWS cloud resource should have a specific security group and other settings,” says Mike Butler, principal sales engineer for Chef at Progress. “InSpec allows you to generate compliance profiles for cloud-native objects.”

With the profiles in hand, users can run compliance checks as often as they like to ensure there hasn’t been any drift from the compliant state. Most customers run them once per day, he says. In addition to providing continuous compliance monitoring, the Chef-driven automation routine also makes it easy to supply any data auditors may need to prove compliance. “If an auditor wants evidence from a month ago, you can supply that,” Butler says.

Using Chef also makes for a highly scalable solution. Chef’s main purpose is to help automate tasks in large-scale environments. Most of the heavy lifting goes into building routines and pipelines. After that, “the larger you scale, the more value you get out of Chef,” Butler says.

Ensuring systems stay in compliance not only helps companies meet regulatory requirements; it also helps with cybersecurity. Security and privacy are typically the main reasons behind regulatory standards, so keeping your systems in compliance generally means they’re also free from the obvious vulnerabilities that can result in data breaches.
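To illustrate the daily drift check Butler describes, here is a minimal sketch (my own glue code, not Progress tooling; the profile name is hypothetical, and the report layout reflects the InSpec JSON reporter as I understand it, so verify against your version) that runs a profile and surfaces failed controls:

```python
import json
import subprocess

def failed_controls(profile: str) -> list[str]:
    """Run a Chef InSpec profile and return IDs of controls with failures.
    No check=True: inspec exits nonzero when controls fail, which is the
    very case we want to parse rather than treat as an error."""
    proc = subprocess.run(
        ["inspec", "exec", profile, "--reporter", "json"],
        capture_output=True, text=True)
    report = json.loads(proc.stdout)
    return [
        control["id"]
        for prof in report.get("profiles", [])
        for control in prof.get("controls", [])
        if any(r.get("status") == "failed" for r in control.get("results", []))
    ]

if __name__ == "__main__":
    drifted = failed_controls("cis-baseline")  # hypothetical profile name
    if drifted:
        print(f"Drift detected, alerting an administrator: {drifted}")
    else:
        print("All systems remain in the desired, compliant state.")
```

Scheduled once per day, as most customers run it, the failed-control IDs become the alert payload, and the archived JSON reports double as the month-old evidence an auditor might request.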
Chef can also use AWS metadata to build logical groups of systems inside Chef Courier. If you need to deploy a security patch, for example, it can be performed by AWS zone, with filters applied. You could do the East group first and, once it completes with at least 90% success, the West zone starts. As an added benefit, Chef can also help companies manage machine configurations as they move from on-prem to the cloud. “One thing Chef Infra does really well is manage the configuration of your operating system,” Butler says. “You can move objects from on-prem to AWS, and 95% of it is the same, so you have persistence.”

Learn more about how Chef can keep you safe and secure in the cloud. Visit us here.

[i] Gitnux, “Cloud Industry Statistics,” April 29, 2025, https://gitnux.org/cloud-industry-statistics/
