Information Week

Why More Businesses Turn to Interim CIOs

Not every company has a CIO full-time, whether due to budget constraints or competing strategic priorities. But during periods of crisis, change, or transition, these companies often benefit from  hiring a temporary technology leader, someone who typically steps into an enterprise to manage IT operations. In short, an interim CIO. For many enterprises, an interim CIO is critical to maintaining continuity, culture, and hiring flexibility, says Jeff Le, managing principal at consulting firm 100 Mile Strategies, and a fellow at George Mason University’s National Security Institute. Interim CIOs are becoming popular among organizations of all sizes, observes Stephen Matthew Arndt, president and CIO at Silver Linings Technology, a firm that offers IT services to senior living and healthcare organizations. He notes that technology has shifted from being a support function to a foundation for enterprise growth, efficiency, and security. “Yet many mid-sized businesses and nonprofits can’t justify or afford a full-time CIO,” he says. “Interim and [part-time] fractional CIOs offer a practical solution — executive-level leadership on a flexible basis.” An interim CIO can also bring immediacy and exposure to outside perspectives to a troubled business. They frequently bring an added dimension, a look into what’s happening in real time to market competitors. “That’s a big benefit,” says Scott Maguire, president and technology lead at human resources consulting firm Korn Ferry. There’s also the agility component — working alongside permanent employees for as long as an organization needs, he adds. “That’s a good value proposition for businesses.” Related:CIO Watercooler Talk: Stepping Up as C-Suite Advisors Amid Disruption For time-sensitive issues, organizations benefit from the shorter timelines of recruiting for an interim position. For most organizations, the average time to recruit a permanent replacement can be anywhere from 90 to 120 days, Maguire says. “These companies can’t afford to patiently stand by until the successor arrives,” he warns. “An interim CIO gives businesses the breathing room to make the right permanent placement without being rushed and without falling behind.” Key Benefits Arndt states that interim CIOs are typically recruited to fulfill any of five basic needs: 1. Cost efficiency – Executive expertise without the overhead of a full-time hire. 2. Strategic alignment – Making technology decisions that directly support long-term business goals. 3. Security and compliance – Providing up-to-date protection and regulatory guidance in a risk-heavy environment. Related:Ride the Wave: Modern Businesses Brave Rough Waters 4. Vendor management – Better evaluation of platforms and negotiation of contracts. 5. Staff support – Serving as a mentor and bridge between IT teams and executives. Candidate Selection  Recruiting for an interim position shouldn’t follow the exact same process as hiring a permanent role. In some instances, enterprises can study how an interim leader performs during a specified trial period before being hired as a permanent CIO. “However, there’s a big difference between an interim that’s being considered for a permanent role and one who is simply a placeholder,” he warns. Placeholders who aren’t necessarily interviewing for a permanent role are usually free from the politics and budget dynamics that a permanent CIO may be reluctant to touch. When searching for an interim CIO, it’s important to look beyond technical credentials. Arndt says. “A strong interim CIO blends deep industry knowledge with the ability to communicate clearly across leadership and IT teams.” Peer referrals, managed service providers, and candidates with proven cross-industry experience are often the best sources, he notes. Keep the endgame in mind, Maguire recommends. “Unless there’s agreement on a leadership focus, the role of interim executives or professionals isn’t necessarily to set long-term strategies or initiate a transformation agenda,” he explains. Related:Who Should Manage AI? Addressing Challenges Without setting clear goals or acquiring leadership buy-in, organizations may fail to effectively implement the interim CIO’s decisions, Arndt says. Maintaining continuity presents another challenge; he notes that if leadership doesn’t plan for how the interim CIO’s strategies will be carried forward, important recommendations may be lost forever. “The real value of an interim CIO is their ability to align technology with business strategy,” Arndt says. “Overlooking leadership, communication, and change-management skills often leads to poor results.” Some businesses ask existing full-time employees to step in and double as interim CIOs. That’s generally not a good idea, Maguire warns. Covering both roles simultaneously is a challenge. “What will fall through the cracks? How will the team handle additional burdens? There’s an opportunity cost.” Final Thoughts An interim CIO can bring a fresh perspective to a stagnant enterprise, drawing on lessons acquired from serving multiple industries. “This can be transformative for companies trying to compete in today’s fast-moving, technology-driven environment,” Arndt says. Maguire says that in most cases an interim CIO’s role can be defined quite simply. “Interim CIOs are hired to steer the ship, score quick wins, offer alternative viewpoints and, if desired, establish stability without overstepping boundaries.” source

Why More Businesses Turn to Interim CIOs Read More »

Data Sovereignty Challenge: CIOs Adapt in Real Time

CIOs certainly are not new to the challenges of data sovereignty. How and where data is stored has been top-of-mind for CIOs, from the days of on-premises systems to the era of hyperscalers and Saas applications, notes Shannon Bell, executive vice president, chief digital officer, and CIO of OpenText, an information management solutions company. “It’s always been important to know where your data is and how you’re protecting it,” she said. But current factors make that job more complex than ever. AI is now in the mix. Geopolitical tensions are rising. And equally unnerving — the big tech companies are having to reconsider their data sovereignty promises. Data Sovereignty Challenges in 2025 Figuring out a data sovereignty strategy is not a simple task, with CIOs having to factor in potential challenges from multiple sources. Surveillance laws vs. Privacy regulations The US CLOUD Act gives the U.S. government authority over U.S. tech companies and could give it access to their customers’ data, regardless of where it is being held. The 2018 law allows US companies to challenge a government order to produce data, if the disclosure poses a material risk of violating foreign laws, but does not guarantee exemption. When push comes to shove, it therefore seems that U.S. surveillance laws could win out over privacy regulations in other jurisdictions, like the EU. A Microsoft executive said as much when speaking to the French Senate this summer; Anton Carniaux, director of public and legal affairs with Microsoft France, said the company “cannot guarantee” that it would not hand over data on French citizens to the U.S. government if faced with an injunction, The Register reports. Related:Vanishing Public Record Makes Enterprise Data a Strategic Asset The uncertainty is driving concern. “There’s been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. “The bottom line is that it is too difficult to disintermediate yourself from the North American public cloud providers,” said Woo. “Like it or not, they are the backbone of your global infrastructure.” Related:The EU AI Act is Here (Is Your Data Ready to Lead?) Customer Data Protection In addition to tensions between U.S. surveillance laws and EU privacy laws, CIOs of global organizations have to think about data protection requirements across all of their customers’ jurisdictions. “Data protection for a customer in Germany is different than the requirements for data protection for a customer in the U.S. or in Singapore,” explained Bell. CIOs have to decide whether to enforce different standards of regulation across their different jurisdictions, to comply with local law, or to apply a single gold standard across all their data, regardless of geography. This can quickly be complex to manage. “We have an entire compliance organization within my technology team that probably wouldn’t have existed 20 years ago,” said Bell. With the intense proliferation of data, it can be easy to make mistakes. Data can wind up where it isn’t supposed to be. “Getting transparency but also alignment and having that in a centralized repository is incredibly difficult,” said Woo. Mignona Coté, CISO of enterprise software company Infor, agreed “You can test, test, test, test, test but still you forgot one use case. And so there will be consequences. There will be things that you’ve got to fix.” Related:Why Master Data Management Is Even More Important Now While mistakes can and do happen across company operations, mistakes in data regulation can be particularly costly, Woo pointed out. Sovereignty issues can lead to legal troubles with local governments, fines, and even global reputational damage. In an effort to address these challenges and liabilities, public cloud providers have been grappling with sovereignty issues for years and developing specific sovereign solutions, including those designed for heavily regulated industries that struggled to adopt the public cloud, Woo said. But ChatGPT “turned everything on its head,” she said. The Added Complications of AI CIOs are expected to lead the charge on AI innovation – but in order for AI to achieve its hoped-for outcomes, CIOs need good information management. What data is being used to train models? Where is that data coming from? Is it safe? Are AI projects being deployed in a way that upholds privacy regulations across different jurisdictions? “There’s a nervousness around deployment of AI, and I think that nervousness comes from — definitely in conversations with other CIOs — not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don’t necessarily need all the bells and whistles of the public cloud providers, whether that’s for latency or performance reasons, or whether it’s for cost or whether that’s for sovereignty reasons,” explained Woo. “And so, there has been this push to create and move that AI to the local environment as well.” Conversely, CIOs should understand how they can use AI to improve and automate data management. “AI could be used as an enabler to see if the data is going somewhere else,” said Coté. The Pressure Is On Meanwhile, the clock is ticking. Sovereignty has become a top board-level concern, amid the global proliferation of data privacy laws and the legal requirement to comply with them. Executive leaders want to know that data is safe and that regulatory compliance is being met — without hampering a company’s operations. Customers want

Data Sovereignty Challenge: CIOs Adapt in Real Time Read More »

CIO Watercooler Talk: C-Suite Advisors Amid Disruption

For CIOs, planning for the future can mean cutting through the digital noise of shadow AI, new cybersecurity threats, and the aspirations that surround quantum computing. While such disruptions in the IT landscape warrant attention, the CIOs from TIAA, Extreme Networks, and Akamai put into perspective some of their more immediate, tangible concerns. Where they plan to focus their attention in the coming 12 months sounds less like chasing trends and more about prepping their organizations to use modern tech — including AI — to deliver results.  Sastry Durvasula, chief operating, information and digital officer at financial services organization TIAA, says the rapid evolution and adoption of AI in recent years has added new responsibilities for CIOs and opened avenues for professional growth.   “AI, its potential, and its disruptive power, I think, has created a set of strategic opportunities for the CIO to be the front-line leader for a company,” he says.  Translating for the C-Suite For Durvasula, who has spent most of his career in the financial services industry, and for the past 25 years has worked for companies that were often more than a century old, IT leadership is about translation. Working with such entrenched incumbents means finding ways to bridge modern tech with operations that have seen little change over the decades. With the fast pace of AI’s spread, C-suite leaders may need careful guidance to navigate these new tech frontiers. Related:Ride the Wave: Modern Businesses Brave Rough Waters “CIOs have such an important role at this juncture because they are the strategic advisors to the CEO, to the rest of the executive committee, and to the board,” says Durvasula. Workforce Transformation CIOs are also the people who connect technology transformation to workforce transformation. Given the significant emphasis on AI, Durvasula — as others have predicted — expects the workforce to be redefined by technology. He also says it will be a CIO’s responsibility to define that workforce of the future for their company — while also ensuring that the company’s culture remains intact. This includes upskilling and re-skilling colleagues, potentially through periods of anxiousness about their current jobs and future career opportunities, he says. Reengineering Workflows, Cybersecurity’s AI Challenge Those workforce issues intersect with his next concern, the workflows of the future in this rapidly evolving space: “How should we rewire some of the critical business workflows, operational workflows, technology workflows?” Sastry Durvasula, TIAA Durvasula believes new workflows, influenced by AI, could affect roles across the enterprise, from developers and managers to designers and customer-facing personnel. There will almost certainly be challenges in maintaining company culture during such a transition, and this may well represent the area where the CIO can have the most impact. “I feel like that is the biggest mandate and opportunity for CIOs,” he says. Related:Who Should Manage AI?  AI is also changing the cybersecurity landscape for CIOs, Durvasula says. While AI could be a net positive for cybersecurity when applied defensively, the tech also presents significant governance and security concerns, such as impersonations, nation-state-sponsored attacks, and other new threats and attack vectors. The challenge for CIO, he says: “How do we leverage AI to make our cyber capabilities robust?” Finally, AI may be a prominent catalyst for change, but Durvasula also keeps watch for other disruptions and prospects that could emerge. “I think that there is a new set of business opportunities, whether they are AI-powered or just more broadly fintech-powered,” he says. To that end, Durvasula says he and TIAA look for potential strategic opportunities through TIAA Ventures, the venture capital arm of the firm, which allows them to keep a close watch on innovation that surfaces in Silicon Valley for possible investment that may expand the business into new markets. Related:Forecast for Today’s CIOs Is Simple: Turbulence “There is a range of opportunities, whether it’s with startups that are coming up or with the big tech and hyperscalers that are obviously innovating,” he says, pointing out the need for CIOs to bring their expertise to such engagements, to help leverage those innovations and opportunities to support the company’s model. IT Still Must Serve Up ROI For Anisha Vaswani, chief information and customer officer at Extreme Networks, a networking solutions provider,  key priorities include supporting business strategies and goals. At Extreme, that has meant transformation projects that include app modernization and rationalization, she says. “We’re modernizing our front office; we’re modernizing our back office.” Vaswani says there has been a shift in the CIO community mindset when it comes to AI, with curious, early experimentation giving way to thoughtful, careful investments in the technology. Anisha Vaswani, Extreme Networks Extreme Networks has developed its own AI-native platform, Vaswani says, but she also remains tempered by an MIT study that posited that about 95% of generative AI pilots fail. Rather than totally dismissing the technology based on the study, she wanted to better understand the positive results. “My first question was, ‘What were the 5% that were successful?’” Vaswani asked. Those successful pilots, she says, focused on solving real problems, often in domain-specific areas, rather than serving as general tools. Modernization Remains the Name of the Game Modernization is also front and center for Kate Prouty, senior vice president and CIO of Akamai Technologies, a provider of cloud computing and security solutions. Prouty says her organization is focused on modernization in light of legacy platforms that may still be in use. “We’re constantly making decisions about where we make technology investments based on the critical business needs for the company,” she says. Without access to more modern platforms, she says, it could be impossible for internal teams such as finance, human resources, and procurement to make use of the AI technology being pushed by vendors to embed in platforms. “A big focus for IT is really starting to modernize these platforms,” Prouty says. For example, she says their back-office systems are migrating legacy apps off an on-prem Oracle to Oracle’s cloud. This is being done in phases with HR first,

CIO Watercooler Talk: C-Suite Advisors Amid Disruption Read More »

Vanishing Public Record Makes Enterprise Data an Asset

Public data in many countries, including the U.S., once seemed like a reliable source of information, but now that data is fragile and subject to political intervention and systemic neglect. For CIOs, the implications can be profound: without stable external datasets, internal information assets must evolve from being mere operational records into strategic differentiators, new revenue opportunities, and organizational lifelines.  “We are rapidly running out of public data that is credible and usable. More and more enterprises will start to assign value to their data and go beyond partnerships to monetize it. For example, wind measurements captured by a wind turbine company could be helpful to many businesses that are not competitors,” said Olga Kupriyanova, principal consultant of AI and data engineering at ISG.  While data manipulation is a timeless tale in politics, this year the U.S. government accelerated efforts to manipulate publicly accessible data. Even seemingly nonpolitical and innocuous data, such as climate and weather records, economic indicators and scientific research, were scrubbed or tilted toward one bias or another. This is a much bigger problem than some may realize.  “We’re entering a defining moment in AI where access to reliable, scalable, and ethical data is quickly becoming the central bottleneck, and also the most valuable asset. As legal and regulatory pressure tightens access to public data, due to copyright lawsuits, privacy concerns, or manipulation of open data repositories, enterprises are being forced to rethink where their AI advantage will come from,” said Farshid Sabet, CEO and co-founder at Corvic AI, developer of a GenAI management platform. Related:The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time Disappearing Public Data  For example, in early 2025, the U.S. government removed thousands of datasets and web pages, according to The New York Times, across agencies such as the EPA, NOAA, and CDC, effectively scrubbing key sources of climate, health, and environmental justice data from the public record. It was a serious and appalling move that continues to pose substantial risks for the private sector and individuals alike. Organizations depend on public data to function, and the public needs to know their risks in climate disasters, spreading communicable diseases, and economic factors like unemployment and inflation rates.   “Through our monthly Evidence Capacity Pulse Reports, we’ve documented specific operational impacts that have real-world implications for data users,” said Nick Hart, president & CEO of the Data Foundation, a non-profit organization based in Washington, D.C. that champions the use of open data and evidence-informed public policy. “For example, the National Weather Service reduced its workforce by over 500 employees, with 52 of 122 forecasting offices now having vacancy rates above 20%, leading to operational changes in weather forecasting that affects everything from agriculture to transportation planning.”  Related:The EU AI Act is Here (Is Your Data Ready to Lead?) Among the casualties was FEMA’s “Future Risk Index,” a sophisticated tool that mapped community-level exposure to floods, fires, extreme heat, and hurricanes. Its deletion not only undermined disaster planning but also erased a resource that insurers, city planners, and businesses depended on to understand climate risk. The tool was considered of such significance to public safety that The Guardian recreated it.   The economic consequences of such data loss are already visible. Analysts estimate that U.S. public data underpinned nearly $750 billion of business activity as recently as 2022, according to the Department of Commerce. The loss of such data blinds companies that build models for everything from supply chain forecasting to investment strategy and predictions. Removing or destabilizing these resources not only damages confidence in the government but also clouds economic outlooks, leaving enterprises and markets vulnerable, according to Reuters.    Related:Why Master Data Management Is Even More Important Now These disruptions are not contained within the U.S. alone. According to Reuters, officials in Europe have recognized the fragility of relying on American scientific datasets. Countries across the EU are accelerating efforts to build alternative systems for collecting and storing critical environmental and climate information. Activists, researchers, and civil servants have also launched “guerrilla archiving” projects to mirror and preserve data before it disappears.   Global trust in shared information infrastructure is indisputably fractured. But trust in American scientists remains firm. “In March, more than a dozen European countries urged the EU Commission to move fast to recruit American scientists who lose their jobs to those cuts,” according to Reuters. The resulting brain drain further diminishes access to information in the U.S.  Saving and Finding Public Data in Unexpected Places  Meanwhile, private researchers and some nonprofit organizations sprang into action to monitor and preserve public data. Two examples are the aforementioned data rescue efforts via guerrilla archiving in the EU and the Future Risk Index, which was recreated by The Guardian after FEMA was mandated to destroy it.  Another example is found in a group of researchers and students at the Harvard T.H. Chan School of Public Health who immediately began a data preservation marathon in an unholy race to scrape and download public data from websites faster than government agencies could take it down. The public data they managed to save was then distributed back to the public through repositories such as the Harvard Dataverse. Unfortunately, the changes to government websites happened faster than the researchers could react. Not all of the data was preserved.   Fortunately, all is not lost. For example, federal open data continues to expand. “Data.gov includes over 317,000 datasets as of our July 31 report, up from about 308,000 data assets in January. This demonstrates that while there are capacity concerns in some areas, data access continues to grow in others. We also observed that at the Department of Education’s National Center for Education Statistics — a federal statistical agency — a decision to remove remote access for restricted use education data was reversed which allows researchers access to data through the end of 2025,” said Hart.  Hart also said that The National Secure Data Service at NSF has continued issuing contracts to build an effective multi-lateral data

Vanishing Public Record Makes Enterprise Data an Asset Read More »

The EU AI Act is Here (Is Your Data Ready to Lead?)

The accelerated adoption of AI and generative AI tools has reshaped the business landscape. With powerful capabilities now within reach, organizations are rapidly exploring how to apply AI across operations and strategy.   In fact, 93% of UK CEOs have adopted generative AI tools in the last year, and according to the latest State of AI report by McKinsey, 78% of businesses use AI in more than one business function.  With such an expansion, governing bodies are acting promptly to ensure AI is deployed responsibly, safely and ethically. For example, the EU AI Act restricts unethical practices, such as facial image scraping, and mandates AI literacy. This ensures organizations understand how their tools generate insights before acting on them. These policies aim to reduce the risk of AI misuse due to insufficient training or oversight.  In July, the EU released its final General-Purpose AI (GPAI) Code of practice, outlining voluntary guidelines on transparency, safety and copyright for foundation models. While voluntary, companies that opt out may face closer scrutiny or more stringent enforcement. Alongside this, new phases of the act continue to take effect, with the latest compliance deadline taking place in August.  This raises two critical questions for organizations. How can they utilize AI’s transformative power while staying ahead of new regulations? And how will these regulations shape the path forward for enterprise AI?  Related:The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time How New Regulations Are Reshaping AI Adoption  The EU AI Act is driving organizations to address longstanding data management challenges to reduce AI bias and ensure compliance. AI systems under “unacceptable risk” — those that pose a clear threat to individual rights, safety or freedoms — are already restricted under the act.   Meanwhile, broader compliance obligations for general-purpose AI systems are taking this year. Stricter obligations for systemic-risk models, including those developed by leading providers, follow in August 2026. With this rollout schedule, organizations must move quickly to build AI readiness, starting with AI-ready data. That means investing in trusted data foundations that ensure traceability, accuracy and compliance at scale.  In industries such as financial services, where AI is used in high-stakes decisions like fraud detection and credit scoring, this is especially urgent. Organizations must show that their models are trained on representative and high-quality data, and that the results are actively monitored to support fair and reliable decisions. The act is accelerating the move toward AI systems that are trustworthy and explainable.  Related:Vanishing Public Record Makes Enterprise Data a Strategic Asset Data Integrity as a Strategic Advantage  Meeting the requirements of the EU AI Act demands more than surface level compliance. Organizations must break down data silos, especially where critical data is locked in legacy or mainframe systems. Integrating all relevant data across cloud, on-premises and hybrid environments, as well as across various business functions, is essential to improving the reliability of AI outcomes and reduce bias.  Beyond integration, organizations must prioritize data quality, governance and observability to ensure that the data used in AI models is accurate, traceable and continuously monitored. Recent research shows that 62% of companies cite data governance as the biggest challenge to AI success, while 71% plan to increase investment in governance programmes.   The lack of interpretability and transparency in AI models remains a significant concern, raising questions around bias, ethics, accountability and equity. As organizations operationalise AI responsibly, robust data and AI governance will play a pivotal role in bridging the gap between regulatory requirements and responsible innovation.  Related:Why Master Data Management Is Even More Important Now Additionally, incorporating trustworthy third-party datasets, such as demographics, geospatial insights and environmental risk factors, can help increase the accuracy of AI outcomes and strengthen fairness with additional context. This is increasingly important given the EU’s direction toward stronger copyright protection and mandatory watermarking for AI generated content.  A More Deliberate Approach to AI  The early excitement around AI experimentation is now giving way to more thoughtful, enterprise-wide planning. Currently, only 12% of organizations report having AI-ready data. Without accurate, consistent and contextualised data in place, AI initiatives are unlikely to deliver measurable business outcomes. Poor data quality and governance limits performance and introduces risk, bias and opacity across business decisions that affect customers, operations, and reputation.  As AI systems grow more complex and agentic, capable of reasoning, taking action, and even adapting in real-time, the demand for trusted context and governance becomes even more critical. These systems cannot function responsibly without a strong data integrity foundation that supports transparency, traceability and trust.  Ultimately, the EU AI Act, alongside upcoming legislation in the UK and other regions, signals a shift from reactive compliance to proactive AI readiness.  As AI adoption grows, powering AI initiatives with integrated, high-quality, and contextualised data will be key to long-term success with scalable and responsible AI innovation.  source

The EU AI Act is Here (Is Your Data Ready to Lead?) Read More »

Ride the Wave: Modern Businesses Brave Rough Waters

It often feels like the world is becoming increasingly unpredictable. The proliferation of AI, economic volatility, a shifting regulatory landscape, and other factors have created new challenges that today’s businesses need to account for. For organizations across nearly every industry, this added uncertainty can make business leaders (not to mention corporate boards) a little nervous.  The truth, however, is that today’s business leaders are no strangers to risk — and savvy organizations are actively factoring it into their plans for the future. Because while today’s threat landscape may look a bit different, the underlying risks — and how to address them — have largely remained the same. Businesses are constantly adapting to new regulations and compliance frameworks, facing down new cybersecurity threats, and evolving to meet the needs and preferences of consumers. Simply put, it comes with the territory.   And, while the looming threat of new technologies, regulations and economic uneasiness may feel like a tsunami on the horizon, risk-aware organizations can easily position themselves to ride the wave.   Today’s Volatile Risk Landscape  Most businesses understand that risk is inevitable, and responsible organizations already tend to have strong risk management practices in place. Of course, preparing for what might be around the corner isn’t easy. For example, it’s hard to predict exactly how AI will impact certain risk areas, when new regulations might emerge, or what economic conditions might look like a year from now. But that’s why savvy organizations are constantly creating, testing and reworking resilience plans, business continuity plans, crisis management plans and incident response plans. A strong risk management program doesn’t mean you have a plan in place for every possible scenario. But it does mean ensuring your entire organization is equipped to make informed, risk-based decisions, even on its very worst day.  Related:Who Should Manage AI? That said, while any business can benefit from a well-established, mature approach to risk management, the current velocity of change in the world necessitates an added degree of flexibility.   With conditions changing rapidly, it’s not always easy to tell if you’re preparing for the right scenarios. Economic uncertainty can lead to changing customer and commercial sentiments, supply chain volatility, hesitancy (or enthusiasm) regarding mergers and acquisitions, and other potentially disruptive developments. Even the most well-prepared businesses sometimes find that previously reliable scenarios don’t fit the current landscape, and when that happens, they need to be prepared to dust off their assumptions and reassess their risk models.  Related:Forecast for Today’s CIOs Is Simple: Turbulence Re-Evaluating Your Assumptions  If you’re a risk leader at a modern business, it’s important to consider whether you are using the right stress factors, and whether they are weighted properly. When conducting exercises, are you taking the scenario to the point of failure to better understand where potential breaking points exist? Perhaps most importantly, is scenario planning a “check-the-box” exercise, or is your organization truly interested in understanding where it has the potential to suffer a mortal wound? Risk management isn’t about responding to specific developments but about understanding where your weaknesses lie and ensuring you can compensate for them when the unexpected happens.  Testing is critical. It’s one thing to have a policy on paper, and another to put it into practice. It’s not enough to stress test a balance sheet or simulate a cyberattack on a specific system. It’s about wargaming a fleshed-out scenario and understanding what levers need to be pulled to address it. Maybe that means sourcing products or materials from alternative suppliers. If so, can you count on those suppliers to be available at an acceptable price point? If critical systems are taken down in a cyberattack, do you have backups ready? Is there a process for activating them, and do you know who to contact? Again, risk management is about preventing surprises, and that means testing everything, top to bottom, so you know exactly what you can count on and when.  Related:IT Leadership Is More Change Management Than Technical Management It’s also important to remember that “risk management” is not the same as “risk avoidance.” For example, any business could completely eliminate the risk of email phishing by cutting itself off from the internet, but that would have obvious drawbacks. Think of a medieval knight wearing a perfect set of armor with no weaknesses. Sure, he might be protected, but he’ll be effectively paralyzed. The same is true of businesses that take an overly cautious or conservative approach.   Managing risk is good, but don’t become so risk averse that it limits your agility and flexibility. You never want to be paralyzed in the face of change you want to be light on your feet and quick to adapt.  Many of today’s biggest “risks” illustrate the difference between risk management and risk avoidance, and AI is a prime example. In fact, AI is so ingrained in today’s technology that it’s hard to even call it an “emerging” technology anymore. While it’s true that AI comes with risks, avoiding the technology altogether is no longer an option.   Instead, businesses need to ensure they have the right processes in place to take advantage of the benefits while limiting the risks. By upleveling your risk management practices and reevaluating them on a continuous basis, you can make informed, risk-aware decisions that drive the institution forward.  You may not be able to predict the exact risks that will impact your business, but the right approach can make all the difference. When you see a tsunami of change on the horizon, don’t panic. With the right risk management practices in place, you can maintain the agility you need to stand tall and ride the wave.   source

Ride the Wave: Modern Businesses Brave Rough Waters Read More »

Who Should Manage AI?

Artificial intelligence is the premier technology initiative of most organizations, and it is entering the door through multiple departments in BYOT (bring your own technology), vendor, and home-built varieties. To manage this incoming technology, the trust, risk, and security measures for AI must be defined, implemented, and managed. Who does this? Most companies aren’t sure, but CIOs should get ready as the responsibility is likely to fall on IT. Here are some steps that chief information officers can take now.  1. Meet with upper management and the board  AI adoption is still in early stages, but we’ve already seen a series of embarrassing failures that have ranged from job discrimination that violated federal statutes, to the production of phony court documents, the failure of automated vehicles to recognize traffic hazards, and false retail promises presented to consumers that companies had to pay damages for.  Most of these disasters were inadvertent. They originated from users not checking the verity of their data and algorithms or using data that was misleading because it was wrong or incomplete. The end result was damage to company reputations and brands, which no CEO or board wants to deal with.  This is the conversation that the CIO should have with the CEO and the board now, even though user departments (and IT) might already be in stages of AI implementation. The takeaway from discussions should be that the company needs a formal methodology for implementing, vetting, and maintaining AI — and that AI is a new risk factor that should be incorporated into the enterprise’s corporate risk management plan.  Related:Forecast for Today’s CIOs Is Simple: Turbulence 2. Update the corporate risk management plan  The corporate risk management plan should be updated to include AI as a new risk area that must be actively managed.  3. Collaborate with purchasing  Gartner predicted that 70% of new application development will be from user departments. Users are using low- or no-code tools that are AI-enabled. The rise of citizen development is a direct result of IT taking too long to fulfill user requests. It’s also generated a flurry of mini-IT budgets in user departments that bypass IT and go directly through the company’s purchasing function.  The risk is that users can purchase AI solutions that aren’t properly vetted, and that can present risk to the company.  One way that CIOs can help is by creating an active and collaborative relationship with purchasing that enables IT to perform its due diligence for AI offerings before they are ordered.  Related:IT Leadership Is More Change Management Than Technical Management 4. Participate in user RFP processes for IT products  Although many users are going off on their own when they purchase IT products, there is still room for IT to insert itself into the process by regularly engaging with users, understanding the issues users want to solve, and helping users solve them before products are purchased. Business analysts are in the best position to do this, since they regularly interact with users — and CIOs should encourage these interactions.  5. Upgrade IT security practices  Enterprises have upgraded perimeter and in-network security tools and methods for transactional systems, but AI applications and data present unique security challenges. An AI chat function on a website can be compromised by repetitive user or customer prompts that trick the chat function into taking wrong actions. The data AI operates on can be poisoned so as to deliver false results that the company acts on. Over time, AI models can also grow obsolete, generating false results.  AI systems, whether hosted by IT or end users, can be improved by making revisions to the QA process so that systems undergo testing by users and/or IT trying to imagine every possible way that a hacker would try to break a system, and then trying these ways to see if the system can be compromised. An additional approach, known as red teaming, is when the company brings in an outside firm to perform the QA by trying to break the system.  Related:Digital Transformation Is a Golden Opportunity for RGP IT can install this new QA approach for AI, selling it to upper management and then making it a company requirement for the pre-release testing of any new AI solution, whether purchased by IT or end users.  6. Upskill IT workers  A new QA procedure to hacker-test AI solutions before they are released to production, or new tools for vetting and cleaning data before it is authorized for AI use, or methods to check the “goodness” of AI models and algorithms are all skills that will be needed in IT to achieve AI competence. Staff upskilling is an important directive, since less than one quarter of companies feel that they are ready for AI. Users are even less prepared, so would likely welcome an active partnership with a, AI- skilled IT department.  7. Report monthly on AI  The burden of AI management is likely to fall on IT, so the best thing for CIOs to do is to aggressively embrace AI from the top down. This means making AI management a regular topic in the monthly IT report that goes to the board, and also periodically briefing the board on AI. Some CIOs might be hesitant to assume this role, but it has its advantages. It clearly establishes IT as the enterprise’s AI focal point, which makes it easier for IT to establish corporate guidelines for AI investments and deployments.  8. Clean data and vet data vendors  IT is the data steward of the enterprise. It’s responsible for ensuring that data is of the highest quality, and it does this by using data transformation tools that clean and normalize data. IT also has a long history of vetting outside vendors for data quality. Quality data is essential to AI.   9. Work with auditors and regulators  Outside auditors and regulators can be extremely helpful in identifying AI best practices for IT, and in requiring AI practices for the enterprise which in turn can

Who Should Manage AI? Read More »

Forecast for Today's CIOs Is Simple: Turbulence

The role of the chief information officer in terms of the expectations heaped on them mirrors this century’s global weather patterns. Wait a minute, it’ll change.  Over the course of 40 years in IT journalism, I’ve watched CIOs deal with the dynamic environment of IT, not just with the many tech revolutions but also the waves of change focused on corporate structure that, in turn, drive further shifts in IT strategy.  Get ready for yet more change.  But first, let’s take a step back in time to 1985-ish. For those of you who weren’t yet born at that time, this isn’t just another grandpa tale like, “We walked two miles to school in a blizzard, barefoot, uphill both ways!”  It was in the mid-80s when IT management gurus made the term chief information officer popular. Prior to that point the top IT exec in a company tended to be called something like VP of IT or information systems manager.  The CIO role was revolutionary in ways you might not expect. The CIO had to be responsible not just for IT’s data center or distributed minicomputers. That CIO also took on the chore of developing a business strategy for a company full of PCs, including a network to tie those PCs — often smuggled into the office to avoid IT — to servers. Keep in mind PCs really were new in 1985; it was just four years since the IBM PC debuted.  Related:IT Leadership Is More Change Management Than Technical Management However, the CIO also wasn’t just to be about computers and software. They were saddled with tech such as fax machines, copiers, and wired telephone systems supporting thousands of employees.  The CIO was viewed as sort of a chief knowledge officer. If it involved “information”, they were responsible, at least in theory. Humans having a defensive nature, and the bureaucrats who headed up some of the knowledge domains didn’t want some new tech boss sticking their nose into their operations. So, the noses of the PC manager and telecom director quickly got out of joint, and the CIO had to calm hard feelings.   Yet, those early CIOs were lucky in a sense. They worked in a relatively stable and predictable environment in 1985. The real change was only just getting started.  Within a couple years, the CIO role went beyond complicated. Computer worms and viruses were showing up in a “secure” setting. Meanwhile, experts and boards of directors began questioning the value of those very, very expensive mainframes.  Soon, as security threats continued to flourish with the companywide adoption of PCs and networking, it was the CIO who faced the challenges of protecting the data and operations. That CIO had to cede decision-making to what eventually became a CISO, and both struggled to maintain the balance between expanding access and protecting assets.  Related:Digital Transformation Is a Golden Opportunity for RGP By the time the World Wide Web debuted in the mid-90s, everyone from business leaders to employees and the public was clamoring for more and easier access to the “information” that the CIO had to manage.  The changes became more numerous, more rapid and more complex:  Decentralization of computing power  Y2K at the turn of the century  24×7 remote and mobile access  Outsourced processing and, sometimes, all IT services  Developer and security talent shortages  The combination of world terrorism and economic uncertainty.  Now, the CIO faces what can be viewed simultaneously as the Godzilla and the Super Man of IT: artificial intelligence. How you see it depends on your situation and your organization’s needs.  Think about this: An AI implementation in some way weaves together all of those earlier concepts, with the noticeable exception of Y2K and COVID, which are just so yesteryear. And each of the many changes and concepts plays a role in any AI initiative today.  Guess who has to manage that AI initiative and the seemingly unstoppable spread of rogue AI? In the end, it comes down to the CIO.   Related:What a CIO Needs to Do Today to Prepare for Quantum Computing For all the talk about the wonder and worry surrounding AI technologies, enterprise use of AI boils down to how you execute and manage AI tech, policies and people.   Consider that list of change factors. We’ve heard plenty about the security aspect of AI as a threat and a tool, and AI does present staffing challenges. AI has its pros and cons in SaaS strategies, as we’ve seen in numerous breaches. Today’s CIO has to figure out how citizen development plays a role in AI’s implementation, as employees download GenAI apps for their own use, perhaps without understanding the apps’ limitations and vulnerabilities.  It’s up to CIOs to define not just a business strategy for rolling out AI apps, but also a communication strategy on AI use for their many end-user departments. They have to know which AI technologies the company and those many departments can use and how, and those CIOs have to move fast and be flexible. As we have seen over the past two years, the 1980s approach to PCs — “Just don’t use a PC” — won’t work with AI.   A reactive attempt to simply ban AI’s use by user departments is too late and too draconian. CIOs have to take lessons from the traditional application development model to define user needs and ways to meet those needs. But do it much faster.  The traditional start to application development has been meetings with department heads to define user needs, leading to months or years of development. Today, IT’s response has to be accelerated in ways that would have been impossible to imagine a year or two ago. So, the classic app dev approach of interviewing users and then returning with a limited-function application in three, six, or 12 months simply won’t work. That process has to start now, and even that may be too little too late for many companies.   Wait for the perfect strategy, and the AI horse won’t just be out

Forecast for Today's CIOs Is Simple: Turbulence Read More »

What a CIO Must Do to Prep for Quantum Computing

Recent advancements in quantum computing indicate that it may soon become a mainstream technology, particularly among scientific and medical adopters. Once this happens, it will only be a matter of time before quantum enters the enterprise mainstream.  The advent of production-scale quantum computing, while still some years out, necessitates immediate strategic action from CIOs, says André M. König, CEO of Global Quantum Intelligence, a quantum industry analysis firm. He notes that transitioning to it won’t be a trivial undertaking, likely requiring five to 10 years for large enterprises to complete. “Therefore, the most critical recommendation for CIOs is to start immediate planning and to initiate a quantum-safe program if one isn’t already underway,” König advises.  Scott Buchholz, emerging technology research director and quantum computing leader at Deloitte, also believes that now is the time for CIOs to begin preparing for quantum computing’s inevitable arrival. “Many CIOs and their enterprises continue to take a ‘wait and see’ approach,” he says. “However, given the inherent technical complexity, implementation strategies will necessitate extended timelines — spanning years rather than months — and will demand the careful development of both talent and technology operating models across investment cycles.”  Related:Forecast for Today’s CIOs Is Simple: Turbulence First Steps Before doing anything else, CIOs and other leaders need to understand which use cases might benefit from quantum computing, and when capabilities to meet those needs might arrive, Buchholz says. They will also need to identify the teams to handle development and operations.  König agrees that CIOs should begin defining job roles for a future quantum environment, exploring whether to leverage external quantum consultants or to cultivate internal expertise through robust training and upskilling programs. “Investing in staff education on quantum computing will be vital for building a knowledgeable workforce capable of navigating this complex landscape,” he explains.  Meanwhile, CIOs and their teams should engage in strategic planning by identifying potential quantum use cases that could accelerate or improve existing IT processes, König says. “Advanced technology groups within the organization should be tasked with evaluating quantum computing readiness and continuously monitoring for signals of disruption in the quantum landscape.”  Now is also the time to start building close relationships with quantum technology vendors, Buchholz says. “Organizations that invest in cultivating relationships within the quantum ecosystem will be well-established and supported when quantum becomes commercially viable,” he states. “The time to begin forging these connections is now.”  Related:IT Leadership Is More Change Management Than Technical Management Business partners can leverage their significant investments and expertise in multiple ways to help educate CIOs on quantum solutions, says Doug Saylors, a partner at technology research and advisory firm ISG. “Partners can help provide a basic understanding of quantum physics and computing as a foundation, along with industry-specific use cases.” he states. “Additionally, we’re starting to see the emergence of industry-specific consortiums focused on the application of quantum computing in a non-competitive manner.”  Building Support At a minimum, CIOs should begin creating awareness campaigns for their C-level colleagues on use cases applicable to their industry and begin forecasting budget and skill requirements, Saylors says.  CIOs should start with sober, informed communication, Buchholz recommends. “Opening a dialogue offers the space to educate stakeholders on how … these new [quantum] tools provide opportunities and risks,” he says. “From there, discussions can shift to solution-oriented conversations.”  Related:Digital Transformation Is a Golden Opportunity for RGP Security Matters CIOs and CISOs must also be proactive in educating leaders on quantum computing’s potential risks, Buchholz says. He points to Deloitte’s 2025 Tech Trends Report, which found that just over half (52%) of organizations are assessing their exposure and developing quantum-related risk strategies, yet with only 30% taking decisive action to implement solutions. “With the approaching deadlines for widespread adoption, those numbers should be much higher,” Buchholz warns.  Risk management must be at the forefront of quantum preparedness, König cautions. “A key focus should be on cryptographic vulnerabilities, particularly addressing the ‘harvest now, decrypt later’ threat.” He believes that implementing post-quantum cryptography (PQC) solutions will be crucial in combatting this threat. “Ultimately, CIOs should adopt a risk-based approach to quantum preparedness, prioritizing the protection of the most critical assets and data against future quantum-enabled threats.”  Be Prepared While there’s no set date for when widespread quantum adoption will arrive, preparedness is crucial, Buchholz says. “CIOs must prepare their organizations, teams, and stakeholders to help keep their organization on a path of success and growth,” he advises. “Starting to plan today, and working to thoughtfully scale for the future, will help a CIO prepare for both the opportunities and risks quantum computing can bring to their enterprise.”  source

What a CIO Must Do to Prep for Quantum Computing Read More »

IT Leadership Means More Change Management

One way to see a new country is to become a professional truck driver. Werner Enterprises EVP and CIO Daragh Mahon did just that when he immigrated from Ireland to the US. After about nine months, he wanted to find something different and decided to leverage his prior inventory management experience to embark on a new career.  “I went to work for a company called PeachTree Software as an inventory control manager, but literally everything was considered inventory management — unloading trucks, shipping stuff, running returns. Over the first couple of months, I realized that nothing was automated. It was all spreadsheets at the time,” says Mahon. “I started learning how to code, and I started to write some applications, but I got fed up. I thought, ‘This is just pointless. Why should I build this if I can buy it?’ so I negotiated a deal with Manhattan Associates for a shipping and warehouse management system and signed a contract I had no authority to sign.”  While he wasn’t fired for overstepping his role, the CFO, Teri McEvily, reminded him that he’d violated company policy and that the software ‘better work’ or he would be fired for sure. Mahon worked closely with IT and together they helped Manhattan Associates develop one of their first high-volume shipping systems. Over the next couple of years, he continued to help IT in the afternoon or evenings, after working full-time in the warehouse.  Related:Digital Transformation Is a Golden Opportunity for RGP When PeachTree was acquired by Sage Group in 1999, the IT department was doing an SAP implementation. McEvily was tasked with running the SAP implementation for North America and subsequently globally. She asked Mahon to help with the implementation since he was a supply chain expert.  “I volunteered for and moved into the data migration area, because it wasn’t working and nobody had any data migration experience, nor did I. [Nevertheless,] Teri told me to make it work because it was a mess,” says Mahon. “So, I gathered a bunch of SQL and data guys, we sat down and figured it out. Suddenly, we were doing things with SAP data services that Deloitte and SAP probably didn’t know you could do. I started gradually doing all sorts of SAP work. Wherever there was a need, I filled it, such as running security on the platform. The more I got stuck with technical problems, the more I enjoyed it. I love chaos.”  Meanwhile, his job scope expanded to managing the relationship between the business teams and the service desk and infrastructure. In fact, Mahon and the Sage team virtualized the SAP environment, which was an industry first. When they told SAP what they were doing, they expressly told him not to do it on the grounds that it wouldn’t work.   Related:What a CIO Needs to Do Today to Prepare for Quantum Computing “We installed SAP on virtual servers in a data center, and SAP told us not to do it, but we couldn’t figure out why we shouldn’t do it [because] it would save us a ton of money [and] effort. It worked just fine,” says Mahon. “Over the coming years, I started to work with SaaS platforms Salesforce and Zuora, and that’s kind of what brought me to my build versus buy mentality. This was at the beginning of our move to the cloud, which the CIO spearheaded. We were early adopters, and I just felt it was the wave of the future. As we came out of the SAP implementations, I got involved in business analysis and I remember going to Teri one day and saying, ‘You know, I’m kind of pissed off. I’ve become a jack of all trades and a master of none, when others in the organization own a specific piece of the IT world.”  McEvily told Mahon he was looking at it the wrong way. The reason he found himself in this situation was because he was the only guy she and others could trust to fix problems. (In fact, his favorite award of all was the one given after the 10-year SAP implementation. All the approximately 200 people involved in the deployment nominated Mahon for the award, “Who to call when the **it hits the fan.”)  Related:IT Leadership Takes on AGI Mahon spent just over 20 years at Sage Group, rising through the ranks to director of IT and finally senior director, IT & business applications. After that, he held several positions at Vonage, from director of business services to ultimately senior vice president – global and IT business applications before joining Werner as EVP and CIO in 2020.  “As I moved on into director, VP, and senior VP, and CIO roles, I realized that all of that being kicked around the place, going from one area to another, one problem to another and one **it storm to another was actually a good thing,” says Mahon. “It meant I had cybersecurity and infrastructure knowledge. I ran development teams and the service desk. I worked with all the different business units as well: marketing, sales, contact center, accounting, finance, HR, you name it. So, when I sit down and have an IT conversation with anyone, I’ve been there and done [what they do] to a certain extent.”  Important Lessons Learned Along the Way  Planning is considered critical in business to keep an organization moving forward in a predictable way, but Mahon doesn’t believe in the traditional annual and long-term planning in which lots of time is invested in creating the perfect plan which is then executed.   “Never get too engaged in planning. You have a plan, but it’s pretty broad and open-ended. The North Star is very fuzzy, and it never gets to be a pinpoint [because] you need to focus on all the stuff that’s going on around you,” says Mahon. “You should know exactly what you’re going to do in the next two to three months. From three to six months out, you have a really

IT Leadership Means More Change Management Read More »