Information Week

Why Most Agentic Architectures Will Fail

Agentic artificial intelligence is expected to have a major impact because it can execute complex tasks autonomously. For now, the hype is outstripping successful implementations, and there are a lot of reasons for that.

“In 2024, AI agents have become a marketing buzzword for many vendors. However, for user organizations, agents have been an area of early curiosity and experimentation, with actual implementations being far and few,” says Leslie Joseph, principal analyst at Forrester. “We expect this to change in 2025 as the technology and the ecosystem mature. However, our prediction offers a cautionary note.”

Joseph says organizations attempting to build AI agents are failing for three main reasons: a poorly scoped vision for agentic workflows, a poor technical solution, and a lack of focus on change management.

“A poorly scoped vision for agentic workflows results in either a too broad or narrow bounding box for agent functionality,” says Joseph. “Too narrow a scope may render the problem as solvable by a deterministic workflow, while too broad a problem might introduce too much variability. Agent builders should ask themselves how best to define the business problem they are trying to solve, and where an AI agent fits into this scope.”

Second, it’s early days. Agents are still very early-stage applications, and the ecosystem, including agentic tooling, is less evolved than one might expect.

“While many vendors message around the ease-of-use and drag-drop nature of their agent builder platforms, the fact is that there is still a lot of engineering needed under the hood to deliver a robust enterprise solution, which requires strong technical skills,” says Joseph.

Finally, a lack of focus on change management isn’t helping. Organizations need to understand how the agentic workflow fits into or enhances existing processes and be proactive about managing change.

“The invention of LLMs was like the discovery of the brick,” says Joseph. “With agents, we are now figuring out how to put these bricks together to construct homes and cities and skyscrapers. Every enterprise will need to identify what their desired level of autonomy is, and how to build towards that using AI agents.”

He expects the short-term benefits to be process improvement and productivity, but over the longer term, enterprises should be ready for agents to create disruptions across the tech stack. For now, companies should embrace AI agents and agentic workflows, given their disruptive potential.

“Start investing in experiments and allocating budgets towards proofs-of-concept. Ensure that your teams learn along the way rather than outsourcing everything to an ISV or tech vendor, because these learnings will be crucial down the road,” says Joseph.

Multi-Agent Workflows Are Challenging

When establishing a multi-agent workflow, there are three primary challenges businesses face, according to Murali Swaminathan, CTO at software company Freshworks. First, it’s incredibly difficult to make workflows predictable in a world that is unstructured and conversational. Second, even complex reasoning in workflows can be prescriptive and hard to achieve reliably. Third, continuous evaluation of these workflows is necessary to measure, and ultimately realize, efficacy.

“[E]nterprises must establish clear approaches on what workflows or problems they want the agentic systems to solve,” says Swaminathan. “Additionally, it’s critical that they develop a clear plan on how they will gauge success. This approach will ensure that expectations are measured, and that a strategy of ‘progress over perfection’ is employed.”

Over the short term, enterprises will most likely achieve task-based goals related to the employee and agent. Over the long term, business benefits should follow, along with insights about what the business should and should not do.

“[C]reate a clear game plan on how to implement, utilize, and measure the success of agentic architectures,” says Swaminathan. “Failing to plan is planning to fail.”

Insufficient Infrastructure and Data Governance

When it comes to agentic architectures, infrastructure and data governance matter greatly.

“Without the right infrastructure and data governance in place, agentic architectures struggle to handle the complexity, scale, and interoperability needed for successful implementation,” says Doug Gilbert, CIO and chief digital officer at experience-driven digital transformation partner Sutherland Global. “Companies should focus on building a strong digital core that can handle the high demands of AI, from data processing to seamless integration with hybrid or multi-cloud environments. This not only allows organizations to scale AI capabilities efficiently but also ensures the flexibility to adapt as systems evolve.”

Equally important is a well-defined data strategy. Whether leveraging a hybrid, private, or multi-cloud approach, secure and accessible data is essential for building robust AI solutions, ensuring compliance and security across the board.

Interconnectivity Matters

Interacting with other systems designed for humans is much harder for agentic AI than it seems.

“Making RPA [Robotic Process Automation] nearly 100% reliable took 12-plus years. And that’s carefully hard coded to interact with human operated systems across the web and Windows. So, we see these people suggesting that they can get an LLM to do the same and it turns out [to be] quite unreliable,” says Kevin Surace, chairman and CTO at autonomous testing platform Appvance. “People will be disappointed when the agent thinks it did everything right, but you later find that payment never went out.”

Despite the fact that humans don’t get everything right, people expect agentic AI outcomes to be 100% accurate. As an accuracy benchmark, Surace suggests setting the goal as high as that of RPA or well-trained humans.

“Anyone can demo a simple action a few times,” says Surace. “But doing complex tasks with variability a thousand times without failure — then you have a product people want.”

Orchestration Can Be Tricky

Orchestration involves end-to-end harmonization of outputs from multiple agents, delivering a unified and comprehensive resolution to the user’s query.

“A key of the agentic AI architecture is its capability to organize agents logically by functional domains such as IT, HR, engineering, and
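As a rough illustration of the domain-based orchestration described above, the sketch below routes a user query to a functional-domain agent and returns its answer. It is a minimal, hypothetical example: the agent functions, the keyword-based router, and the domain names are illustrative assumptions, not any vendor's actual product or API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical domain agents; in practice each would wrap an LLM plus tools.
def it_agent(query: str) -> str:
    return f"[IT agent] resolving: {query}"

def hr_agent(query: str) -> str:
    return f"[HR agent] resolving: {query}"

def engineering_agent(query: str) -> str:
    return f"[Engineering agent] resolving: {query}"

@dataclass
class Orchestrator:
    # Map of functional domain -> agent callable
    agents: Dict[str, Callable[[str], str]]

    def route(self, query: str) -> str:
        """Pick a domain agent by simple keyword matching (a stand-in for an
        LLM-based router) and return its response."""
        lowered = query.lower()
        if any(k in lowered for k in ("laptop", "vpn", "password")):
            domain = "it"
        elif any(k in lowered for k in ("payroll", "leave", "benefits")):
            domain = "hr"
        else:
            domain = "engineering"
        return self.agents[domain](query)

orchestrator = Orchestrator(
    agents={"it": it_agent, "hr": hr_agent, "engineering": engineering_agent}
)
print(orchestrator.route("My VPN keeps disconnecting"))
```

In a real deployment the router itself would typically be model-driven and the per-domain agents would call enterprise systems, but the shape of the problem — classify, dispatch, harmonize the output — stays the same.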


What Makes a Perfect AI Pilot?

How do you get your AI use case up and running? A pilot is the essential starting point. However, if you’re like most organizations, you may run up against barriers at this early stage. In research conducted by Lenovo earlier this year, 60% of companies told us that they struggle to get stakeholder support for pilot programs, while the majority reported substantial financial constraints at both the pilot (88%) and deployment (83%) stages. How can you overcome these hurdles and put a successful pilot together? Like good cooking, it’s about getting the ingredients, people, and timing right. Let’s start with ingredients.

Bring Together the Data and Tools

Data is the key component of any AI application. What’s imperative is to first find the data that’s needed to drive your use case. While each one is different, what GenAI projects have in common is that the data is likely to be unstructured – for example, a mix of documents, PDFs, and web content – rather than relational data. To be maximally effective, AI solutions typically need as much of this data as possible.

This brings us to the second ingredient – the tools to run and manage the pilot. These break down into three broad areas:

- Data evaluation and screening. First up, is there enough data to work with? It’s important to know what’s there upfront. Tools are available now that can evaluate documents and ensure they have enough quality data to support a GenAI use case.

- Data cleansing. Document sets need to be screened for low-quality content, duplicate information, or personally identifiable information that should be masked, for example. So, you’ll need a tool that’s capable of doing this across unstructured data, which is a different challenge from cleaning up relational databases.

- GenAI operations. If an AI pilot is not properly proven, stakeholders will rapidly lose trust. Solutions have been known to “hallucinate,” give out incomplete or incorrect answers, or even make missteps like recommending a competitor’s products. GenAI operations put in place practices that check for and correct issues such as model drift and bias, guarantee accuracy, and generally make sure the solution is ready for use in a production environment. This might include, for example, using a set of baseline questions to benchmark the AI’s performance.

Assemble the Right People

Next, the people. The first and most obvious are the individuals who own the use case that the GenAI pilot is designed for. Engaging with them – and understanding their definition of success – is a critical first step. Then, there are a range of people that our experience shows are pivotal to making any AI pilot a success:

- Data steward. This is the subject-matter expert who owns and understands the data needed to drive the solution.

- Vendor data expert. They work with the data steward to make sure data is cleansed and ingested into the GenAI system, making it ready to use.

- IT team. The pilot needs people who understand the systems that are being integrated with the vendor’s AI solution and have ownership of them from a technology point of view.

- Security team member. The AI pilot also needs a security expert on hand to ensure that the solution meets your security requirements. This individual will also be able to define upfront the levels of integration required, as well as the solution’s balance between the cloud and your secure data center.

- Internal domain experts. While a vendor can of course help with testing an AI pilot, it’s essential to bring in subject-matter experts to validate whether it is providing the right responses. In an insurance use case, for example, these would be the people who know whether the AI was recommending the correct policies.

Get the Timing Right

Finally, timing. GenAI pilots typically take between two and six months, depending on the extent of the use case. However, like any good recipe, they need careful monitoring throughout the process, in this case to check for “model drift” and to correct for any changes in data or the model itself. And to save time in the future, what’s important is to use one GenAI platform for every use case, to avoid having to start from scratch every time.

Proven Solutions

Lenovo, in collaboration with NVIDIA, offers tried-and-tested tools and frameworks for building AI pilots to help you get GenAI solutions into production. Lenovo AI Advantage with NVIDIA, a full-stack solution for building and deploying AI capabilities, combines Lenovo’s services and infrastructure with NVIDIA accelerated computing and NVIDIA AI Enterprise software. To build real-world proofs of concept, Lenovo AI Fast Start delivers live solutions to demonstrate generative AI deployment and showcase business, operational, and technology results. Businesses can accelerate and quickly scale AI using full-stack NVIDIA-based technologies through Lenovo AI Fast Start for NVIDIA AI Enterprise, which also includes NVIDIA NIM microservices, NVIDIA NeMo, and NVIDIA Blueprints. To learn more about GenAI pilots, and how Lenovo and NVIDIA technologies can help, contact [email protected].
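To make the data-cleansing ingredient above concrete, here is a minimal sketch of screening unstructured documents before ingestion: dropping low-quality and duplicate items and masking personally identifiable information. The regex patterns, thresholds, and helper names are illustrative assumptions only; a production pilot would rely on a dedicated screening and PII-detection tool rather than hand-rolled rules.

```python
import hashlib
import re

# Illustrative PII patterns (assumptions, not an exhaustive rule set).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with a masked token before ingestion."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} MASKED]", text)
    return text

def cleanse(documents: list[str], min_length: int = 50) -> list[str]:
    """Drop near-empty and duplicate documents, then mask PII."""
    seen_hashes = set()
    cleaned = []
    for doc in documents:
        if len(doc.strip()) < min_length:      # screen out low-quality docs
            continue
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen_hashes:              # screen out exact duplicates
            continue
        seen_hashes.add(digest)
        cleaned.append(mask_pii(doc))
    return cleaned

docs = [
    "Policy FAQ: contact agent.smith@example.com for claims over $10,000. " * 3,
    "Policy FAQ: contact agent.smith@example.com for claims over $10,000. " * 3,
    "short note",
]
print(cleanse(docs))  # one document survives, with the email masked
```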


Building an AI Council to Drive the 2025 Tech Revolution

Gartner recently shared that AI is the No. 1 technology that CEOs believe will significantly impact their industries within the next three years. However, as enterprise leaders have realized by now, turning AI’s promise into measurable outcomes requires more than technology — it demands aligned strategies, governance, and scalable operating models. AI councils have emerged as essential tools for enterprises to harness the full potential of this evolving technology, ensuring investments align with business goals and deliver tangible results.

AI Councils: Essential for ROI

AI initiatives can quickly become fragmented and ineffective without strategic coordination. In fact, Gartner revealed in its report that 49% of leaders note challenges scaling AI due to scattered approaches. This is where AI councils come into play. Acting as central hubs, these councils streamline efforts by unifying AI investments, helping enterprises move beyond experimental projects to scalable strategies that deliver measurable outcomes. For example, AI councils bridge insights across departments, from pre-sales to customer support, while establishing governance and literacy.

At the heart of AI transformation are CIOs, making them uniquely positioned to guide their organizations through an AI council approach. No longer confined to the traditional IT role, today’s CIOs are stepping forward as leaders of business transformation and revenue growth. With comprehensive access to enterprise data and systems, CIOs can align current AI initiatives with business goals and position this technology as a competitive differentiator and growth enabler.

Establishing an AI Council

To effectively establish an AI council, business leaders must consider these three elements:

1. Identify stakeholders: Bring together leaders from cross-functional teams to ensure diverse perspectives and enterprise-wide alignment.

2. Set objectives and KPIs: Define clear, measurable goals for AI initiatives to track progress and demonstrate value.

3. Align strategies: Gartner emphasizes the importance of synchronizing AI strategies with IT and data and analytics plans to maximize synergy and streamline implementation.

Strategic Questions Every AI Council Should Address

A foundational aspect of an effective AI council is its ability to frame and address the right questions — those that maximize the impact of AI initiatives across an organization. By doing so, the council provides clarity, alignment, and actionable insights to guide strategic decisions.

Questions serve as a unifying thread, connecting diverse roles, technologies, and objectives. They ensure that every AI-related initiative contributes to broader organizational goals. In my own experience with AI councils, these questions have been instrumental in guiding successful outcomes.

For instance, my enterprise’s AI council was established with a clear purpose: to act as a cohesive force across various roles, connecting experiments, pilots, proofs of concept, and broader investments in AI. This focus has helped the council provide meaningful answers to questions such as:

- How can customer support teams leverage insights from pre-sales calls to enhance service and outcomes?

- How do we create a through-line across go-to-market (GTM) roles to avoid isolated productivity improvements and foster collective advancement?

- How can we extract maximum value from existing technologies within the enterprise tech stack?

- Is there a consolidation opportunity, such as adopting a single tool or shared technologies, to enhance collaboration and efficiency across teams?

By addressing these questions, the AI council not only found impactful solutions but also surfaced additional questions that needed to be asked — ensuring a continuous cycle of refinement and innovation.

Measuring AI Outcomes and Driving ROI

Many organizations overestimate AI’s immediate potential, leading to challenges in scalability and implementation. For example, RAND recently shared that 80% of AI projects are failing. Insufficient training data, a focus on cutting-edge technology over user needs, inadequate infrastructure for deployment, and applying AI to problems beyond its current capabilities are cited as common barriers to successful AI implementation.

AI councils enable enterprises to avoid common AI integration pitfalls like technology overhype by helping leaders focus on the impact of AI on business-critical objectives rather than the appeal of the technology itself. A successful AI council will track metrics such as time saved on revenue-critical tasks, improved customer engagement, and cost savings. Gartner also recommends developing KPIs tied directly to business priorities for clearer impact evaluation.

The Future of AI Councils: A Strategic Imperative

As CIOs and enterprise leaders take on the challenge of scaling AI, the importance of a well-structured AI council cannot be overstated. It’s a strategic imperative, not just a tactical tool. By focusing on measurable impact, ensuring alignment across roles, and embracing a continuous cycle of refinement, AI councils position organizations to thrive in an AI-driven future.


6 AI-Related Security Trends to Watch in 2025

Most industry analysts expect organizations will accelerate efforts to harness generative artificial intelligence (GenAI) and large language models (LLMs) in a variety of use cases over the next year. Typical examples include customer support, fraud detection, content creation, data analytics, knowledge management, and, increasingly, software development. A recent survey of 1,700 IT professionals conducted by Centient on behalf of OutSystems had 81% of respondents describing their organizations as currently using GenAI to assist with coding and software development. Nearly three-quarters (74%) plan on building 10 or more apps over the next 12 months using AI-powered development approaches. While such use cases promise to deliver significant efficiency and productivity gains for organizations, they also introduce new privacy, governance, and security risks. Here are six AI-related security issues that industry experts say IT and security leaders should pay attention to in the next 12 months.

AI Coding Assistants Will Go Mainstream — and So Will Risks

Use of AI-based coding assistants, such as GitHub Copilot, Amazon CodeWhisperer, and OpenAI Codex, will go from experimental and early adopter status to mainstream, especially among startup organizations. The touted upsides of such tools include improved developer productivity, automation of repetitive tasks, error reduction, and faster development times. However, as with all new technologies, there are some downsides as well. From a security standpoint, these include auto-generated code that contains vulnerabilities, data exposure, and the propagation of insecure coding practices.

“While AI-based code assistants undoubtedly offer strong benefits when it comes to auto-complete, code generation, re-use, and making coding more accessible to a non-engineering audience, it is not without risks,” says Derek Holt, CEO of Digital.ai. The biggest is the fact that the AI models are only as good as the code they are trained on. Early users saw coding errors, security anti-patterns, and code sprawl while using AI coding assistants for development, Holt says. “Enterprise users will continue to be required to scan for known vulnerabilities with [Dynamic Application Security Testing, or DAST; and Static Application Security Testing, or SAST] and harden code against reverse-engineering attempts to ensure negative impacts are limited and productivity gains are driving expected benefits.”

AI to Accelerate Adoption of xOps Practices

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — or the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says. The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps that follow predefined rules to achieve specific outcomes, and LLMs and GenAI apps that dynamically generate responses based on patterns learned from training data sets, Holt says. The trend will put new pressures on operations, support, and QA teams, and drive adoption of xOps, he notes. “xOps is an emerging term that outlines the DevOps requirements when creating applications that leverage in-house or open source models trained on enterprise proprietary data,” he says. “This new approach recognizes that when delivering mobile or web applications that leverage AI models, there is a requirement to integrate and synchronize traditional DevSecOps processes with that of DataOps, MLOps, and ModelOps into an integrated end-to-end life cycle.” Holt expects this emerging set of best practices to become hyper-critical for companies seeking to deliver quality, secure, and supportable AI-enhanced applications.

Shadow AI: A Bigger Security Headache

The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams. One example is the rapidly proliferating — and often unmanaged — use of AI chatbots among workers for a variety of purposes. The trend has heightened concerns about the inadvertent exposure of sensitive data at many organizations. Security teams can expect to see a spike in the unsanctioned use of such tools in the coming year, predicts Nicole Carignan, vice president of strategic cyber AI at Darktrace. “We will see an explosion of tools that use AI and generative AI within enterprises and on devices used by employees,” leading to a rise in shadow AI, Carignan says. “If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect,” she says. Carignan expects that chief information officers (CIOs) and chief information security officers (CISOs) will come under increasing pressure to implement capabilities for detecting, tracking, and rooting out unsanctioned use of AI tools in their environment.

AI Will Augment, Not Replace, Human Skills

AI excels at processing massive volumes of threat data and identifying patterns in that data. But for some time at least, it remains at best an augmentation tool that is adept at handling repetitive tasks and enabling automation of basic threat detection functions. The most successful security programs over the next year will continue to be ones that combine AI’s processing power with human creativity, according to Stephen Kowski, field CTO at SlashNext Email Security+. Many organizations will continue to require human expertise to identify and respond to real-world attacks that evolve beyond the historical patterns that AI systems use. Effective threat hunting will continue to depend on human intuition and skills to spot subtle anomalies and connect seemingly unrelated indicators, he says. “The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses.” AI’s ability to rapidly analyze large datasets will heighten the need for cybersecurity workers to sharpen their data analytics skills, adds Julian Davies, vice president of advanced services at Bugcrowd. “The ability to interpret AI-generated insights will be essential for detecting anomalies, predicting threats, and enhancing overall security measures.” Prompt engineering skills are going to be increasingly useful as well for organizations seeking to derive maximum value from their AI investments, he adds.

Attackers Will Leverage AI to Exploit Open Source Vulns

Venky Raju, field CTO at ColorTokens, expects threat actors will leverage AI tools to exploit vulnerabilities and automatically generate exploit code in open
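One way to picture the shadow AI monitoring that Carignan describes — detecting and tracking unsanctioned GenAI use — is a simple pass over proxy or DNS logs that flags traffic to known GenAI services not on an approved list. The domain list, the sanctioned-tool entry, and the event schema below are illustrative assumptions, not a vendor's detection logic.

```python
from collections import Counter

# Illustrative GenAI destinations to watch for; a real program would maintain
# this list through policy and threat-intel feeds (assumption, not exhaustive).
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"copilot.company-tenant.example"}  # hypothetical approved tool

def flag_shadow_ai(proxy_events: list[dict]) -> Counter:
    """Count unsanctioned GenAI destinations per user from proxy log events.

    Each event is assumed to look like {"user": ..., "host": ...}; the schema
    stands in for whatever the proxy or DNS logs actually provide.
    """
    hits = Counter()
    for event in proxy_events:
        host = event["host"].lower()
        if host in GENAI_DOMAINS and host not in SANCTIONED:
            hits[(event["user"], host)] += 1
    return hits

events = [
    {"user": "alice", "host": "chat.openai.com"},
    {"user": "alice", "host": "chat.openai.com"},
    {"user": "bob", "host": "copilot.company-tenant.example"},
]
print(flag_shadow_ai(events))  # Counter({('alice', 'chat.openai.com'): 2})
```

In practice this kind of signal would feed a data loss prevention workflow rather than stand alone, but it shows why visibility into egress traffic is the first step in reining in shadow AI.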


What CISOs Think About GenAI

GenAI is everywhere — available as a standalone tool, as proprietary LLMs, or embedded in applications. Since everyone can easily access it, it also presents security and privacy risks, so CISOs are doing what they can to stay up on it while protecting their companies with policies.

“As a CISO who has to approve an organization’s usage of GenAI, I need to have a centralized governance framework in place,” says Sammy Basu, CEO and founder of cybersecurity solution provider Careful Security. “We need to educate employees about what information they can enter into AI tools, and they should refrain from uploading client confidential or restricted information because we don’t have clarity on where the data may land up.”

Specifically, Basu created security policies and simple AI dos and don’ts addressing AI usage for Careful Security clients. As is typical these days, people are uploading information into AI models to stay competitive. However, Basu says a regular user would need security gateways built into their AI tools to identify and redact sensitive information. In addition, GenAI IP laws are ambiguous, so it’s not always clear who owns the copyright of AI-generated content that has been altered by a human.

From Cautious Curiosity to Risk-Aware Adoption

Ed Gaudet, CEO and founder of healthcare risk management solution provider Censinet, says that over the years, as a user and as a CISO, his GenAI experience has transitioned from cautious curiosity to a more structured, risk-aware adoption of GenAI capabilities.

“It is undeniable that GenAI opens a vast array of opportunities, though careful planning and continuous learning remain critical to contain the risks that it brings,” says Gaudet. “I was initially cautious about GenAI at the start because of the privacy of data, IP protection and misuse. Early versions of GenAI tools, for instance, highlighted how input data was stored or used for further training. But as the technology has improved and providers have put better safeguards in place — opt-out data and secure APIs — I have come to see what it can do when used responsibly.”

Gaudet believes sensitive or proprietary data should never be input into GenAI systems, such as OpenAI or proprietary LLMs. He has also made it mandatory for teams to use only vetted and authorized tools, preferably those that run on secure, on-premises environments to reduce data exposure.

“One of the significant challenges has been educating non-technical teams on these policies,” says Gaudet. “GenAI is considered a ‘black box’ solution by many users, and they do not always understand all the potential risks associated with data leaks or the creation of misinformation.”

Patricia Thaine, co-founder and CEO at data privacy solution provider Private AI, says curating data for machine learning is complicated enough without having to additionally think about access controls, purpose limitation, and the security of personal and confidential company information going to third parties.

“This was never going to be an easy task, no matter when it happened,” says Thaine. “The success of this gargantuan endeavor depends almost entirely on whether organizations can maintain trust with proper AI governance in place and whether we have finally understood just how fundamentally important meticulous data curation and quality annotations are, regardless of how large a model we throw at a task.”

The Risks Can Outweigh the Benefits

More workers are using GenAI for brainstorming, generating content, writing code, research, and analysis. While it has the potential to provide valuable contributions to various workflows as it matures, too much can go wrong without the proper safeguards.

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.”

However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents, as they did when first adopting the cloud.

Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year.

“The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We’re running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”

The standards landscape has also evolved. The release of NIST’s AI Risk Management Framework and concrete guidance from major cloud providers on AI governance provide clear frameworks to audit against.

“We’ve implemented these controls within our existing security architecture, treating AI much like any other data-processing capability that requires appropriate safeguards. From a practical standpoint, we’re now running different AI workloads based on data sensitivity,” says Nag. “Public-facing functions might leverage cloud APIs with appropriate controls, while sensitive data processing happens exclusively on private infrastructure using our own models. This tiered approach lets us maximize utility while maintaining strict control over sensitive data.”

The rise of enterprise-grade AI platforms with SOC 2 compliance, private instances, and no data retention policies has also expanded QueryPal’s options for semi-sensitive workloads.

“When combined with proper data classification and access controls, these platforms can be safely integrated into many business processes. That said, we maintain rigorous monitoring and access controls around all AI systems,” says Nag. “We treat model inputs and outputs as sensitive data streams that need to be tracked, logged and audited. Our incident response procedures specifically account for AI-related data exposure scenarios, and we regularly test these procedures.”

GenAI Is Improving
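As a rough sketch of the tiered, sensitivity-based routing Nag describes — not QueryPal's actual implementation — the example below classifies a record and picks a deployment tier accordingly. The tier names, classification fields, and rules are illustrative assumptions.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    RESTRICTED = 3

def classify(record: dict) -> Sensitivity:
    """Toy classification rules; a real program would use DLP tooling
    and an enterprise data classification scheme."""
    if record.get("contains_pii") or record.get("contains_phi"):
        return Sensitivity.RESTRICTED
    if record.get("internal_only"):
        return Sensitivity.INTERNAL
    return Sensitivity.PUBLIC

def route_workload(record: dict) -> str:
    """Send restricted data to a private, self-hosted model and allow
    lower-sensitivity work to use vetted cloud services."""
    tier = classify(record)
    if tier is Sensitivity.RESTRICTED:
        return "on-prem open-source model (no data leaves the boundary)"
    if tier is Sensitivity.INTERNAL:
        return "private instance with no data retention"
    return "public cloud API with logging and access controls"

print(route_workload({"contains_pii": True}))
print(route_workload({"internal_only": True}))
print(route_workload({}))
```

The point of the sketch is the design choice, not the rules themselves: the classification step runs before any model call, so the decision about where data may flow is made by policy rather than by individual users.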


New Cybersecurity Rules Coming for Health Care

Health care organizations may soon be subject to new cybersecurity rules. The US Department of Health and Human Services (HHS) is proposing an update to the HIPAA Security Rule that would require covered health care entities to bolster their cybersecurity posture.

The proposed change comes as breaches continue to wreak havoc in the health care industry. From 2009 to 2023, health care organizations reported 5,887 data breaches involving 500 or more records to the Office for Civil Rights (OCR), according to The HIPAA Journal. A total of 667 health care data breaches occurred in 2024.

Melanie Fontes Rainer, OCR director, pointed to the ransomware attack on Change Healthcare as an example of how these breaches are growing and impacting more people.

“This proposed rule to upgrade the HIPAA Security Rule addresses current and future cybersecurity threats. It would require updates to existing cybersecurity safeguards to reflect advances in technology and cybersecurity, and help ensure that doctors, health plans, and others providing health care meet their obligations to protect the security of individuals’ protected health information across the nation,” Fontes Rainer said in the HHS press release.

Proposed Rule

The HIPAA Security Rule, published in 2003, has not been updated since 2013, according to HHS. Covered entities handling electronic protected health information (ePHI) — including health care providers, health plans, health care clearinghouses, and business associates — would need to adhere to the updates in the proposed rule.

The unpublished version of the rule outlines proposed amendments to the Security Rule. The proposed changes are designed to align with best practices in cybersecurity, such as multifactor authentication, encryption of ePHI, network segmentation, and vulnerability scanning. Under the proposed rule, covered entities would be required to regularly review, test, and update cybersecurity policies and procedures, according to HHS.

“This rule represents a clear mandate for health care organizations, heightened accountability and an even greater emphasis on robust security protocols,” Shawn Hodges, CEO of Revelation Pharma, a national network of compounding pharmacies, tells InformationWeek via email. “Compliance will demand an ongoing commitment to quality control, frequent system audits, and advanced data protection measures.”

From Proposal to Practice

The proposed rule is scheduled to be published in the Federal Register on Jan. 6. Stakeholders will be able to share feedback during a 60-day public comment period. New regulations always come with the potential for pushback.

“One of the things that people will push back on is it really is going to take resources, costs and people to implement a lot of these changes,” Brian Arnold, director of legal affairs at managed cybersecurity platform Huntress, tells InformationWeek.

Resource constraint is a common concern in the health care industry, particularly for rural health care organizations and smaller providers.

Anne Neuberger, the US deputy national security advisor for cyber and emerging technology, estimates that the proposed rule would cost $9 billion in its first year and then $6 billion over the following four years, Reuters reports.

“We faced similar apprehensions when HIPAA was first introduced over two decades ago,” says Hodges. “At the end of the day, these regulations exist to serve one purpose: protecting patients and their information. Every stakeholder in health care must recognize that this isn’t just a regulatory obligation — it’s a moral one.”

The public comment period will cross over into the incoming Trump administration, raising questions about the fate of the proposed rule.

Arnold points out that issues like cybersecurity, data privacy, and national security are typically considered more bipartisan than others. On the other hand, the Trump administration has signaled a desire to slash regulations. What that means for HHS and this rule remains to be seen.

“There is the chance that there won’t be a lot of tabling of this rule and maybe embracing it, but I do think it presents the opportunity where there could be some tweaks to it [that] you might not normally have gotten if it was proposed and then adopted under the same administration,” says Arnold. “I don’t expect these to be the final versions of the rules.”

Critical Infrastructure Under Siege

Critical infrastructure continues to be a target of threat actors, both nation state-backed groups and financially motivated criminal actors. Health care is just one of those targeted sectors that could be subject to new cybersecurity rules.

“The combination of increasing awareness of the overall vulnerability of critical infrastructure cybersecurity and the increased targeting of [critical infrastructure] by both cybercriminals and nation state threat actors like Volt Typhoon lead me to believe that we’ll see more rule updates like this one in the coming year,” says Trey Ford, CISO for the Americas at Bugcrowd, a crowdsourced cybersecurity company, in an email interview.

While the final version of the proposed changes to HIPAA and a timeline for adoption are uncertain, the threats the new rule aims to address remain a reality in health care.

“All in all, cybersecurity should be treated as a cornerstone of patient care. Protecting health information is not just an IT task — it’s everyone’s responsibility in health care,” says Hodges.


Nation-State Threats Persist with Information Breach of US Treasury

On Dec. 8, cybersecurity company BeyondTrust notified the US Department of the Treasury of a threat actor intrusion, according to a letter Treasury sent to the US Senate Committee on Banking, Housing, and Urban Affairs.

This incident joins the list of other attacks attributed to China state-sponsored advanced persistent threat (APT) actors. How was this attack executed, and what is the outlook for ongoing cyber threats from China?

The US Treasury Hack

The threat actor gained access to Treasury end user workstations via a compromise of BeyondTrust. The threat actor was able to use a stolen key to “… override the service’s security, remotely access certain Treasury DO user workstations, and access certain unclassified documents maintained by those users,” according to the letter.

As of Jan. 6, BeyondTrust had fully patched vulnerabilities relating to the SaaS instances of BeyondTrust Remote Support, according to the company’s security advisory.

“BeyondTrust previously identified and took measures to address a security incident in early December 2024 that involved the Remote Support product. BeyondTrust notified the limited number of customers who were involved, and it has been working to support those customers since then,” a BeyondTrust spokesperson shared via email.

The threat actor targeted the Office of Foreign Assets Control (OFAC), the Office of Financial Research (OFR), and US Treasury Secretary Janet Yellen’s office, The Guardian reports.

OFAC administers a number of sanctions programs; threat actors could have targeted OFAC to gain insight into forthcoming US sanctions.

“It’s a more targeted approach designed specifically to get an inside look [at], potentially, future US policy,” John Ghose, government investigations and enforcement attorney and special counsel at law firm Baker Donelson, tells InformationWeek.

It is also possible the hackers have other motivations. “Their intention will probably be to manipulate or degrade the integrity of the data associated with the sanctioned personalities in China,” says Tom Kellerman, senior vice president of cyber strategy at application security company Contrast Security. “Is there a process ongoing right now to verify the integrity of the data associated with the multitude of Chinese citizens that have been sanctioned by Treasury?”

Chinese Cyber Threats and US Response

Chinese officials frequently deny involvement in hacking operations, but the US has linked China state-backed threat actors to several major intrusions, including the Treasury breach.

The major telecommunications hack discovered last year was linked to APT Salt Typhoon. China state-backed actors were also found responsible for the 2015 breach of the US Office of Personnel Management (OPM), which impacted the data of 35 million government employees. In 2020, the US Department of Justice charged four Chinese military-backed hackers for their involvement in the 2017 breach of credit reporting agency Equifax.

While the Treasury and telecommunications hacks have come to light recently, cyber threats from China are ongoing. “Cyber insurgency within US critical infrastructure is far deeper than just Treasury,” says Kellerman.

China-backed APT groups may be lurking in US government and company systems as a part of espionage campaigns, but there is growing concern about the potential for disruptive cyberattacks that cripple critical infrastructure if geopolitical tensions boil over into outright conflict. What can be done as nation state cyber threats continue to loom?

Sanctions are a common response. Shortly following the news of the Treasury hack, the federal department announced sanctions on a cybersecurity company based in Beijing, relating to its role in helping breach US communications systems between the summer of 2022 and 2023, The New York Times reports.

“At this point when it comes to actors like China and Russia and others that are so heavily blacklisted … to what extent do we have a response? We’re already limiting trade significantly,” he says. “The response would require just more sophisticated hardening of our information systems including all levels of the supply chain,” says Ghose.

Hardening of the supply chain requires an understanding of common threat actor tactics.

“We need to pay attention to the Chinese modus operandi, which is [to] island hop through other parties, whether it be cybersecurity vendors or whether it be through telecommunications carriers, and the fact that they’re developing zero days faster than any other nation state, which still allows them to bypass a lot of cybersecurity defenses,” Kellerman tells InformationWeek.

And zero-day exploitation is on the rise. Cybersecurity consulting company Mandiant, a part of Google Cloud, found that 70% of vulnerabilities exploited in 2023 were zero days, an increase compared to 2021 and 2022.

Hacks like the one on Treasury could prompt more focus on the supply chain and third-party reliance.

“Is it possible that this then results in more internalization, less reliance on third parties because of the difficulty of securing the supply chain?” Ghose asks. “That’ll be an interesting development to watch.”

The Treasury hack also comes just before the beginning of a second Trump administration, and President-elect Trump has been vocal about taking an aggressive approach to China.

“The timing is interesting just because we’re about to have an administration change,” Ghose points out. “So … the Treasury leadership is going to be turning over soon. So, OFAC policy could look very different in, say, a couple of months from now.”

The US response to nation state cyber threats, beyond OFAC, could change under a new administration.


Let AI Help You Plan Your Next IT Budget

Budget planning tools help IT leaders build an accurate estimate of future income and expenses in a detailed enough way to make sound operational decisions. That sounds simple enough, yet in actual practice creating a realistic budget is a time-consuming task that many IT leaders dread.

AI has the ability to analyze historical finance data, usage patterns, project expenditures, and related inputs to better forecast the future, says Tyler Higgins, managing director of management and technology consulting firm AArete, via email.

When teamed with automated data collection, AI has the potential to enhance many budget modeling processes, says Anurag Sahay, managing director and global lead of AI and data sciences at digital engineering firm Nagarro. In an online interview, he notes that AI can also improve extrapolation and forecasting to assess resource needs, extract key insights from unstructured feedback, and optimize decision-making models for the best planning outcome and “what-if” scenarios.

Multiple Benefits

AI-supported budget planning offers both direct and indirect benefits. “The direct benefits are streamlining and shortening the budgeting process,” Higgins says. “The ideal outcome is a predictive budgeting process that contains powerful scenario planning tools and improved accuracy.”

The most exciting part about using AI in IT budget planning is how it can shift the entire mindset from cost-cutting to value-building, says Jeff Mains, founder of Champion Leadership Group, a business training and coaching provider. Traditionally, budgets were seen as ways to manage resources and avoid overspending, but with AI we’re talking about a tool that identifies opportunities for innovation, he explains via email. “It doesn’t just keep you within budget — it shows you where strategic investments in IT can drive growth.” Mains says he uses AI not only to forecast expenses, but to create dynamic budget models that adjust in real time based on shifting business needs and external factors. “It’s about creating a budget that grows with you, rather than just containing costs.”

AI-driven predictive analytics and benchmarking tools are already available for parts of the overall IT budget process, says Steven Hall, chief AI officer at technology research and advisory firm ISG. In an email interview, he notes that several technology business management tools, such as Apptio, provide deep insights and scenario planning to analyze current spending patterns and run savings and growth scenarios. “These platforms are integrating GenAI capabilities to provide even deeper insights and look for savings by integrating usage, external benchmark, and demand data to plan better IT spending.”

First Steps

Higgins says the best way to begin using AI budget planning is to pick a specific use case and explore its potential. “We’re still in the infancy of AI, yet use cases keep growing,” he notes. “Instead of biting off everything at once, pick a few use cases and ensure that your baseline operational, financial, and usage data is sufficient, clean, and well structured.” Higgins suggests establishing an objective for each use case, then deploying a pilot AI project to determine if it’s delivering the anticipated output.

When embedded into IT financial platforms, AI budgeting will provide deeper insight into opportunities as well as create the ability to model various scenarios for growth, Hall says. “These evolving capabilities will also provide leaders with actionable insights and identify specific actions to address budget challenges.”

The best approach is to take the long view, Mains says. “AI can deliver immediate insights, but its real power comes when it’s integrated into long-term strategic planning.” He suggests selecting a single area of volatile IT spending, such as cloud services or software licenses, and allowing AI to analyze usage patterns in order to offer smarter budget recommendations. “From there, you can gradually scale AI’s role, aligning its outputs with broader business goals.”

Risks and Benefits

AI’s biggest benefit is predictive accuracy. It’s not just about saving time — it’s about knowing where your IT investments will have the highest impact six months from now, or even a year down the road, Mains says. The biggest risk is treating AI as a silver bullet. “The human element is still critical,” he warns. “Without context and strategic insight, even the most advanced AI models can miss the mark.”

Hall notes that AI models are only as good as the data they’re fed, and poor-quality or incomplete data can easily result in inaccurate budget forecasts. “Implementing AI tools also requires an upfront investment in technology and talent, which can be a barrier for smaller organizations.”

Looking Forward

The hardest part of most AI-driven projects, including budgeting, is getting started, Higgins observes. “These tools are never going to be perfect at first, but they will get better, and the results will be tangible for every organization.”
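To make the forecasting and "what-if" scenario ideas above concrete, here is a deliberately tiny sketch: fit a linear trend to historical monthly spend in one volatile category (cloud services, say) and project it forward under a growth assumption. The numbers, the 15% scenario, and the single-trend model are illustrative assumptions; commercial budgeting platforms use far richer models and data.

```python
import statistics

def linear_forecast(monthly_spend: list[float], horizon: int = 3) -> list[float]:
    """Fit a simple least-squares trend to historical monthly spend and
    project it forward. A toy stand-in for predictive budgeting tools."""
    n = len(monthly_spend)
    xs = list(range(n))
    x_mean = statistics.fmean(xs)
    y_mean = statistics.fmean(monthly_spend)
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, monthly_spend))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + i) for i in range(horizon)]

# Hypothetical 12 months of cloud spend (in thousands of dollars).
history = [82, 85, 88, 90, 94, 97, 99, 103, 107, 110, 113, 118]
baseline = linear_forecast(history)
growth_scenario = [m * 1.15 for m in baseline]   # "what if usage grows 15%?"

print([round(v, 1) for v in baseline])
print([round(v, 1) for v in growth_scenario])
```

Even a sketch this simple shows the value of keeping the baseline data clean and well structured, which is exactly the precondition Higgins emphasizes above.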


Addressing the Skills Gap to Keep Up with the Evolution of the Cloud

Spurred by the rapid adoption of generative AI, cloud computing’s 20% year-over-year growth is driving its status as today’s default operating model. However, the workforce skills gap has many organizations struggling to leverage the cloud’s full potential. While security and cost controls are key challenges to cloud adoption, the skills gap continues to vex enterprises seeking to maximize their investments in cloud computing, as more than 75% of organizations have abandoned projects due to skills gaps.

Many companies hire new talent to address the cloud skills gap, which is only a temporary solution. To implement a sustainable transition to the cloud, leaders must adopt a long-term strategic approach to upskill existing employees with a comprehensive workforce development plan. Continuous learning programs can help companies close their cloud computing skills gap and evolve the workforce to stay ahead of technology. These programs should also include non-technical employees to ensure enterprise-wide cloud literacy.

Impact of AI and the Cloud on Security, Compliance, and Upskilling

AI’s rapid evolution and influence on the cloud are game changers for businesses’ innovation and management of the complex security and compliance landscapes that come with this shift. Addressing these challenges through upskilling is vital to ensuring companies can navigate the new era of AI and cloud computing confidently and securely.

Companies can use AI to automate routine tasks, improve customer experiences through chatbots and recommendations, and analyze large datasets to derive actionable insights. AI also helps cloud environments to be more adaptive and self-optimizing, enabling them to scale based on real-time demand and usage patterns. This integration of AI and the cloud enhances efficiency and innovation but also creates new challenges related to security, compliance, and the need for specialized skills.

AI can be a powerful tool to enhance cloud security through advanced threat detection and real-time risk analysis. However, using AI in cloud systems makes these environments more complex, creating more entry points for potential security threats. AI-driven systems that are not properly secured could become targets for malicious actors seeking to exploit vulnerabilities. For example, adversarial AI techniques in which data is manipulated to deceive AI models are an emerging threat to cloud security.

To mitigate these risks, businesses need cloud security professionals with expertise in both cloud infrastructure and AI-driven tools. These professionals must know how to use AI to strengthen security measures while also being vigilant about the unique security challenges that AI introduces. Through continuous learning and targeted upskilling programs, organizations can equip their workforce with the knowledge needed to navigate these challenges and unlock the full potential of AI and the cloud.

Upskilling Teams, Optimizing Cloud Usage, and Alignment

Across industries, the cloud is now table stakes, but its successful adoption requires more than just implementing a cloud infrastructure. It demands a holistic approach that optimizes cloud usage and aligns its strategies with business objectives. When done right, cloud computing allows teams to enhance agility and speed, drive innovation, and improve cross-team collaboration.

To operationalize cloud computing effectively, businesses must focus on leadership and organizational alignment, cloud governance and security, and continuous upskilling of employees.

Cloud adoption should be an integral part of the business’s overall strategy rather than an isolated IT initiative. Key considerations include creating a cloud-first mindset and culture across the organization, from leadership to front-line employees. By utilizing the cloud, organizations can pivot quickly based on market conditions and leverage data analytics and AI to make more informed, data-driven decisions.

Cloud computing is a highly specialized skill that requires a deep understanding of cloud platforms, security, DevOps practices, and data management. Training that includes AWS, Microsoft Azure, or Google Cloud certifications helps employees stay current on the latest cloud technologies and best practices. As cloud computing affects many aspects of a business, from IT and development teams to marketing and operations, cross-functional collaboration ensures that cloud capabilities are utilized as effectively as possible across the enterprise.

Fostering a Culture of Continuous Learning

As the cloud continues to evolve, the need for workforces with the skills to use it will intensify. To remain competitive, organizations must foster a culture in which employees are empowered to update their skills through a mix of formal training, hands-on experience, and knowledge sharing.

Organizations that fail to address the skills gap risk falling behind in the race to leverage cloud technologies effectively. By investing in cloud training programs, certifications, and continuous learning, businesses can ensure they have the talent to innovate, scale, and secure their operations in the cloud.


The Biggest Cybersecurity Issues Heading into 2025

Cybersecurity leaders always have a lot on their minds. What are the latest threats to their enterprises? What emerging technologies can bolster their defenses? How can they secure the necessary talent and the budget? What’s on the regulatory horizon?

As 2025 begins, InformationWeek spoke to four leaders in the cybersecurity space about some of the biggest issues on their minds.

AI-Fueled Threats and Defense

AI was on everyone’s lips in 2024, and there is every reason to expect that this technology boom will continue to be top of mind in 2025.

AI makes threat actors more prolific and sophisticated. They can use it to automate large-scale attacks. They can make phishing lures more convincing. Deepfake audio and video continue to improve, making them harder to spot. In 2024, scammers effectively manipulated a finance worker into paying them $25 million, thanks to a deepfake video conference.

The same powerful capabilities of AI are, of course, being applied on the defensive side. AI-driven automation, for example, speeds threat detection and frees up analysts’ time for more complex work.

But AI has myriad use cases. In addition to cybersecurity threats and defensive tools, this technology is being applied up and down the technology stack. Cybersecurity leaders must think about the security implications of AI throughout their enterprises.

“We are seeing a lot of projects moving [forward] and it sort of feels like security is … being asked to follow behind the business and reduce the risk after the fact,” says Patrick Sullivan, CTO, security strategy at Akamai Technologies, a cloud computing and security company.

Insider Threats

In 2024, KnowBe4 hired a North Korean hacker to fill an open IT position. The cybersecurity company recognized the insider threat early on, before the person was even onboarded. But this is not an isolated kind of threat.

Aggressor nation states will continue to use this kind of approach to infiltrate US companies and critical infrastructure providers, whether to steal intellectual property and data or to cause disruption to essential services.

“We’re really seeing a need now for advanced controls in that talent acquisition process and in our ongoing insider threat monitoring programs to be able to mitigate against these new kinds of attacks that are out there,” Sharon Chand, principal of cyber risk services at consulting firm Deloitte, asserts.

Escalating Geopolitical Tensions

The escalating geopolitical tensions across the world play out, in part, in the cybersecurity space. Nation state-backed threat actors and hacktivists target organizations in the US and across the world in the service of political goals.

The UK rang alarm bells regarding Russia’s ability to conduct cyber-warfare on British businesses, BBC reports. US Cyber Command warns of China’s ability to disrupt US critical infrastructure in the event that conflict erupts between the two countries, according to Reuters.

Disruptive Cyberattacks

This year is set to be a record for ransomware payments, and blockchain data platform Chainalysis points out that “big game hunting” is a big driver.

Sam Rubin, senior vice president of Unit 42 consulting and threat intelligence at cybersecurity company Palo Alto Networks, tells InformationWeek that attacks that cause crippling business disruption are on the rise.

“These disruptive attacks especially for large organizations that have a big role in the economy or in their market are becoming the target and a way for the threat actors to get very large multimillion-dollar pay days,” he explains.

Zero Day Vulnerabilities

In November, the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and a number of their partners released a list of the top routinely exploited vulnerabilities in 2023. Of the 15 top common vulnerabilities and exposures (CVEs), 11 were zero days.

“Some of that is nation state actors. Some of that is ransomware operators. So, all adversary classes seem to be pivoting more toward zero days,” says Sullivan.

Third-Party Risks

In the summer of this past year, business at thousands of car dealerships was upended following two cyberattacks on a single software provider: CDK Global. The health care industry experienced a major disruption when Change Healthcare, a payment and claims provider, was hit with ransomware. The potential of another cyberattack with a massive ripple effect looms large in 2025.

“There’s just so much dependency on third parties among lots and lots of companies and different industries. And, I think there will be a large-scale attack on a company that impacts not only that company but those [that] depend on it,” says Ann Irvine, chief data and analytics officer at Resilience, a cybersecurity risk management company.

As enterprises incorporate more third parties into their supply chains, more web apps and APIs are exposed, Sullivan points out. “[Businesses need] to understand where those vulnerabilities emerge, prioritize them, and then have an efficient patching process to remediate,” he urges.

The Need for Integrated Security Platforms

The market for security platforms and tools is massive. If you can think of a security challenge, there are probably a host of vendors clamoring to serve up a solution. But there is a movement to consolidate those solutions.

“We’re seeing continued creativity of the bad actors coming into multiple different types of attack vectors, and historically, some of our defenses have been quite siloed in their ability to prevent [and] mitigate those kinds of attacks,” says Chand. “We’re seeing the need for enterprise clients to really think about integrated security platforms.”

Networking company Extreme Networks surveyed 200 CIOs and IT decision-makers, and 88% reported a desire for a single integrated platform that includes AI, networking, and security.

Upskilling the Cyber Workforce

The cybersecurity talent shortage is an ongoing concern. Consulting firm Gartner predicts that more than half of cyber incidents will stem from a lack of talent and human failure by 2025.

In addition to filling roles, enterprises are also tasked with the prospect of upskilling their current cybersecurity talent. As threats evolve, in
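Sullivan's advice above about prioritizing vulnerabilities in exposed third-party web apps and APIs can be pictured as a simple scoring pass over findings. The fields, weights, and placeholder IDs below are illustrative assumptions, not a standard scoring model; real programs would also fold in asset criticality, threat intelligence, and business context.

```python
def priority_score(finding: dict) -> float:
    """Rank a vulnerability finding by base severity, exposure, and
    known exploitation. The weighting is an illustrative assumption."""
    score = finding["cvss"]                      # base severity, 0-10
    if finding.get("internet_exposed"):
        score *= 1.5                             # exposed web apps/APIs first
    if finding.get("known_exploited"):
        score *= 2.0                             # actively exploited issues jump the queue
    return score

findings = [
    {"id": "VULN-A", "cvss": 9.8, "internet_exposed": True, "known_exploited": True},
    {"id": "VULN-B", "cvss": 7.5, "internet_exposed": False, "known_exploited": False},
    {"id": "VULN-C", "cvss": 6.1, "internet_exposed": True, "known_exploited": False},
]
for finding in sorted(findings, key=priority_score, reverse=True):
    print(finding["id"], round(priority_score(finding), 1))
```

The output of a pass like this would feed the "efficient patching process" Sullivan describes: the ranked list tells remediation teams where to start, not whether to patch at all.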
