Information Week

Bridging a Culture Gap: A CISO’s Role in the Zero-Trust Era

Adopting zero-trust security architectures is increasingly a corporate imperative, with zero trust serving as the recommended approach for building resilience against evolving enterprise threats. This shift represents more than implementing the latest best-of-breed tools. It is a foundational move away from perimeter-based security controls and external network defenses that were never designed for today's threat landscape.

More than 80% of data breaches today are attributed to human error or negligence, making human risk a pressing security concern amid the rise of hybrid work environments. A zero-trust architecture limits the damage a compromised user can cause by segmenting the organization's security environment into smaller, isolated zones that restrict the ability to access sensitive data across the entire ecosystem. Unfortunately, the path to effective implementation has proven challenging. Forrester research found that more than 63% of enterprises are struggling to implement zero-trust frameworks, and Gartner predicts that by 2026 only 10% of large enterprises will have a mature and measurable zero-trust program in place.

This thrusts the transformational CISO to the forefront. CISO success today requires more than being a pure technologist from the SOC. CISOs must serve as transformational leaders capable of navigating shifting organizational priorities to foster collective buy-in among executive leaders, establish effective processes with business line stakeholders, and develop versatile security teams. Cultivating this company-wide alignment is critical to clearing the roadblocks that hinder zero-trust adoption today.
Articulating Zero Trust's Value

Nearly 50% of IT professionals describe collaboration between security risk management and business risk management as poor or nonexistent, according to NIST research. As CISOs, it is our job to bridge this divide by framing zero trust as an enabler of business agility, operational efficiency, and competitive advantage rather than focusing on technical specifications. Scenario-based planning and risk quantification techniques can articulate the value of zero trust in terms that resonate with stakeholders by correlating the ramifications of cyber incidents with the high-value outcomes that matter to their departments. Marketing leaders, for example, may better appreciate zero trust once they understand how it prevents the customer data breaches that damage brand reputation.

CISOs should establish regular touchpoints with business unit leaders to understand their workflows, pain points, and growth initiatives. This collaborative approach helps identify opportunities where zero trust can enhance business processes rather than hinder them. By securing visible support from the C-suite, CISOs can overcome initial resistance and ensure the resources needed for successful implementation are allocated. It also strengthens organizational buy-in across all employees, giving the company a platform to address concerns, share implementation progress, and maintain alignment with business objectives.

Minimizing Organizational Friction

Successful zero-trust adoption requires a carefully orchestrated change management strategy. Rather than pursuing only the lowest-risk areas, organizations often achieve better results by starting with mid-risk priorities and moving methodically toward more complex challenges. This approach prevents implementation paralysis and drives meaningful security advancement.

Clear communication at every stage is essential.
Regular updates, user awareness training, and open feedback channels help maintain transparency and address concerns proactively. When employees realize that zero trust can streamline their access to resources while maintaining security, resistance typically diminishes. The key lies in balancing security requirements with user experience. Modern implementations should leverage automation and contextual access controls to make security seamless. Implementing single sign-on alongside zero-trust principles can enhance both security and convenience, making the transition more palatable for end users.

In addition, a comprehensive change impact assessment helps identify potential friction points before they emerge. This involves mapping current workflows, understanding dependencies, and creating mitigation strategies. Regular user satisfaction surveys and feedback sessions enable continuous refinement of the implementation approach, ensuring that security measures align with operational needs while maintaining robust protection.

Positioning Practitioners for Success

The technical complexity of zero-trust architectures demands a targeted focus on skill development among security practitioners. Because practitioners often wear multiple hats across architecture, implementation, operations, and monitoring, they must be all-around defenders capable of seamlessly transitioning between functional roles. This requires strong foundational knowledge spanning both on-premises and cloud security domains. Security teams must understand the organization's end-to-end security environment, from network tools to cloud applications, endpoints, and data storage systems.

Investment in targeted learning is crucial here.
Prioritize formal training and upskilling programs that build team-wide competencies, and implement cross-training initiatives that facilitate knowledge sharing to reduce key-person dependencies and build operational resilience. Establishing a dedicated zero-trust center of excellence can accelerate this skill development by providing guidance and support to other security team members while maintaining documentation and best practices.

The path to zero trust is a continuous journey of organizational transformation. While technical implementation remains crucial, the transformational CISO's ability to bridge cultural gaps, foster organizational alignment, and develop comprehensive team capabilities will determine the success of zero-trust initiatives. As cyber threats continue to evolve and regulatory pressures mount, organizations that successfully execute this cultural and technical transformation will be better positioned to protect their critical assets and maintain business continuity in an increasingly complex threat landscape.
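The contextual access controls described above can be illustrated with a minimal sketch. The signals (device posture, MFA status, a numeric risk score) and the thresholds are illustrative assumptions, not details from the article; a production policy engine would evaluate far richer context:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool  # endpoint passes posture checks (patched, encrypted)
    mfa_verified: bool      # user completed multi-factor authentication
    risk_score: int         # 0 (low) to 100 (high), e.g. from sign-in anomaly signals

def decide(request: AccessRequest, sensitivity: str) -> str:
    """Return 'allow', 'step_up' (re-authenticate), or 'deny'.

    Zero-trust framing: every request is evaluated on current context,
    never on network location alone.
    """
    if not request.device_compliant:
        return "deny"                      # non-compliant devices never get in
    if request.risk_score >= 70:
        return "deny"                      # high-risk sessions are cut off
    if sensitivity == "high" and not request.mfa_verified:
        return "step_up"                   # sensitive zones demand MFA
    if sensitivity == "high" and request.risk_score >= 40:
        return "step_up"                   # elevated risk triggers re-verification
    return "allow"

print(decide(AccessRequest("ana", True, True, 10), "high"))   # allow
print(decide(AccessRequest("bob", True, False, 10), "high"))  # step_up
print(decide(AccessRequest("eve", False, True, 5), "low"))    # deny
```

The deny-by-default shape is the point: access to each segmented zone is decided per request, which is what limits the blast radius of a compromised user.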


AI Risk Management: Is There an Easy Way?

When ChatGPT launched commercially in 2022, governments, industry sectors, regulators, and consumer advocacy groups began discussing the need to regulate AI as well as use it, and new regulatory requirements for AI are likely to emerge in the coming months.

The quandary for CIOs is that no one really knows what these new requirements will be. However, two things are clear: It makes sense to do your own thinking about what your company's internal guardrails for AI should be, and there is too much at stake for organizations to ignore AI risk.

The annals of AI deployments are rife with examples of AI gone wrong, resulting in damage to corporate images and revenues. No CIO wants to be on the receiving end of such a gaffe.

That's why PwC says, "Businesses should also ask specific questions about what data will be used to design a particular piece of technology, what data the tech will consume, how it will be maintained and what impact this technology will have on others … It is important to consider not just the users, but also anyone else who could potentially be impacted by the technology. Can we determine how individuals, communities and environments might be negatively affected? What metrics can be tracked?"

Identify a 'Short List' of AI Risks

As AI grows and individuals and organizations of all stripes begin using it, new risks will develop. For now, these are the AI risks companies should consider as they embark on AI development and deployment:

Un-vetted data. Companies are unlikely to obtain all of the data for their AI projects from internal sources; they will need to source data from third parties.

A molecular design research team in Europe used AI to scan and digest all of the worldwide information available on a molecule from sources such as research papers, articles, and experiments.
A healthcare institution wanted to use an AI system for cancer diagnosis, so it went out to procure data on a wide range of patients from many different countries.

In both cases, the data needed to be vetted.

In the first case, the research team narrowed the lens of the data it chose to admit into its molecular data repository, opting to use only information that directly referred to the molecule under study. In the second case, the healthcare institution made sure that any data it procured from third parties was properly anonymized so that the privacy of individual patients was protected.

By properly vetting the internal and external data that AI would be using, both organizations significantly reduced the risk of admitting bad data into their AI data repositories.

Imperfect algorithms. Humans are imperfect, and so are the products they produce. Amazon's faulty AI-powered recruitment tool, which output results favoring male candidates over female ones, is an oft-cited example, but it's not the only one.

Imperfect algorithms pose risks because they tend to produce imperfect results that can lead businesses down the wrong strategic paths. That's why it is imperative to have a diverse AI team working on algorithm and query development. That diversity should span business areas (along with IT and data scientists) working on the algorithmic premises that will drive the data, and it should equally span the demographics of age, gender, and ethnic background. The more fully a range of diverse perspectives is incorporated into algorithmic development and data collection, the lower the organization's risk, because fewer stones are left unturned.

Poor user and business process training. AI system users, as well as AI data and algorithms, should be vetted during AI development and deployment.
For example, a radiologist or a cancer specialist might have the chops to use an AI system designed specifically for cancer diagnosis, but a podiatrist might not.

Equally important is ensuring that users of a new AI system understand where and how the system fits into their daily business processes. For instance, a loan underwriter in a bank might take a loan application, interview the applicant, and make an initial determination as to the kind of loan the applicant could qualify for, but the next step might be to run the application through an AI-powered loan decisioning system to see if the system agrees. If there is disagreement, the next step might be to take the application to the lending manager for review.

The keys here, from both the AI development and deployment perspectives, are that the AI system must be easy to use and that users know how and when to use it.

Accuracy over time. AI systems are initially developed and tested until they reach a degree of accuracy that meets or exceeds that of subject matter experts (SMEs). The gold standard for AI system accuracy is a system that is 95% accurate when compared against the conclusions of SMEs. Over time, however, business conditions can change, or the machine learning the system does on its own might begin to produce results that are less accurate relative to what is transpiring in the real world. Inaccuracy creates risk.

The solution is to establish a metric for accuracy (e.g., 95%) and to measure it on a regular basis. As soon as AI results begin losing accuracy, data and algorithms should be reviewed, tuned, and tested until accuracy is restored.

Intellectual property risk. Earlier, we discussed how AI users should be vetted for their skill levels and job needs before using an AI system.
An additional level of vetting should be applied to those individuals who use the company’s AI to develop proprietary intellectual property for the company.   If you are an aerospace company, you don’t want your
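The accuracy-over-time risk above lends itself to a simple monitoring loop. The sketch below is hypothetical: the 95% threshold is the article's gold standard, but the function names and sample data are invented for illustration:

```python
def accuracy(model_outputs, sme_labels):
    """Fraction of AI conclusions that match subject-matter-expert conclusions."""
    matches = sum(1 for m, s in zip(model_outputs, sme_labels) if m == s)
    return matches / len(sme_labels)

THRESHOLD = 0.95  # the article's gold standard vs. SME conclusions

def review_needed(model_outputs, sme_labels, threshold=THRESHOLD):
    """Flag the system for data/algorithm review and re-tuning whenever
    measured accuracy drops below the agreed metric."""
    return accuracy(model_outputs, sme_labels) < threshold

# Periodic check: a sample of recent decisions scored against expert review.
recent = ["approve", "deny", "approve", "approve", "deny"]
expert = ["approve", "deny", "deny", "approve", "deny"]
print(accuracy(recent, expert))       # 0.8
print(review_needed(recent, expert))  # True -> review, tune, and retest
```

Running this check on a regular cadence is what turns "accuracy over time" from a latent risk into a measurable metric.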


Microsoft Rings in 2025 With Record Security Update

Microsoft's January update contains patches for a record 159 vulnerabilities, including eight zero-day bugs, three of which attackers are already actively exploiting. The update is Microsoft's largest ever and is also notable for including three bugs that the company said were discovered by an artificial intelligence (AI) platform.

Microsoft assessed 10 of the vulnerabilities disclosed this week as critical severity and the remainder as important to fix. As always, the patches address vulnerabilities in a wide range of Microsoft technologies, including the Windows OS, Microsoft Office, .NET, Azure, Kerberos, and Windows Hyper-V. They include more than 20 remote code execution (RCE) vulnerabilities, nearly as many elevation-of-privilege bugs, and an assortment of denial-of-service flaws, security bypass issues, and spoofing and information disclosure vulnerabilities.

Three Vulnerabilities to Patch Immediately

Multiple security researchers pointed to the three actively exploited bugs in this month's update as the vulnerabilities needing immediate attention. The vulnerabilities, identified as CVE-2025-21335, CVE-2025-21333, and CVE-2025-21334, are all privilege escalation issues in a component of Windows Hyper-V's NT kernel. Attackers can exploit the bugs relatively easily and with minimal permissions to gain system-level privileges on affected systems. Microsoft has assigned each of the three bugs a relatively moderate severity score of 7.8 out of 10 on the CVSS scale, but the fact that attackers are already exploiting them means organizations cannot afford to delay patching. "Don't be fooled by their relatively low CVSS scores of 7.8," said Kev Breen, senior director of threat research at Immersive Labs, in emailed comments.
"Hyper-V is heavily embedded in modern Windows 11 operating systems and used for a range of security tasks." Microsoft has not released any details on how attackers are exploiting the vulnerabilities, but threat actors are likely using them to escalate privileges after gaining initial access to a target environment, according to researchers. "Without proper safeguards, such vulnerabilities escalate to full guest-to-host takeovers, posing significant security risks across your virtual environment," researchers at Automox wrote in a blog post this week.

Five Publicly Disclosed but Not Yet Exploited Zero-Days

The remaining five zero-days that Microsoft patched in its January update are bugs that have been previously disclosed but that attackers have not yet exploited. Three of them enable remote code execution and affect Microsoft Access: CVE-2025-21186 (CVSS: 7.8/10), CVE-2025-21366 (CVSS: 7.8/10), and CVE-2025-21395. Microsoft credited the AI-based vulnerability hunting platform Unpatched.ai with finding the bugs. "Automated vulnerability detection using AI has garnered a lot of attention recently, so it's noteworthy to see this service being credited with finding bugs in Microsoft products," Satnam Narang, senior staff research engineer at Tenable, wrote in emailed comments. "It may be the first of many in 2025." The other two publicly disclosed but as yet unexploited zero-days in the January update are CVE-2025-21275 (CVSS: 7.8/10) in the Windows App Package Installer and CVE-2025-21308 in Windows Themes. Both enable privilege escalation to SYSTEM and are therefore high-priority fixes as well.

Other Critical Vulns

In addition to the zero-days, several other vulnerabilities in the latest batch merit high-priority attention.
Near the top of the list are three CVEs to which Microsoft has assigned near-maximum CVSS scores of 9.8 out of 10: CVE-2025-21311 in Windows NTLMv1 on multiple Windows versions; CVE-2025-21307, an unauthenticated RCE flaw in the Windows Reliable Multicast Transport Driver; and CVE-2025-21298, an arbitrary code execution vulnerability in Windows OLE.

According to Ben Hopkins, cybersecurity engineer at Immersive Labs, Microsoft likely rated CVE-2025-21311 as critical because of the potentially severe risk it presents. "What makes this vulnerability so impactful is the fact that it is remotely exploitable, so attackers can reach the compromised machine(s) over the Internet," he wrote in emailed comments. "The attacker does not need significant knowledge or skills to achieve repeatable success with the same payload across any vulnerable component."

CVE-2025-21307, meanwhile, is a use-after-free memory corruption bug that affects organizations using the Pragmatic General Multicast (PGM) transport protocol. In such an environment, an unauthenticated attacker only needs to send a malicious packet to the server to trigger the vulnerability, Ben McCarthy, lead cybersecurity engineer at Immersive Labs, wrote in emailed comments. Attackers who successfully exploit the vulnerability can gain kernel-level access to affected systems, meaning organizations using the protocol need to apply Microsoft's patch immediately, McCarthy added.

Tyler Reguly, associate director of security R&D at Fortra, described CVE-2025-21298, the third 9.8-severity bug, as an RCE flaw that an attacker would likely exploit via email rather than over the network. "The Microsoft Outlook preview pane is a valid attack vector, which lends itself to calling this a remote attack. Consider reading all emails in plaintext to avoid vulnerabilities like this one," he noted in emailed comments.
Microsoft's January 2025 update stands in stark contrast to January 2024's, when the company disclosed just 49 CVEs. According to data from Automox, the company issued patches for 150 CVEs in April 2024 and 142 in July 2024.
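The triage logic the researchers describe, where actively exploited bugs outrank raw CVSS scores and publicly disclosed zero-days come next, can be sketched as a simple sort. The CVE details below are taken from the article; the prioritization rule itself is an illustrative assumption, not official Microsoft guidance:

```python
# Each entry: (CVE, CVSS score, actively exploited?, publicly disclosed?)
# Details as quoted in the article.
vulns = [
    ("CVE-2025-21335", 7.8, True,  False),  # Hyper-V NT kernel, exploited in the wild
    ("CVE-2025-21333", 7.8, True,  False),
    ("CVE-2025-21334", 7.8, True,  False),
    ("CVE-2025-21311", 9.8, False, False),  # Windows NTLMv1
    ("CVE-2025-21307", 9.8, False, False),  # Reliable Multicast Transport Driver
    ("CVE-2025-21186", 7.8, False, True),   # Microsoft Access RCE, disclosed zero-day
]

def triage_key(v):
    cve, cvss, exploited, disclosed = v
    # Sort order: exploited-in-the-wild first, then public zero-days,
    # then everything else by descending CVSS.
    return (not exploited, not disclosed, -cvss)

for cve, cvss, exploited, _ in sorted(vulns, key=triage_key):
    flag = "EXPLOITED" if exploited else f"CVSS {cvss}"
    print(f"{cve}: {flag}")
```

The tuple key encodes the point Breen makes above: a 7.8 bug under active exploitation patches before an unexploited 9.8.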


3 Strategies For a Seamless EU NIS2 Implementation

Businesses everywhere face pressure to enhance their security postures as cyberattacks rise across sectors. Even so, many organizations have been hesitant to invest in cybersecurity for reasons ranging from budget constraints to operational issues. The EU's new Network and Information Security Directive (NIS2) confronts this hesitancy head on by making it mandatory for companies in Europe, and those doing business with Europe, to invest in cybersecurity and prioritize it regardless of budgets and team structures.

What Is NIS2?

The first NIS Directive, implemented in 2016, was the EU's endeavor to unify cybersecurity strategies across member states. In 2023, the commission introduced the NIS2 Directive, a set of revisions to the original NIS. Each member state was required to implement the NIS2 recommendations in its own national legal system by October 17, 2024.

The original NIS focused on improving cybersecurity in several sectors, such as banking and finance, energy, and healthcare. NIS2 expands that scope to other entities, including digital services such as domain name system (DNS) service providers, top-level domain (TLD) name registries, social networking platforms, and data centers; manufacturers of critical products such as pharmaceuticals, medical devices, and chemicals; postal and courier services; and wastewater and waste management.

Organizations in these industries are now required to implement more robust cyber risk management practices such as incident reporting, risk analysis and auditing, resilience and business continuity planning, and supply chain security. For example, member states must ensure TLD name registries and domain registration services collect accurate and complete registration data in a dedicated database.
The new regulations also strengthen supervision and enforcement mechanisms, requiring national authorities to monitor compliance, investigate incidents, and impose penalties for non-compliance.

The goal of these measures is to ensure the stability of society's infrastructure in the face of cyber threats. Entities in the EU will benefit from adopting them over the long run by better preventing a devastating cyberattack. In doing so, they will also avoid the NIS2 penalties, which are significantly more punitive and clearly defined than those created under the original directive.

Impact on Organizations

Much as the European Union's General Data Protection Regulation (GDPR) reset the standard for privacy globally, NIS2 sets clear requirements for businesses to establish stronger security defenses, but not without a cost. Failing to comply can lead to severe financial penalties and legal implications.

The official launch of NIS2 in October was met with mixed reactions. While some organizations could testify that they had been preparing all along, many others had left NIS2 on the back burner. In addition, because of the new sectors covered by NIS2, some businesses did not initially believe they would be impacted and therefore had not laid their own groundwork.

All this said, it will be interesting to see how penalty enforcement plays out in 2025. If organizations don't demonstrate compliance early in the new year, or at least show progress toward compliance, I predict we will start to see consequences, though it may be too soon to tell which sectors will face them first.

To those still grappling with NIS2 implementation, it may understandably seem like a daunting task, but it does not have to be. Here are three actions organizations can take today to ensure a more seamless NIS2 implementation:

1. Evaluate your business partners.
NIS2 is not just about strengthening one business's security; it also demands that businesses thoroughly evaluate every entity they engage with in their supply chain. A chain is only as strong as its weakest link, and the same can be said of businesses and their partners' security postures. It is essential for organizations to audit their partners to ensure every entity they do business with meets NIS2 requirements. Evaluating any security gaps now can help avoid overlooked issues down the road.

2. Consolidate your domains.

We have heard anecdotally that some businesses are not fully aware of their domain registrars or of who is responsible for managing and securing the domains within their organization. This lapse in knowledge creates more than siloed work environments; it can cause major repercussions for secure domain management and NIS2 compliance. Taking a more consistent, consolidated approach to managing and securing domains strengthens an organization's overall domain security and checks one more task off the team's compliance checklist.

3. Stay security-minded, organization-wide.

Under the new NIS2 requirements, businesses must report cybersecurity incidents within 24 hours. This demands an organization-wide culture shift toward a more security-minded approach to the way they do business. For example, businesses may need to evaluate what cybersecurity protocols they have in place to secure the way they interact with their customers and their supply chain. Without security being top of mind, businesses may miss NIS2 requirements, which could lead to revenue loss, loss of customers, and even dents in their reputation. This shift doesn't happen overnight, but working with security-minded partners helps organizations stay a step ahead.
As cybercriminals become more elusive in targeting reputable organizations, and as global geopolitical tensions leave many companies in the crossfire of nation-state attacks, adhering to NIS2 standards becomes all the more critical. These three strategies are guiding principles for organizations to contribute to a safer, more secure enterprise environment in Europe and around the world.


How CISOs Can Build a Disaster Recovery Skillset

You hear this mantra in cybersecurity over and over again: It's not if, it's when. With data breaches, ransomware attacks, and all manner of incidents abounding, it seems like disaster lurks around every corner. The prevalence of these incidents has shifted the CISO's emphasis from prevention to resilience. Yes, even the most prepared enterprises can still get hit. What matters is how they bounce back.

Today's CISO role has disaster recovery baked into the job description. How can CISOs cultivate that skillset and use it to guide their organizations through the fallout of a major cybersecurity incident?

Defining Critical Disaster Recovery Skills

Disaster recovery has become an essential part of the CISO role. "In cybersecurity, we live in the world of incidents, whether it's someone clicking on a phish or someone plugging in a USB drive, or someone who's conducted fraud against your company," Ross Young, CISO in residence at venture capital fund Team8, tells InformationWeek.

Incident response and disaster recovery go hand in hand. "Some of the best CISOs are some of the best understanders of disaster recovery efforts and apply those in their own security response plans," says Matt Hillary, CISO at compliance automation platform Drata.

Effective disaster recovery requires both technical skills and human skills.

On the technical side, CISOs must understand how each part of the technology stack is used in their organizations and how that technology impacts the CIA triad: confidentiality, integrity, and availability.

"A lot of that technical work is going to be driven down to the engineering level. Ideally, the CISO will have done the right work to bring in the right talent and drive the technical remediation," says Marshall Erwin, CISO at Fastly, a cloud computing services company.
CISOs also need to be able to put themselves in the mindset of attackers to understand their goals and what they could be doing once inside the network. "You can say, 'Team, here's where we need to be looking, here's where we need to point our lens and our forensic skills to identify what an attacker did to be able to make sure that we kicked them out and have cleaned up our internal network,'" says Erwin.

But human skills are equally important. CISOs need to be able to communicate effectively across multiple teams and with C-suite peers to lead an effective response.

"What you feel you need to do from a security investigative perspective might be the opposite from [what] business resilience … folks want to take," says Mandy Andress, CISO at Elastic, an AI search company. "How do you navigate, communicate, and find the … compromises."

A lot of that work is best done in advance of an actual incident. CISOs can add their voice to disaster recovery plans to ensure the security perspective is in place before an attacker gets inside.

In the heat of a cybersecurity disaster, CISOs also have a responsibility to their team. They need skills to get them through the incident response process.

"It seems like every incident I've ever seen, it always happens on a Saturday when everybody's at their kid's baseball game or something else. It's the most inconvenient time possible. How do you keep the positive morale?" says Young.

Remaining calm and decisive in the midst of a stressful situation that can last days, weeks, or even months is necessary and not without its challenges. "I think there is a lot of bravado sometimes in … the security community," says Hillary. "I don't know if it's a mask or if it's something else that leads us to not being as human as we need to be.
And so just to continue to be humble, teachable, and learn throughout that incident."

Cultivating Disaster Recovery Skills

While people may take different career paths to the CISO role, they have most likely worked through cybersecurity incidents along the way.

"Incidents are frequent enough that you're going to have that experience at some point in your career and develop that expertise organically," says Erwin.

While trial by fire is an excellent teacher, there are other ways CISOs can shore up their disaster response and recovery toolboxes. Industry conferences, for example, can offer valuable training.

"When I was the CISO of Caterpillar Financial, I went to FS-ISAC [Financial Services-Information Sharing and Analysis Center], and they had a CISO conference where they did tabletop exercises simulating an insider threat," Young shares.

CISOs can lead their own tabletop exercises at their enterprises to better understand the holes in their incident response plans and the areas where they need to strengthen their own skills.

Other leaders within an organization can be valuable resources for CISOs looking to cultivate these skills. "One of my closest peers that I usually … go to is someone who's over on the infrastructure team," says Hillary. "Any kind of disaster impact or availability incident that they experience on their end, they have a plan for; they have a really good, well-exercised muscle within the organization to recover."

CISOs can also look outside their organizations for ways to sharpen their skills. Hillary shares that he always looks at other breaches and outages. "I usually ask myself two questions. How do I know that this same vector isn't being used against my company right now? How do I know this same incident that this other company is experiencing can't happen to us?" he says.
"So, it helps drive a lot of preventative measures."

Navigating Disaster

In a world of third-party risk, human error, and motivated threat actors, even the best-prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work?

"It is an opportunity for the CISO to step in and lead," says Erwin. "That's the most critical thing a CISO is going to do in those incidents, and if


What Does Biden's New Executive Order Mean for Cybersecurity?

On Jan. 16, just days before leaving office, President Biden issued an executive order on improving the nation's cybersecurity. The extensive order comes on the heels of the breaches of the US Treasury and US telecommunications providers perpetrated by Chinese state-sponsored threat actors.

"Adversarial countries and criminals continue to conduct cyber campaigns targeting the United States and Americans, with the People's Republic of China presenting the most active and persistent cyber threat to United States Government, private sector, and critical infrastructure networks," the order states.

This new executive order, building on the one Biden issued in 2021, is extensive. It addresses issues ranging from third-party supply chain risks and AI to cybersecurity in space and the risks of quantum computers.

Could this executive order shape the federal government's approach to cybersecurity? And how uncertain is its impact under the incoming Trump administration?

The Executive Order

The executive order outlines a broad set of initiatives to address nation-state threats, improve defense of the nation's digital infrastructure, drive accountability for software and cloud providers, and promote innovation in cybersecurity. Like the 2021 executive order, the newly released order emphasizes the importance of collaboration with the private sector.

"Since it's an executive order, it's mainly aimed at the federal government. It doesn't directly regulate the private sector," Jim Dempsey, managing director of the Cybersecurity Law Center at the nonprofit International Association of Privacy Professionals (IAPP), tells InformationWeek.
“It indirectly aims to impact private sector cybersecurity by using the government’s procurement power.”  For example, the order directs software vendors working with the federal government to submit machine-readable secure software development attestations through the Cybersecurity and Infrastructure Security Agency (CISA) Repository for Software Attestation and Artifacts (RSAA).  “If CISA finds that attestations are incomplete or artifacts are insufficient for validating the attestations, the Director of CISA shall notify the software provider and the contracting agency,” according to the order.  The order also calls for the development of guidelines relating to the secure management of cloud service providers’ access tokens and cryptographic keys. In 2023, a China-backed threat actor stole a cryptographic key, which led to the breach of several government agencies’ Outlook email systems, Wired reports. A stolen key was also behind the compromise of BeyondTrust that led to the recent US Treasury breach.  AI, unsurprisingly, doesn’t go untouched by the order, which establishes a program for leveraging AI models for cyber defense.  The Biden administration also uses the executive order to call attention to cybersecurity threats that may loom larger in the future, pointing to the risks posed by quantum computers and space system cybersecurity concerns.  Biden’s Cyber Legacy  The Biden administration made cybersecurity a priority. In addition to the 2021 executive order on cybersecurity, the administration released a National Cybersecurity Strategy and an implementation plan in 2023.  The administration also took sector-specific actions to bolster cybersecurity. For example, Biden issued an executive order focused on maritime cybersecurity.
Kevin Orr, president of RSA Federal at RSA Security, a network security company, saw a positive response to the Biden administration’s efforts to improve cybersecurity within the government.  “I was surprised at how many agencies … have leaned in the last 18 months, especially within the intelligence community, have really adopted basic identity proofing, coming forward with multifactor authentication, and really strengthening their defenses,” Orr shares.  While the Biden administration has worked to further cybersecurity, there are questions about adoption of new policies and best practices. Some stakeholders call for more regulatory enforcement.  “Much like any regulation, people are only going to follow it if there’s some type of regulatory teeth to it,” Joe Nicastro, field CTO at software security firm Legit Security, argues.  Others argue that incentives are more likely to drive adoption of cybersecurity measures.  Cybersecurity is an ongoing national security concern, and the Biden administration is soon passing the torch.  “I think this administration can leave extremely, extremely proud,” says Dempsey. “Certainly, they are handing over the nation’s cybersecurity to the incoming Trump administration in far better shape than it was four years ago.”  A New Administration  While the order could mean big changes in the federal government’s approach to cybersecurity, the timing makes its ultimate impact uncertain. Many of its directives for federal agencies have a long runway, months or years, for compliance. Will the Trump administration enforce the executive order?  Cybersecurity has largely been painted as a bipartisan issue, and there has been some continuity between the first Trump administration and the Biden administration when it comes to cyber policies.
For example, the Justice Department recently issued a final rule on Biden’s Executive Order 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.” That order charges the Justice Department with establishing a regulatory program to prevent the sale of Americans’ sensitive data to China, Russia, Iran, and other foreign adversaries. That order and the subsequent rule stem from an executive order signed by Trump in 2019.  Biden’s 2025 cybersecurity executive order puts a spotlight on cyber threats from China, and President-elect Trump has been vocal about his intention to crack down on those threats. But that does not preclude changes to or dismissal of provisions in Biden’s final cybersecurity executive order.  “There may be some things that the incoming administration will ignore or deprioritize. I’d be a little surprised if they repealed the order,” says Dempsey.  CISA was a major player in the Biden administration’s approach to cybersecurity, and it will continue to play a big role if this new executive order rolls out as outlined. But the federal agency has been criticized by several Republican lawmakers. Some have called to limit its power or even shut it down, AP News reports.  The incoming Trump administration is also expected to take a more hands-off approach to regulation in many areas. Critical infrastructure is consistently at the

What Does Biden's New Executive Order Mean for Cybersecurity? Read More »

What Security Leaders Get Wrong About Zero-Trust Architecture

Zero-trust architecture has emerged as the leading security method for organizations of all types and sizes. Zero trust shifts cyber defenses away from static, network-based perimeters to focus directly on protecting users, assets, and resources. Network segmentation and strong authentication methods give zero-trust adopters strong Layer 7 threat prevention. That’s why a growing number of enterprises are embracing the approach. Unfortunately, many security leaders continue to deploy zero trust incorrectly, weakening its power and opening the door to all types of bad actors.  To prevent the mistakes that many organizations make when planning a transition to zero-trust security, here’s a look at six common misconceptions you need to avoid.  Mistake One: A single security vendor can supply everything  One vendor can’t provide everything your organization needs to implement a zero-trust architecture strategy, warns Tim Morrow, situational awareness technical manager in the CERT division of Carnegie Mellon University’s Software Engineering Institute.  “It’s dangerous to accept zero-trust architecture vendors’ marketing material and product information without considering whether it will meet your organization’s security priority needs and its capability to implement and maintain the architecture,” Morrow says in an email interview.  Mistake Two: Zero trust is too costly to implement  Aside from the costs saved by reducing the risk of a breach, zero trust can help reduce long-term expenses by improving asset utilization and operational effectiveness and lowering compliance costs, says Dimple Ahluwalia, vice president and managing partner, security consulting and systems integration at IBM, via email.
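The core idea behind the approach can be illustrated with a minimal sketch (the field names, segments, and rules below are illustrative assumptions, not a reference implementation): no request is trusted by default, and every request is re-evaluated against identity, device posture, and the sensitivity of the target resource.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # passed strong authentication this session
    device_compliant: bool     # endpoint meets posture policy
    segment: str               # network zone the request originates from
    resource_sensitivity: str  # "low", "medium", or "high"

def authorize(req: Request) -> bool:
    """Zero-trust check: nothing is trusted simply because it is
    'inside' the network -- identity and device posture are verified
    on every request."""
    if not req.user_authenticated or not req.device_compliant:
        return False
    # High-sensitivity resources are reachable only from their own
    # isolated zone -- the network segmentation described above.
    if req.resource_sensitivity == "high" and req.segment != "restricted":
        return False
    return True

# Even an authenticated user on a compliant device cannot reach
# high-sensitivity data from a general-purpose segment.
print(authorize(Request(True, True, "corporate", "high")))   # False
print(authorize(Request(True, True, "restricted", "high")))  # True
```

Real deployments layer many more signals (time, location, behavioral risk scores) into this decision, but the shape is the same: a per-request policy check rather than a one-time perimeter crossing.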
Mistake Three: Underestimating the technical challenges  IT and security leaders often overlook the need to implement and manage foundational security practices before establishing a zero-trust architecture, says Craig Zeigler, an incident response senior manager at accounting and business advisory firm Crowe, in an online interview. They may also fail to identify potential gaps, such as vendor-related issues, and to ensure that the chosen solution is not only compatible with their specific needs but also equipped with the appropriate controls to provide equal or greater security. “In essence, without security leaders having a thorough understanding of their team and endpoints, implementing zero trust becomes a daunting task.”  Mistake Four: Failing to align zero-trust architecture strategy with overall enterprise assets and needs  Cyberattacks are growing in number and severity. “A continuous vigil concerning the organization’s security operations … must be maintained,” Morrow says. The zero-trust architecture must fully mesh with business operations and goals.  Understand your organization’s current assets — data, applications, infrastructure, and workflows — and set up a procedure to update this information periodically, Morrow advises. “Yearly updates of your organization’s assets will definitely no longer be enough.”  Organizations also need to remember that their business and reputation are on the line each and every day, Morrow says. “Not doing your best to reduce your organization’s risks to cyber threats can be very costly.”  Mistake Five: Viewing zero trust as a solution rather than an ongoing strategy  It’s essential for security leaders to understand that zero trust is not a static goal but a dynamic, evolving strategy, says Ricky Simpson, solutions director at Quorum Cyber, a Microsoft cybersecurity partner.
“Building a culture that prioritizes security at every level, from executive leadership to individual employees, is critical to the success of zero-trust initiatives,” he notes via email.  Simpson says continuous education, regular assessments, and a willingness to adapt to new threats and technologies are key components of a sustainable zero-trust framework. “By fostering collaboration and maintaining a vigilant stance, security leaders can better protect their organizations in an increasingly complex and hostile digital environment.”  Mistake Six: Believing that implementing zero trust is simply a one-and-done project  Zero trust is actually a holistic and strategic approach to security that requires ongoing evaluations of trust and threats. “It’s not a quick fix but a long-term shift in strategy,” says Shane O’Donnell, vice president of Centric Consulting’s cybersecurity practice.  Underestimating zero-trust implementation poses two major risks, notes O’Donnell in an email interview. First, unrealistic timelines and expectations can derail project planning, exhaust budgets, and drain resources. Second, hasty or flawed execution can actually create new security vulnerabilities, defeating the very purpose of a zero-trust architecture.  O’Donnell says this misconception can be addressed through continuous education and understanding. “It’s vital for security leaders to realize that transitioning to a zero-trust architecture means substantial technological and organizational changes,” he says. “This strategy should be treated as an ongoing commitment that lasts way beyond the initial set-up stage.”
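Morrow’s advice under Mistake Four, a periodically refreshed asset inventory, can be sketched as follows (a hypothetical illustration; the record fields and the 30-day review window are assumptions, not a standard):

```python
from datetime import datetime, timedelta

# Minimal asset record: what it is, what kind of asset, and when
# its entry was last verified against reality.
assets = [
    {"name": "billing-db",   "type": "data",        "last_verified": datetime(2025, 1, 2)},
    {"name": "payments-api", "type": "application", "last_verified": datetime(2024, 6, 1)},
]

def stale_assets(inventory, now, max_age_days=30):
    """Flag assets whose records are older than the review window --
    the periodic update procedure Morrow recommends in place of a
    yearly refresh."""
    cutoff = now - timedelta(days=max_age_days)
    return [a["name"] for a in inventory if a["last_verified"] < cutoff]

print(stale_assets(assets, now=datetime(2025, 1, 15)))  # ['payments-api']
```

In practice this feed would come from discovery tooling rather than a hand-maintained list, but the staleness check is the point: an inventory is only useful if something routinely flags what has fallen out of date.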

What Security Leaders Get Wrong About Zero-Trust Architecture Read More »

Addressing the Security Risks of AI in the Cloud

The majority of organizations — 89% of them, according to the 2024 State of the Cloud Report from Flexera — have adopted a multicloud strategy. Now they are riding the wave of the next big technology: AI. The opportunities seem boundless: chatbots, AI-assisted development, cognitive cloud computing, and the list goes on. But the power of AI in the cloud is not without risk.  While enterprises are eager to put AI to use, many of them still grapple with data governance as they accumulate more and more information. AI has the potential to amplify existing enterprise risks and introduce entirely new ones. How can enterprise leaders define these risks, both internal and external, and safeguard their organizations while capturing the benefits of cloud and AI?  Defining the Risks  Data is the lifeblood of cloud computing and AI. And where there is data, there is security risk and privacy risk. Misconfigurations, insider threats, external threat actors, compliance requirements, and third parties are among the pressing concerns enterprise leaders must address.  Risk assessment is not a new concept for enterprise leadership teams. Many of the same strategies apply when evaluating the risks associated with AI. “You do threat modeling and your planning phase and risk assessment. You do security requirement definitions [and] policy enforcement,” says Rick Clark, global head of cloud advisory at UST, a digital transformation solutions company.  As AI tools flood the market and various business functions clamor to adopt them, the risk of exposing sensitive data grows and the attack surface expands.  For many enterprises, it makes sense to consolidate data to take advantage of internal AI, but that is not without risk. “Whether it’s for security or development or anything, [you’re] going to have to start consolidating data, and once you start consolidating data you create a single attack point,” Clark points out.
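The risk-assessment step Clark describes is often reduced to a likelihood-times-impact score used to prioritize remediation. A minimal sketch (the threats listed and the 1–5 scale are illustrative assumptions, not a prescribed methodology):

```python
# Classic qualitative risk matrix: score = likelihood x impact,
# both on a 1-5 scale; higher-scoring threats are remediated first.
threats = [
    {"name": "storage bucket misconfiguration", "likelihood": 4, "impact": 4},
    {"name": "insider data exfiltration",       "likelihood": 2, "impact": 5},
    {"name": "third-party API compromise",      "likelihood": 3, "impact": 3},
]

def rank_risks(items):
    """Return (score, name) pairs sorted from highest to lowest risk."""
    scored = [(t["likelihood"] * t["impact"], t["name"]) for t in items]
    return sorted(scored, reverse=True)

for score, name in rank_risks(threats):
    print(f"{score:2d}  {name}")
```

The same scoring pass can be rerun as AI systems are added to the inventory, which is how many teams fold AI-specific threats (such as model poisoning) into an existing risk program rather than building a separate one.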
And those are just the risks security leaders can more easily identify. The abundance of cheap and even free GenAI tools available to employees adds another layer of complexity.  “It’s [like] how we used to have the shadow IT. It’s repeating again with this,” says Amrit Jassal, CTO at Egnyte, an enterprise content management company.  AI comes with novel risks as well.  “Poisoning of the LLMs, that I think is one of my biggest concerns right now,” Clark shares with InformationWeek. “Enterprises aren’t watching them carefully as they’re starting to build these language models.”  How can enterprises ensure the data feeding the LLMs they use hasn’t been manipulated?  This early in the AI game, enterprise teams face the challenge of managing and testing the behavior of systems and tools that they may not yet fully understand.  “What’s … new and difficult and challenging in some ways for our industry is that the systems have a kind of nondeterministic behavior,” Mark Ryland, director of Amazon Security for Amazon Web Services (AWS), explains. “You can’t comprehensively test a system because it’s designed in part to be critical, creative, meaning that the very same input doesn’t result in the same output.”  The risks of AI and cloud can multiply with the complexity of an enterprise’s tech stack. With a multicloud strategy and an often growing supply chain, security teams have to think about a sprawling attack surface and myriad points of risk.  “As an example, we have had to take a close look at least privilege things, not just for our customers but for our own employees as well. And, then that has to be extended not to just one provider but to multiple providers,” says Jassal. “It definitely becomes much more complex.”  AI Against the Cloud  Widely available AI tools will be leveraged not only by enterprises but also by the attackers that target them.
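Ryland’s point above about nondeterministic behavior can be made concrete with a toy sketch: when a model samples its output, the same prompt can yield different completions, so exact-match testing breaks and checks must assert properties of the answer instead. (The `fake_llm` function below is a stand-in simulating a sampled model, not a real API.)

```python
import random

def fake_llm(prompt: str) -> str:
    """Stand-in for a sampled generative model: the same prompt can
    produce a different completion on each call."""
    completions = [
        "Paris is the capital of France.",
        "The capital of France is Paris.",
        "France's capital city is Paris.",
    ]
    return random.choice(completions)

prompt = "What is the capital of France?"
outputs = {fake_llm(prompt) for _ in range(20)}

# Comparing against one exact expected string is useless here.
# Instead, assert a property every acceptable answer must satisfy.
assert all("Paris" in answer for answer in outputs)
```

This is why testing approaches for these systems tend toward property-based checks, output classifiers, and statistical evaluation over many runs rather than the deterministic input/output tests traditional software relies on.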
At this point, the threat of AI-fueled attacks on cloud environments is moderately low, according to IBM’s X-Force Cloud Threat Landscape Report 2024. But the escalation of that threat is easy to imagine.  AI could exponentially increase threat actors’ capabilities via coding assistance, increasingly sophisticated campaigns, and automated attacks.  “We’re going to start seeing that AI can gather information to start making … personalized phishing attacks,” says Clark. “There’s going to be adversarial AI attacks, where they exploit weaknesses in your AI models even by feeding data to bypass security systems.”  AI model developers will, naturally, attempt to curtail this activity, but potential victims cannot assume this risk goes away. “The providers of GenAI systems obviously have capabilities in place to try to detect abusive use of their systems, and I’m sure those controls are reasonably effective but not perfect,” says Ryland.  Even if enterprises opt to eschew AI for now, threat actors are going to use that technology against them. “AI is going to be used in attacks against you. You’re going to need AI to combat it, but you need to secure your AI. It’s a bit of a vicious circle,” says Clark.  The Role of Cloud Providers  Enterprises still have responsibility for their data in the cloud, while cloud providers play their part by securing the infrastructure of the cloud.  “The shared responsibility still stays,” says Jassal. “Ultimately if something happens, a breach etcetera, in Egnyte’s systems … Egnyte is responsible for it whether it was due to a Google problem or Amazon problem. The customer doesn’t really care.”  While that fundamental shared responsibility model remains, does AI change that conversation at all?  Model providers are now part of the equation. “Model providers have a distinct set of responsibilities,” says Ryland.
“Those entities … [take] on some responsibility to ensure that the models are behaving according to the commitments that are made around responsible AI.”  While different parties — users, cloud providers, and model providers — have different responsibilities, AI is giving them new ways to meet those responsibilities.   AI-driven security, for example, is going to be essential for enterprises to protect their

Addressing the Security Risks of AI in the Cloud Read More »

Are We Ready for Artificial General Intelligence?

The artificial intelligence evolution is well underway. AI technology is changing how we communicate, do business, manage our energy grid, and even diagnose and treat illnesses. And it is evolving more rapidly than we could have predicted. Both the companies that produce the models driving AI and the governments attempting to regulate this frontier environment have struggled to institute appropriate guardrails.  In part, this is due to how poorly we understand how AI actually functions. Its decision-making is notoriously opaque and difficult to analyze. Thus, regulating its operations in a meaningful way presents a unique challenge: How do we steer a technology away from making potentially harmful decisions when we don’t exactly understand how it makes its decisions in the first place?  This is becoming an increasingly pressing problem as artificial general intelligence (AGI) and its successor, artificial superintelligence (ASI), loom on the horizon.  AGI is AI that equals or surpasses human intelligence; ASI is AI that exceeds human intelligence entirely. Until recently, AGI was believed to be a distant possibility, if it was achievable at all. Now, an increasing number of experts believe that it may be only a matter of years until AGI systems are operational.  As we grapple with the unintended consequences of current AI applications — understood to be less intelligent than humans because of their typically narrow and limited functions — we must simultaneously attempt to anticipate and obviate the potential dangers of AI that might match or outstrip our capabilities.  AI companies are approaching the issue with varying degrees of seriousness — sometimes leading to internal conflicts. National governments and international bodies are attempting to impose some order on the digital Wild West, with limited success. So, how ready are we for AGI? Are we ready at all?
InformationWeek investigates these questions with insights from Tracy Jones, associate director of digital consultancy Guidehouse’s data and AI practice; May Habib, CEO and co-founder of generative AI company Writer; and Alexander De Ridder, chief technology officer of AI developer SmythOS.  What Is AGI and How Do We Prepare Ourselves?  The boundaries between narrow AI, which performs a specified set of functions, and true AGI, which is capable of broader cognition in the same way that humans are, remain blurry.  As Miles Brundage, whose recent departure as senior advisor of OpenAI’s AGI Readiness team has spurred further discussion of how to prepare for the phenomenon, says, “AGI is an overloaded phrase.”  “AGI has many definitions, but regardless of what you call it, it is the next generation of enterprise AI,” Habib says. “Current AI technologies function within pre-determined parameters, but AGI can handle much more complex tasks that require a deeper, contextual understanding. In the future, AI will be capable of learning, reasoning, and adapting across any task or work domain, not just those pre-programmed or trained into it.”  AGI will also be capable of creative thinking and action independent of its creators. It will be able to operate in multiple realms, completing numerous types of tasks. It is even possible that AGI may, in its general effect, function as a person. There is some suggestion that personality qualities could be encoded into a hypothetical AGI system, leading it to act in ways that align with certain sorts of people, with particular personality traits influencing its decision-making.  However it is defined, AGI appears to be a distinct possibility in the near future. We simply do not know what it will look like.  “AGI is still technically theoretical. How do you get ready for something that big?” Jones asks.
“If you can’t even get ready for the basics — you can’t tie your shoe — how do you control the environment when it’s 1,000 times more complicated?”  Such a system, approaching sentience, may be capable of human failings, whether through simple malfunction, misdirection caused by hacking, or even intentional disobedience of its own. If any human personality traits are encoded, intentionally or not, they ought to be benign or at least beneficial — a highly subjective and difficult determination to make. AGI needs to be designed with the idea that it can ultimately be trusted with its own intelligence — that it will act with the interests of its designers and users in mind. Its goals must be closely aligned with our own.  “AI guardrails are and will continue to come down to self-regulation in the enterprise,” Habib says. “While LLMs can be unreliable, we can get nondeterministic systems to do mostly deterministic things when we’re specific with the outcomes we want from our generative AI applications. Innovation and safety are a balancing act. Self-regulation will continue to be key for AI’s journey.”  Disbandment of OpenAI’s AGI Readiness Team  Brundage’s departure from OpenAI in late October following the disbandment of its AGI Readiness team sent shockwaves through the AI community. He joined the company in 2018 as a researcher and had led its policy research since 2021, serving as a key watchdog for potential issues created by the company’s rapidly advancing products. The dissolution of his team and his departure followed on the heels of the implosion of its Superalignment team in May, which had served a similar oversight purpose.  Brundage said that he would either join a nonprofit focused on monitoring AI concerns or start his own.
While both he and OpenAI said the split was amicable, observers have read between the lines, speculating that his concerns were not taken seriously by the company. The members of the team who stayed have been shuffled to other departments, and other significant figures have also left the company in the past year.  Though the Substack post in which he extensively described his reasons for leaving and his concerns about AGI was largely diplomatic, Brundage stated that

Are We Ready for Artificial General Intelligence? Read More »