Information Week

Court Pulls Plug on Biden’s Net Neutrality Revival, Limits FCC Power

A federal appeals court this week ruled that the Federal Communications Commission and the Biden administration overstepped their authority by reviving net neutrality rules last year. A three-judge panel of the 6th Circuit Court of Appeals said the FCC was wrong to classify broadband as a telecommunications service instead of an information service, the distinction that formed the basis for the FCC’s authority to enforce net neutrality rules. Under the rules, which were struck down during the first Trump administration, broadband internet providers could not block or throttle internet access or speed up access to certain websites that pay higher fees.

Net neutrality proponents say the rules ensure open and fair access to the internet, while detractors say the rules stifle innovation and weaken competition. Net neutrality was first approved in 2015 under the Obama administration and struck down during the first Trump administration in 2017. In a party-line vote, the FCC restored the rules last April.

Democratic FCC Chair Jessica Rosenworcel urged Congress to take action in response to Thursday’s court decision. “Consumers across the country have told us again and again that they want an internet that is fast, open, and fair,” Rosenworcel said in a statement. “With this decision it is clear that Congress now needs to heed their call, take up the charge for net neutrality, and put open internet principles in federal law.”

In a post on X (formerly Twitter), Tim Wu, a Columbia law professor and adviser to the Biden administration on competition and antitrust policy, disagreed with the court’s ruling. “When Congress passed the Communications Act in 1934, it clearly wanted the American people to enjoy non-discriminatory, low-cost communications services. Finding otherwise is blatant judicial activism that puts corporate interests over American democracy,” he wrote.

Republican FCC commissioner Brendan Carr applauded the court’s decision, deriding the FCC’s “nearly limitless” power over the internet as a utility under Title II of the Communications Act. “Rather than focusing on a broadband agenda that would bridge the digital divide, the Biden Administration chose to waste time and resources imposing these unnecessary command and control regulations,” Carr said in a statement. “I am pleased that the appellate court invalidated President Biden’s internet power grab by striking down these unlawful Title II regulations. But the work to unwind the Biden Administration’s regulatory overreach will continue.”

In the published opinion, the judges wrote that the action to restore net neutrality “resurrected the FCC’s heavy-handed regulatory regime.” Matt Wood, net neutrality advocate and general counsel of Free Press, scoffed at the court’s ruling. “It’s rich to think of Donald Trump and Elon Musk’s hand-picked FCC chairman characterizing light-touch broadband rules as heavy-handed regulation, while scheming to force carriage of viewpoints favorable to Trump on the nation’s broadcast airwaves and social media sites,” Wood said in a statement. “With this ruling, the 6th Circuit has for now denied the public the internet access service that it deserves…”


Y2K and Infrastructure Resilience 25 Years Later

In 1999, under the digital gleam of Keanu Reeves in “The Matrix,” IT teams either sat confidently with the changes they had made to resolve the Y2K bug or waited with bated breath to see if the fixes held. The arrival of the year 2000 did not bring the feared digital apocalypse, thanks to fixes that taught software and hardware to handle what seemed like a simple, yet crucial, change to date formats. Now, more than a generation later, IT infrastructure faces a menagerie of other potential risks that could bring about systemic disruptions — some attributed to errors, others stemming from bad actors.

This episode of DOS Won’t Hunt saw Greg Rivera, vice president of product at CAST; Paul Davis, field CISO at JFrog; and Theresa Lanowitz, chief evangelist at LevelBlue, discuss how computer infrastructure has evolved in the 25 years since worries about Y2K launched IT teams into action. Are there rules and norms from legacy tech of the Y2K era that no longer apply in the time of cloud, edge, and all else? Would the mass IT mobilization that resolved Y2K be easier or harder to pull off today to address infrastructure issues? Did bad actors learn any tricks from Y2K for potential systemwide cyberattacks?
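For readers who never wrangled the bug itself: the core problem was software that stored years as two digits, leaving “00” ambiguous between 1900 and 2000. One common remediation, date windowing, is sketched below in Python; the pivot value of 50 is an illustrative assumption, not a standard.

def expand_two_digit_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year via a windowing rule, a common Y2K fix.

    Years below the pivot are read as 20xx, the rest as 19xx. The
    pivot is application-specific; 50 here is purely illustrative.
    """
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 2000 + yy if yy < pivot else 1900 + yy

# Once expanded, "00" correctly sorts after "99", avoiding the rollover bug.
assert expand_two_digit_year(99) == 1999
assert expand_two_digit_year(0) == 2000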


Federal Cybersecurity Policy Still Lags Rapid Change

Water, power, sewage, banking, education, you name it — all these life essentials have something in common: they rely on information technology. Increasingly complex and insecure technology. Meanwhile, threat actors have the means to launch ever-rising numbers of attacks on critical applications. The revelation this past August of the huge data breach at National Public Data of Americans’ Social Security numbers and other personal data is a stunning Exhibit A.

The number of reported vulnerabilities has skyrocketed over the last 10 years. In fact, the number of new software vulnerabilities cataloged in the federal National Vulnerability Database has increased an average of 29% per year over the last seven years. Every year sets a record high, and with the introduction of AI models that can write malicious code and find security holes, there’s no reason to think that trend will reverse. The federal government’s contribution to cybersecurity has thus far come through guidance and influence or by wielding its purchasing power as a huge IT consumer. Those have some value but clearly aren’t having much impact.

The public is largely unaware of how low the bar is presently set in software security. Modern software is never written entirely from scratch. Instead, developers use an “assembly” approach that pulls together existing code packages, often using open-source software built and maintained by developers not beholden in any way to the company making the final product.

As security vulnerabilities and active malware become increasingly common, all companies find themselves shouldering increasing security risk. Government organizations such as the Cybersecurity and Infrastructure Security Agency (CISA) have spent a great deal of time, money, and effort over the last few years trying to convince software vendors to adopt basic security practices and Software Bills of Materials (SBOMs). A vendor’s SBOM tells the customer what is in the software — but not whether the contents are secure. CISA’s actions have not moved the needle on stopping breaches. US cybercrime costs reached an estimated $320 billion as of last year. Between 2017 and 2023, costs grew by over $300 billion.

Companies say they’re doing more about cybersecurity, but breaches continue, and the private market is not correcting poor behavior. Stock charts barely register a blip when companies report breaches now. Congress has not yet stepped in, hampered, perhaps, by an inadequate understanding of the issue.

Urgent action is, consequently, needed. Government stepped in to protect our food and medicine by establishing the Food and Drug Administration, intervened to make our automobiles safer by establishing the National Highway Traffic Safety Administration, and acted to ensure job safety by establishing the Occupational Safety and Health Administration. When new technology or industrial development has threatened public health and safety, the government has created new regulatory bodies to protect that health and safety. And according to public polling, while Americans may be largely dissatisfied with the federal government in broad terms, they still want it to help keep the populace safe, including providing protection from unsafe products.
The upshot is that Congress should establish a new regulatory body to evolve the “guidance” currently provided by CISA and presidential executive orders, coupled with oversight powers based on an expanded definition of critical software and hardware. What specifically defines “critical” here will of course need to be determined, but the definition currently used by CISA simply does not provide sufficient scope to ensure America’s cybersecurity.

The current patchwork of industry self-regulation — with each federal department doing its best to oversee its respective industry area — leaves too many gaps and will not scale even to the challenges we already face. The new regulatory body’s charter should establish enforceable minimum security standards for private companies that are deemed critical to the nation. Those standards should go beyond CISA’s current definition of critical infrastructure, which does not include companies essential to our everyday lives, such as Microsoft, Google, payment providers, and cybersecurity firms like CrowdStrike.

This new regulator will also need the power to audit companies against those standards, selectively publish findings, share findings with other regulators such as the SEC, establish fines, and, in egregious cases, pull products from the market. These powers follow the established scope of current agencies such as the FDA and NHTSA. Without these powers of regulation over essential software, any new agency will be reduced to providing “guidance,” and our nation will continue to be at risk.

Because CISA already sits under the Department of Homeland Security, the above could be accomplished either by expanding CISA’s jurisdiction and granting it these powers and responsibilities, or by establishing a new agency. Robust cybersecurity regulation and oversight have become essential if we are to protect American citizens, companies, and governments from cyberattacks. Our unpredictable technological and geopolitical environments will demand no less.
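To make the growth rate cited above concrete: 29% annual growth compounds to roughly a sixfold increase over seven years. Here is a minimal sketch of that arithmetic in Python, using a hypothetical starting count rather than actual NVD figures.

# Compound growth of cataloged vulnerabilities at 29% per year.
# The starting count of 10,000 is hypothetical, chosen only to
# illustrate the compounding; it is not an actual NVD figure.
base_count = 10_000
rate = 0.29

count = base_count
for year in range(1, 8):
    count *= 1 + rate
    print(f"year {year}: ~{count:,.0f} new vulnerabilities")

# After seven years: 10,000 * 1.29**7 is about 59,400,
# nearly a sixfold increase.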


Achieving Network TCO

The global network infrastructure market is predicted to reach $256 billion by 2028, the Wi-Fi 7 market topped $1 billion in 2024, network and cloud security investments hit all-time highs by the end of 2023, and the edge computing market is set to grow by 1108% over the next eight years. All present urgent needs to expand network budgets. However, as network planners and managers are well aware, their annual budgets are not growing exponentially. What’s the best way to keep pace with network technology when your budget only goes so far?

First, let’s look at the business drivers that create a need for significant network upgrades: Technologies like Wi-Fi 6 and Wi-Fi 7 will deliver network services to internal users more quickly and will support a virtually boundless number of network IP addresses for new devices as they are added. Edge computing will continue to be deployed at manufacturing plants, retail stores, and remote offices. New cloud-based network management and security solutions enable you to virtualize your network, with the likely result that sites will use hybrid networks that are part cloud-based and part on-premises. As these new networking technologies and tools are implemented, network staff will need to be recruited or trained to operate them.

Network managers can build compelling cases for all of these upgrades, but at some point the CIO, CFO, CEO, or all three will ask the TCO (total cost of ownership) question: What is it costing us now just to keep things going, and if we agree to make a major investment in technology XYZ, how long will it take us to recoup our investment?

Understanding the elements of TCO

In past practice, it was relatively straightforward to calculate network TCO. You totaled your costs for network hardware, software, staff labor, contractor labor, service contracts, power consumption, floor space, and so on, and then came up with numbers for operating expenses and asset capitalizations. The assumption was that you would swap out aging assets or add new ones in any given budget year, but incrementally, in a phased approach.

The question is, will this type of TCO approach be sustainable in the long term, given the many business drivers for edge computing, rapid information transport, security, and network monitoring that are contending for investments all at once? The answer is no. TCO calculation methodologies won’t fundamentally change, but the elements that dominate TCO discussions will.

Navigating a new TCO landscape

The TCO discussion should shift from a unilateral cost justification (and payback) of a proposed technology to a discussion of the opportunity costs for the business if a network infrastructure investment is canceled or delayed. If a company decides strategically to decentralize manufacturing and distribution but is also wary of adding headcount, it’s going to seek out edge computing and network automation. It’s also likely to want robust security at its remote sites, which means investments in zero-trust networks and observability software that can assure the same level of enterprise security is applied at remote sites as at central headquarters. In cases like this, it shouldn’t be the network manager or even the CIO who is solely responsible for making the budget case for network investments.
Instead, network technology investments should be packaged into the total remote-business recommendation and investment that other C-level executives (e.g., the COO or VP of operations) argue for alongside the CIO and/or network manager, HR, and others. In this scenario, the TCO of a network technology investment is weighed against the cost of not doing it at all and missing a corporate opportunity to decentralize operations, which can’t be accomplished without the technology needed to run it.

Getting budget approvals

The takeaway from the above example for network managers is that they need to consider and present the business value and opportunity costs of every network funding proposal they make. The more network managers do this, the more successful they’ll be in securing funding. Here is an example of a network need that is likely to link closely with business opportunities:

Software monitoring and security. Companies will continue to move more IT to the cloud because they like the “pay per use” model. Accordingly, more networks are likely to evolve into hybrid combinations of both internal and cloud-based resources. At the same time, companies want airtight security on both cloud-based and internal networks so they can avoid data breaches and intellectual property theft. Most companies already run an IAM (identity and access management) system that gives network staff a “single pane of glass” view of all user access and permissions, whether internal or in the cloud. Unfortunately, an IAM solution doesn’t provide the same level of security visibility and granularity that CIEM (cloud infrastructure entitlement management) software does, so a network manager might propose investing in CIEM, although CIEM can be quite expensive. In this example, traditional TCO arguments will certainly come up, but so should the opportunity cost (and risk) of not having all of the company’s cloud-based property secured.

A final word about network TCO

Network managers struggle annually to upgrade networks within the budgetary dollars they’re allotted — and many dread the TCO justification and payback discussions for network infrastructure investments that others fail to see value in. This page can be turned by presenting the business opportunity costs if a proposal is denied or deferred — and the case is best presented alongside managers from the business who want to advance corporate strategies that require network infrastructure investment. It’s time to make the network an integral and strategic part of IT and the business, not just a supporting player.
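As a back-of-the-envelope illustration of the shift described above, the Python sketch below places a traditional TCO payback figure next to an opportunity-cost framing. Every number is a hypothetical placeholder, not a benchmark.

# Hypothetical comparison of traditional TCO payback vs. the
# opportunity cost of deferring a network investment. All figures
# below are made-up placeholders for illustration only.
annual_costs = {
    "hardware": 400_000,
    "software": 250_000,
    "staff_labor": 600_000,
    "contractor_labor": 150_000,
    "service_contracts": 120_000,
    "power_and_floor_space": 80_000,
}
current_tco = sum(annual_costs.values())

investment = 1_200_000      # proposed edge/security upgrade
annual_savings = 300_000    # assumed reduction in labor and contracts
payback_years = investment / annual_savings

# Opportunity-cost framing: revenue from decentralized operations
# that cannot happen without the upgrade.
forgone_annual_revenue = 2_000_000

print(f"Current annual TCO: ${current_tco:,}")
print(f"Simple payback on upgrade: {payback_years:.1f} years")
print(f"Annual opportunity cost of deferral: ${forgone_annual_revenue:,}")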


Why Most Return to Office Mandates Will Fail

After surviving the pandemic with work-from-home policies, some organizations have decided that work should return to its pre-pandemic state, in which most employees were expected to be in the office at least part of the week, if not the whole week.

The problem with that is twofold: First, organizations admitted that they were pleasantly surprised by remote work productivity, but now they’re saying, “Yeah, but training is easier and water cooler conversations are golden.” While those are fair points, organizations are forgetting that employees may vote with their feet.

One reason is that employees discovered a new work-life balance during the pandemic that many do not want to give up. For some, that means flexible hours. For others, it’s the ability to be present at work and at home simultaneously.

“Especially [in] the United States, [workers] have moved from big cities or simply to other areas far from their corporate offices, and their children have started attending schools near their new homes to work remotely. So, the requirement to start working in the office again means either a new move or a job change,” says Diana Soprana Blažaitienė, international HR and remote work expert for hospitality and IT sectors across Scandinavia and Germany. “Employees who are told to return to the office are also unhappy about the increased costs of work: clothing, transportation, lunches, [and commute time].”

Return to office (RTO) is the main reason why some people are changing jobs right now, particularly Gen Z.

“Gen Z, who prioritize work-life balance, will undoubtedly choose organizations without a strict RTO policy. This means that top talent and more candidates in general will be attracted by those that offer the opportunity to work remotely at least part of the time,” says Blažaitienė. “Even some employees who come to me for selections identify the RTO policy as a deception by the employer because they were hired when they could work remotely, and now they are required to return to the office.”

The real reason RTO is happening is that some executives and managers feel more in control, or they believe remote work processes are not properly structured and managed. There’s also the real estate issue of leased and owned properties that are not being used to capacity.

“I think that CEOs need to understand that the factory work structure — work from 8 to 5 — is already outdated and we are inevitably entering an era of a different perception and nature of work,” says Blažaitienė.

Dovilė Gelčinskaitė, senior talent manager at omnichannel marketing platform Omnisend, agrees.

“RTO mandates ignore the true purpose of on-site work: fostering creativity and teamwork. At Omnisend, we recognize that brainstorming, workshops and team building can’t be replicated remotely. However, we’ve also found that rigid, outdated workplace models fail to reflect how much the nature of work has changed,” says Gelčinskaitė. “Flexibility is now an expectation, especially among younger generations, so finding that balance between flexibility and in-person interactions is crucial. Companies that fail to do so will lose great talent to companies that do.”

RTO Adds to Stress and Burnout

Organizations are facing pushback on their RTO policies, but employee exoduses will send a much more powerful message.
“In general, people do not like feeling that things are happening to them, and that they have no say, or choice in the matter. So, when you suddenly pivot to an RTO mandate, employees will take it personally, as it does impact their personal lives, and they will likely feel demoralized,” says Ashley Alexander, chief people officer at observability platform Chronosphere. “In most cases, employees are professional adults, so making knee-jerk decisions is going to cause unnecessary stress or burnout.”

One reason RTO policies fail is that employees who were forced back to the office spend their day on Zoom calls with colleagues who aren’t physically present.

“To avoid [this annoyance], there needs to be a thoughtful strategy ensuring pods or teams collaborating closely or benefiting from shared learning are in the office together,” says Alexander. “A sudden shift from remote work to RTO often highlights how dispersed teams have become. Without a clear location-based strategy tied to roles and responsibilities, the transition can feel chaotic and ineffective.”

A better approach is to clearly explain how RTO benefits employees, or how the mandate positively impacts customers and the ability to get work done more efficiently. There should also be reasonable time given for employees to opt in or out of the RTO mandate, and executives should have to follow the same expectations as everyone else.

According to Rachel Marcuse, COO at organizational consulting firm ReadySet, many employees see RTO as a regressive, antiquated move.

“Employees may be less engaged during a workday bookended by commutes and less than enthusiastic about the financial and climate costs of traveling to the office daily,” says Marcuse. “[B]usinesses could lose out on the best Gen Z talent, with recent studies showing that Gen Zers want the option to work remotely — even as they also crave some level of in-person collaboration.”

Downstream Effects

As companies enforce their RTO policies, there are downstream effects, the most obvious of which is getting employees to change their behavior yet again.

“More rigid mandates shrink the available talent pool, especially for organizations in smaller markets. Remote work has been a boon for these companies, granting access to talent they wouldn’t typically be able to attract,” says Darrin Murriner, CEO and co-founder at automated technology coaching platform Cloverleaf. “For candidates, rigid RTO decreases the number of available job opportunities, creating a lose-lose situation for both sides.”

For example, such mandates increase operational costs, including housing in-office employees and managing relocations. These policies can also create disruption and uncertainty, driving valuable employees to reconsider their roles within the organization.

“For employees,


Supply Chain Risk Mitigation Must Be a Priority in 2025

COMMENTARY

Israel’s electronic pager attacks targeting Hezbollah in September highlighted the dangerous ramifications of a weaponized supply chain. The attacks, which leveraged remotely detonated explosives hidden inside pager batteries, injured nearly 3,000 people across Lebanon, serving as a worst-case reminder of the inherent risk that lies within global supply networks. The situation wasn’t just another doomsday scenario crafted by financially motivated vendors hoping to sell security products. It was a legitimate, real-world byproduct of our current reality amid the escalating proliferation of adversarial cybercrime.

It also underscored the dangers of relying on third-party hardware and software with roots in foreign countries of concern — something that happens more often than one might expect. For example, on Sept. 12, a US House Select Committee investigation revealed that 80% of the ship-to-shore cranes at American ports are manufactured by a single Chinese government-owned company. While the committee did not find evidence that the company used its access maliciously, the vulnerability could have enabled China to manipulate US maritime equipment and technology in the wake of geopolitical conflict.

As nation-state actors explore new avenues for gaining geopolitical advantage, securing supply chains must be a shared priority among the cybersecurity community in 2025. Verizon’s “2024 Data Breach Investigations Report” found that the use of zero-day exploits to initiate breaches surged by 180% year-over-year — and among them, 15% involved a third-party supplier. The right vulnerability at the wrong time can put critical infrastructure in the crosshairs of a consequential event.

Implementing impactful supply chain protections is far easier said than done, due to the complexity, scale, and integration of modern supply chain ecosystems. While there isn’t a silver bullet for eradicating threats entirely, prioritizing effective supply chain risk management principles in 2025 is a critical place to start. It will require an optimal balance of rigorous supplier validation, purposeful data exposure, and meticulous preparation.

Rigorous Supplier Validation: Moving Beyond the Checkboxes

Whether it’s cyber warfare or ransomware, modern supply chain attacks are too sophisticated for organizations to fall short on supplier validation. Now is a vital time to move beyond self-reported security assessments and vendor questionnaires and migrate toward more comprehensive validation processes that prioritize regulatory compliance, response readiness, and secure-by-design principles.

Ensuring adherence to evolving industry standards must be a foundational driver of any supplier validation strategy. Is your supplier positioned to meet the European Union’s Digital Operational Resilience Act (DORA) and Cyber Resilience Act (CRA) regulations? Are they aligned with the National Security Agency’s CNSA 2.0 timelines to defend against quantum-based attacks? Do their products possess the cryptographic agility to integrate the National Institute of Standards and Technology’s (NIST’s) new Post-Quantum Cryptography (PQC) algorithms by 2025? These are all important value drivers to consider when selecting a new partner. Chief information security officers (CISOs) should still push further by mandating actual evidence of cyber resilience.
Conduct annual on-site security audits of suppliers that assess everything from physical security measures and solution stacks to IT workflows and employee training programs. In addition, require your suppliers to provide quarterly penetration testing reports and vulnerability assessments, then thoroughly review the documents and track remediation efforts. Equally crucial to rigorous validation is gauging a supplier’s incident response readiness via notification procedures, communication protocols, practitioner expertise, and cross-functional collaboration.

Any joint cyber-defense strategy should also be underpinned by a shared commitment to secure-by-design principles and robust product security testing protocols that are integrated into supply chain risk assessments. Implemented during the early stages of product development, secure-by-design helps reduce an application’s exploit surface before it is made available for broad use. Product security testing provides a comprehensive understanding of how a particular product will affect your threat model and risk posture.

Purposeful Data Exposure: Less Is Always More

Less (access) is more when it comes to protecting data in supply chain environments. Organizations should adopt purposeful approaches to data sharing, carefully considering what information is truly necessary for a third-party partnership to succeed. Limiting the exposure of sensitive information to external suppliers via scaled zero-trust concepts will reduce your supply chain attack surface exponentially, which in turn simplifies the management of third-party risk.

An important step in this process involves implementing stringent access controls that restrict credentials to only essential data and systems. Data aging and retention policies also play a crucial role here. Automating processes to phase out legacy or unnecessary data helps ensure that even if a breach occurs, the damage is contained and privacy is maintained. Using encryption aggressively across all data touchpoints accessible to third parties adds an extra layer of protection against undetected breaches that occur throughout the wider supply chain ecosystem.

Meticulous Preparation: Assumption-of-Breach Mindset

As supply chain attacks accelerate, organizations must operate under the assumption that a breach isn’t just possible — it’s probable. An “assumption of breach” mindset drives more meticulous preparation via comprehensive supply chain incident response and risk mitigation.

Preparation measures should begin with developing and regularly updating agile incident response processes that specifically cater to third-party and supply chain risks. To be effective, these processes need to be well-documented and frequently practiced through realistic simulations and tabletop exercises. Such drills help identify potential gaps in the response strategy and ensure that all team members understand their roles and responsibilities during a crisis.

Maintaining an up-to-date contact list for all key vendors and partners is another crucial component of preparation. In the heat of an incident, knowing exactly who to call at Vendor X, Y, or Z can save precious time and potentially limit the scope of a breach. This list should be regularly audited and updated to account for personnel changes or shifts in vendor relationships.
Organizations should also have a clear understanding of the shutdown and containment procedures for each critical application or system within their supply chain. While it’s impossible to predict every potential scenario, a well-positioned team armed with comprehensive response plans and intimate knowledge of its supply chain environment is far better equipped to combat adversarial threat actors.
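The data-aging point above lends itself to automation. Below is a minimal Python sketch of a retention sweep over records shared with third parties; the 180-day window and the record structure are illustrative assumptions, not policy recommendations.

# Minimal retention-sweep sketch: flag third-party-shared records
# older than a retention window for purging. The 180-day window and
# the record fields are illustrative assumptions only.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)

shared_records = [
    {"id": "inv-001", "shared_with": "vendor-x",
     "created": datetime(2024, 1, 15, tzinfo=timezone.utc)},
    {"id": "inv-002", "shared_with": "vendor-y",
     "created": datetime(2024, 12, 1, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
keep, purge = [], []
for record in shared_records:
    (purge if now - record["created"] > RETENTION else keep).append(record)

for record in purge:
    # In practice: delete or crypto-shred the data, then log the
    # action so the purge itself is auditable.
    print(f"purging {record['id']} shared with {record['shared_with']}")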


Does Desktop AI Come With a Side of Risk?

Artificial intelligence has come to the desktop. Microsoft 365 Copilot, which debuted last year, is now widely available. Apple Intelligence just reached general beta availability for users of late-model Macs, iPhones, and iPads. And Google Gemini will reportedly soon be able to take actions through the Chrome browser under an in-development agent feature dubbed Project Jarvis.

The integration of large language models (LLMs) that sift through business information and provide automated scripting of actions — so-called “agentic” capabilities — holds massive promise for knowledge workers but also raises significant concerns for business leaders and chief information security officers (CISOs). Companies already suffer from significant problems with oversharing of information and failure to limit access permissions — 40% of firms delayed their rollout of Microsoft 365 Copilot by three months or more because of such security worries, according to a Gartner survey.

The broad range of capabilities offered by desktop AI systems, combined with the lack of rigorous information security at many businesses, poses a significant risk, says Jim Alkove, CEO of Oleria, an identity and access management platform for cloud services.

“It’s the combinatorics here that actually should make everyone concerned,” he says. “These categorical risks exist in the larger [large language] model-based technology, and when you combine them with the sort of runtime security risks that we’ve been dealing with — and information access and auditability risks — it ends up having a multiplicative effect on risk.”

Desktop AI will likely take off in 2025. Companies are already looking to rapidly adopt Microsoft 365 Copilot and other desktop AI technologies, but only 16% have pushed past initial pilot projects to roll out the technology to all workers, according to Gartner’s “The State of Microsoft 365 Copilot: Survey Results.” The overwhelming majority (60%) are still evaluating the technology in a pilot project, while a fifth of businesses haven’t even reached that far and are still in the planning stage.

Most workers are looking forward to having a desktop AI system to assist them with daily tasks. Some 90% of respondents believe their users would fight to retain access to their AI assistant, and 89% agree that the technology has improved productivity, according to Gartner.

Bringing Security to the AI Assistant

Unfortunately, the technologies are black boxes in terms of their architecture and protections, and that means they lack trust. With a human personal assistant, companies can do background checks, limit their access to certain technologies, and audit their work — measures that have no analogous controls for desktop AI systems at present, says Oleria’s Alkove.

AI assistants — whether they are on the desktop, on a mobile device, or in the cloud — will have far more access to information than they need, he says. “If you think about how ill-equipped modern technology is to deal with the fact that my assistant should be able to do a certain set of electronic tasks on my behalf, but nothing else,” Alkove says. “You can grant your assistant access to email and your calendar, but you cannot restrict your assistant from seeing certain emails and certain calendar events. They can see everything.”

This ability to delegate tasks needs to become part of the security fabric of AI assistants, he says.
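One way to picture the delegation model Alkove describes is a grant that pairs a narrow scope with an expiry, so an assistant can act only within named permissions and only for a limited time. The Python sketch below is a hypothetical illustration of that idea; none of these names correspond to a real assistant API.

# Hypothetical sketch of scoped, time-limited delegation for an AI
# assistant. The class and scope names are invented for illustration.
from datetime import datetime, timedelta, timezone

class DelegationGrant:
    def __init__(self, scopes: set[str], ttl: timedelta):
        self.scopes = scopes  # e.g. {"calendar:read"}
        self.expires = datetime.now(timezone.utc) + ttl

    def allows(self, action: str) -> bool:
        # Deny anything outside the scope set or past the expiry.
        return action in self.scopes and datetime.now(timezone.utc) < self.expires

# Grant the assistant read-only calendar access for one hour.
grant = DelegationGrant({"calendar:read"}, ttl=timedelta(hours=1))
assert grant.allows("calendar:read")
assert not grant.allows("email:read")  # everything else is denied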
Cyber-Risk: Social Engineering Both Users & AI

Without such security design and controls, attacks will likely follow. Earlier this year, a prompt injection attack scenario highlighted the risks to businesses. Security researcher Johann Rehberger found that an indirect prompt injection attack through email, a Word document, or a website could trick Microsoft 365 Copilot into taking on the role of a scammer, extracting personal information, and leaking it to an attacker. Rehberger initially notified Microsoft of the issue in January and provided the company with information throughout the year. It’s unknown whether Microsoft has a comprehensive fix for the issue.

The ability to access the capabilities of an operating system or device will make desktop AI assistants another target for fraudsters who have been trying to get users to take actions. Instead, they will now focus on getting an LLM to take actions, says Ben Kliger, CEO of Zenity, an AI agent security firm.

“An LLM gives them the ability to do things on your behalf without any specific consent or control,” he says. “So many of these prompt injection attacks are trying to social engineer the system — trying to go around other controls that you have in your network without having to socially engineer a human.”

Visibility Into AI’s Black Box

Most companies lack visibility into, and control over, the security of AI technology in general. To adequately vet the technology, companies need to be able to examine what the AI system is doing, how employees are interacting with it, and what actions are being delegated to the AI, Kliger says.

“These are all things that the organization needs to control, not the agentic platform,” he says. “You need to break it down and actually look deeper into how those platforms are being utilized, and how people build and interact with those platforms.”

The first step in evaluating the risk of Microsoft 365 Copilot, Google’s purported Project Jarvis, Apple Intelligence, and other technologies is to gain this visibility and have controls in place to limit an AI assistant’s access at a granular level, says Oleria’s Alkove. Rather than a big bucket of data that a desktop AI system can always access, companies need to be able to control access by the eventual recipient of the data, their role, and the sensitivity of the information, he says.

“How do you grant access to portions of your information and portions of the actions that you would normally take as an individual, to that agent, and also only for a period of time?” Alkove asks. “You might only want the


Secure By Demand: Key Principles for Vendor Assessments

In today’s interconnected world, the software supply chain is a vast network of fragile connections that has become a prime target for cybercriminals. The complex nature of the software supply chain, with its numerous components and dependencies, makes it vulnerable to exploitation. Organizations rely on software from numerous vendors, each with its own security posture, which can expose them to risk if not properly managed.

The Cybersecurity and Infrastructure Security Agency (CISA) recently published a comprehensive “Secure by Demand Guide: How Software Customers Can Drive a Secure Technology Ecosystem” to help organizations understand how to secure their software supply chains effectively. With both vendors and threat actors increasingly leveraging AI, this guide is a timely resource for organizations seeking to more effectively navigate their software vendor relationships.

Importance of Securing the Software Supply Chain

Supply chain attacks, such as the infamous Change Healthcare and CDK Global breaches, highlight the critical importance of securing the software supply chain. The supply chain represents a significant risk to every organization, given that a single vulnerability can have a domino effect that compromises the entire chain. These attacks can have devastating consequences, including data breaches, operational disruptions, regulatory penalties, and irreparable reputational damage.

CISA’s guide serves as an excellent foundation for organizations needing to implement a robust software supply chain security strategy. These best practices are particularly valuable for public companies required to report material cyberattacks to the SEC. The top three takeaways for organizations are:

1. Embrace radical transparency: CISA urges vendors to embrace radical transparency, providing a comprehensive and open view of their security practices, vulnerabilities, methodologies, data, and guiding principles.

2. Take ownership of security outcomes: Vendors must be accountable for the security outcomes of their software. By having visibility into both their own security posture and that of their vendors, organizations can identify vulnerabilities and take corrective actions.

3. Make security a team effort: Ensure that the organization’s security objectives are clearly defined and communicated to all employees. Cybersecurity should not be treated as an individual responsibility but rather as a company-wide priority, just like other critical business functions.

Mastering Vendor Assessments

Recent research from SecurityScorecard found that 99% of Global 2000 companies have been directly connected to a supply chain breach. These incidents can be extremely costly, with remediation and management costs 17 times higher than for first-party breaches. To mitigate these risks, organizations must prioritize thorough vendor assessments. Vendor assessments can be time-consuming, but they are just as important as ensuring your own company’s security. Several key processes to consider include:

Conduct regular vendor assessments: First and foremost, a vendor assessment doesn’t work if you only do it once in a blue moon. Continuously assess the security postures of your vendors to ensure that they comply with industry security standards and that their software does not expose your organization to vulnerabilities. This includes conducting regular security audits, reviewing vendor security practices, and assessing their incident response capabilities.

Demand secure-by-design products: Make “secure by design” a non-negotiable. Prioritize vendors who embed security into every phase of the product life cycle, ensuring it’s a core consideration from development to deployment, not an afterthought.

Implement strong vendor management policies: Develop a comprehensive vendor management policy that includes onboarding procedures, continuous monitoring, and guidelines for security expectations throughout the vendor relationship. This policy should outline the security requirements that vendors must meet and establish clear communication channels for reporting and addressing security issues.

Ensure limited access and privileges: Operate on a principle of least privilege with vendors. Grant them only the minimum access and permissions needed to fulfill their tasks. Overprovisioning access can widen your attack surface significantly. Implement robust access controls and conduct regular reviews to ensure only authorized personnel have access to sensitive systems and data.

Monitor for vulnerabilities and weaknesses: Actively monitor for new vulnerabilities in software provided by your vendors. Utilize automated tools to detect vulnerabilities and respond swiftly to reduce exposure. Stay informed about emerging threats and industry best practices to ensure your organization is prepared to address new challenges.

Securing the Future of the Supply Chain

The supply chain breaches at Change Healthcare and CDK Global demonstrate the devastating consequences of neglecting software supply chain security. These attacks can result in billions of dollars in losses, months of operational disruption, irreparable damage to reputation, legal ramifications, regulatory fines, and loss of customer trust. Moreover, recovery efforts, such as forensic investigations and system restorations, require substantial resources.

Collaboration is important in any industry, but in today’s age of increasing nation-state threat actors and even individual hackers in their parents’ garages, collaboration and information sharing among cybersecurity professionals is vital. By aligning with Secure by Demand principles, utilizing continuous monitoring, and implementing a culture of transparency, organizations can strengthen their defenses and significantly reduce the risk of supply chain attacks.
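As one hedged illustration of the monitoring item above, the Python sketch below polls the public NVD CVE API (v2.0) for entries matching a vendor keyword. The keyword "example-vendor" is a placeholder; a production job would add an API key, paging, and rate-limit handling.

# Sketch: query the public NVD CVE API (v2.0) for recent entries
# mentioning a vendor. "example-vendor" is a placeholder keyword.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves_for(keyword: str, limit: int = 20) -> list[str]:
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json().get("vulnerabilities", [])
    return [item["cve"]["id"] for item in items]

if __name__ == "__main__":
    for cve_id in recent_cves_for("example-vendor"):
        print(cve_id)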


How to Create an Enterprise-Wide Cybersecurity Culture

As the threat landscape grows, investment in cybersecurity training and awareness programs is expanding rapidly. The reason is simple — cybersecurity’s weak link is people and how they behave. It’s a challenge that many experts now believe can only be resolved through an enterprise-wide culture change.

Prioritizing cybersecurity and building an enterprise-wide cybersecurity culture is essential, says Jennifer Sullivan, a principal in Deloitte’s cyber strategy practice. In an era of rapid technological evolution, cyber threats pose significant risks to organizations’ operations, reputation, and financial stability. “Cultivating a culture of continuous education and awareness empowers every employee to take ownership of cybersecurity, supporting sustainable growth and innovation,” she states in an email interview. “By prioritizing cybersecurity, potential vulnerabilities can be transformed into strategic strengths, ensuring a long-term culture of resilience and trust both inside and outside the organization.”

Getting Started

The first step in creating an enterprise-wide cybersecurity culture is building a comprehensive policy that establishes what’s considered right and wrong. “This policy should be clear, well-documented, and easily accessible to everyone in the organization,” advises Erez Tadmor, field CTO at security policy management company Tufin, in an online interview. The policy should outline network security rules, such as access controls and data communication standards, setting the foundation for expected behaviors, he explains. “When all security teams align with these guidelines, it fosters a sense of unity and responsibility that becomes ingrained in the company’s culture.”

Promote ownership of cybersecurity functions, recommends Amanda Satterwhite, Accenture Federal Services’ managing director of cyber mission and enablement. This goal can be most effectively achieved by assigning security roles and responsibilities across various levels and teams within the organization, she notes via email. Rewards and recognition are also important. “Reward employees who demonstrate strong cybersecurity practices and who willingly take the time to report potential threats through vigilance.”

Make cybersecurity a factor in each employee’s annual performance review, Satterwhite advises. “This ensures that individuals clearly understand what’s personally expected from them,” she says. “Setting minimum security performance goals for each individual fosters a culture of accountability and shared responsibility.”

Cybersecurity culture planning requires a cross-organizational effort. While the CISO or CSO typically leads, the tone must be set from the top with active board involvement, Sullivan says. “The C-suite should integrate cybersecurity into business strategy, and key stakeholders from IT, legal, HR, finance, and operations must collaborate to address an ever-evolving threat landscape.” She adds that engaging employees at all levels through continuous education will ensure that cybersecurity becomes everyone’s responsibility.

Culture Building

Liberty Mutual Insurance builds its cybersecurity culture with “Responsible Defenders,” a culture-based awareness initiative designed to educate the firm’s 45,000 global employees about their role as frontline guardians against cyberattacks.
“The program aims to educate employees about their responsibility to keep sensitive customer, employee, and company information secure,” says Jill Areson-Perkins, a cybersecurity manager at Liberty Mutual Insurance, in an online interview. The program’s goal is to keep employees engaged throughout the year with social engineering exercises, gamification tactics, blog posts, videos, and online training and events. “As the cyber threat landscape continues to evolve, we regularly update and enhance our training and education.”

Liberty Mutual also fosters a cybersecurity environment by deploying exercises that use real phishing emails as templates. Employees who fail the exercise are given real-time training that highlights the rogue emails’ suspicious components. “We also provide a ‘Friends and Family Cyber Guide’ for employees to share externally.” The guide offers tips on topics such as ‘phishy’ emails, password management, and social media privacy, Areson-Perkins says. “By actively engaging every employee, as well as senior leaders and business partners across the company, we cultivate a culture where everyone feels empowered to safeguard the company.”

Final Thoughts

A big mistake many organizations make is treating cybersecurity as a separate initiative disconnected from the organization’s core mission, Sullivan says. “Cybersecurity should be recognized as a critical business imperative that requires board and C-suite-level attention and strategic oversight.”

Creating a healthy network security culture is an ongoing process that involves continuous learning, adaptation, and collaboration among teams, Tadmor says. This requires more than just setting policies — it’s also about integrating security practices into daily routines and workflows. “Regular training, open communication, and real-time monitoring are essential components to keep the culture alive and responsive to emerging network threats,” he says. “By making network security a shared responsibility across the organization, companies can build a resilient and adaptive security posture.”

Seek clarity and openness, Satterwhite suggests. “One of the biggest mistakes in building a cybersecurity culture is adopting industry buzzwords that don’t resonate with employees,” she explains. Use company-aligned terms in internal campaigns that promote the importance of securing the company’s mission. “Make sure that the messaging is clear and understandable at every level of the organization.”


9 Cloud Service Adoption Trends

As the competitive landscape changes and the mix of available cloud services continues to grow, organizations are moving deeper into the cloud to stay competitive. Many are adopting a cloud-first strategy.

“Organizations are adopting more advanced, integrated cloud strategies that include multi-cloud environments and expanded services such as platform as a service (PaaS) and infrastructure as a service (IaaS),” says Bryant Robinson, principal consultant at management consulting firm Sendero Consulting. “This shift is driven by increasing demands for flexibility, scalability, and the need to support emerging technologies such as remote collaboration, real-time data processing and AI-powered diagnostics.”

Recent surges in cyberattacks have also accelerated these changes, highlighting the need for adaptable digital infrastructure to ensure continuity of business processes, enhance user accessibility, and protect sensitive customer data.

“Companies that are succeeding with cloud adoption are investing in improved security frameworks, focusing on interoperability, and leveraging cloud-native tools to build scalable applications,” says Robinson. “In addition, certain industries have to prioritize technology with regulation and compliance mechanisms that add a level of complexity. Within healthcare, for example, regulations like HIPAA are [considered] and prioritized through implementing secure data-sharing practices across cloud environments.”

However, some organizations struggle with managing multi-cloud complexity and the resulting inability to access, share, and seamlessly use data across those environments. Organizations may also lack the in-house expertise needed to implement and operationalize cloud platforms effectively, leading to inefficient use of resources and potential security risks.

“Organizations should develop a clear, long-term cloud strategy that aligns with organizational goals, focusing on interoperability, scalability, and security. Prioritize upskilling IT teams to manage cloud environments effectively and invest in disaster recovery and cybersecurity solutions to protect sensitive customer data,” says Robinson. “Embrace multi-cloud approaches for flexibility, simplifying management with automation and centralized control systems. Finally, select cloud vendors with a strong track record and expertise in supporting compliance within heavily regulated environments.”

Following are more trends driving cloud service shifts.

1. Innovation

Previously, the demand for cloud data services was largely driven by flexibility, convenience, and cost, but Emma McGrattan, CTO at Actian, a division of HCL Software, has seen a dramatic shift in how cloud data services are leveraged to accelerate innovation.

“AI and ML use cases, specifically a desire to deliver on GenAI initiatives, are causing organizations to rethink their traditional approach to data and use cloud data services to provide a shortcut to seamless data integration, efficient orchestration, accelerated data quality, and effective governance,” says McGrattan. “[The] successful companies understand the importance of investing in data preparation, governance, and management to prepare for GenAI-ready data. They also understand that high-quality data is essential, not only for success but also to mitigate the reputational and financial risks associated with inaccurate AI-driven decisions, including the very real danger of automating actions based on AI hallucinations.”

The advantages of embracing these data trends include accelerated insights, enhanced customer experiences, and significant gains in operational efficiency. However, substantial challenges persist. Data integration across diverse systems remains a complex undertaking, and the scarcity of skilled data professionals presents a significant hurdle. Furthermore, keeping pace with the relentless acceleration of technological advancements demands continuous adaptation and learning. Successfully navigating these challenges requires sound data governance.

“My advice is to focus on encouraging data literacy across the organization and to foster a culture of data curiosity,” says McGrattan. “I believe the most successful companies will be staffed with teams fluent in the language of data and empowered to ask questions of the data, explore trends, and uncover insights without encountering complexity or fearing repercussions for challenging the status quo. It is this curiosity that will lead to breakthrough insights and innovation because it pushes people to go beyond surface-level metrics.”

2. Cloud computing applications

Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption. They’re also more focused on the security and compliance of cloud systems and how providers validate and ensure data protection.

“Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements,” says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. “Customers across industries are very cost-conscious. They want technology that’s good, safe and secure at a much cheaper rate.”

Providers are now shifting to more container-based or serverless workloads to control cost, because these allow providers to scale up to meet the needs of customer activity while scaling back when systems are not heavily utilized.

“You want to unload as many apps as possible to vendors whose main role is to service those apps. That hasn’t changed. What has changed is how much they’re willing to spend on moving forward on their digital transformation objectives,” says Milewski.

3. Artificial intelligence and machine learning

There’s a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations now have to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.

“The most successful organizations are maintaining disciplined focus on cloud ROI while exploring AI capabilities. They’re treating AI services as part of their broader cloud fabric rather than isolated initiatives, ensuring that investments align with actual business value rather than just chasing the next shiny object,” says Campos. “[However,] many organizations are falling into the trap of making strategic cloud provider commitments based on current AI capabilities without fully understanding
