
Is Open Source a Threat to National Security?

Open-source software is a lifesaver for startups and enterprises alike as they attempt to deliver value to customers faster. While open source is no longer viewed with the suspicion it once drew in business settings, its very openness leaves it exposed to poisoning by bad actors.

“Open-source AI and software can present serious national security risks — particularly as critical infrastructure increasingly relies on them. While open-source technology fosters rapid innovation, it doesn’t inherently have more vulnerabilities than closed-source software,” says Christopher Robinson, chief security architect at the Open Source Security Foundation (OpenSSF). “The difference is open-source vulnerabilities are publicly disclosed, while closed-source software may not always reveal its security defects.”

Incidents such as the XZ-Utils backdoor earlier this year demonstrate how sophisticated actors, including nation-states, can target overextended maintainers to introduce malicious code. However, the XZ-Utils backdoor was stopped because the open-source community’s transparency allowed a member to identify the malicious behavior.

“At the root of these risks are poor software development practices, a lack of secure development training, limited resources, and insufficient access to security tools, such as scanners or secure build infrastructure. Also, the lack of rigorous vetting and due diligence by software consumers exacerbates the risk,” says Robinson. “The threats are not limited to open source but extend to closed-source software and hardware, pointing to a broader, systemic issue across the tech ecosystem. To prevent exploitation on a national level, trust in open-source tools must be reinforced by strong security measures.”

Open Source: Get What You Paid For?

A primary threat is the lack of support and funding for open-source maintainers, many of whom are unpaid volunteers. Organizations often adopt open-source software without vetting its security, assuming volunteers will manage it.

Another often overlooked issue is conflating trust with security. Simply being a trusted maintainer doesn’t ensure a project’s security. Lawmakers and executives need to recognize that securing open source demands structured, ongoing support.

“AI systems, whether open or closed source, are susceptible to prompt injection and model training tampering. OWASP’s recent top 10 AI threats list highlights these threats, underscoring the need for robust security practices in AI development. Since AI development is software development, it can benefit from appropriate security engineering,” says Robinson. OWASP is the Open Worldwide Application Security Project. “Without these practices, AI systems become highly susceptible to serious threats. Recognizing and addressing these vulnerabilities is essential to a secure open-source ecosystem.”

At the company level, boards and executives need to understand that using open-source software requires effective due diligence, ongoing monitoring, and contributing back to its maintenance. This includes adopting practices like creating and sharing software bills of materials (SBOMs) and providing resources to support maintainers. Fellowship programs can also provide sustainable support by involving students or early-career professionals in maintaining essential projects. These steps will create a more resilient open-source ecosystem, benefiting national security.
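As a hedged illustration of the SBOM practice just described, the sketch below inventories the components recorded in a CycloneDX-style JSON SBOM, a natural first step for vetting dependencies. The file name is hypothetical, and the field layout assumes the CycloneDX JSON format rather than any particular vendor's output.

```python
# A minimal SBOM inventory: parse a CycloneDX-style JSON SBOM and list
# each component for review. Adapt the field names for SPDX or other
# SBOM formats as needed.
import json

def list_components(sbom_path: str) -> None:
    """Print every component recorded in a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    for component in sbom.get("components", []):
        name = component.get("name", "<unknown>")
        version = component.get("version", "<unversioned>")
        purl = component.get("purl", "")
        print(f"{name} {version} {purl}")

if __name__ == "__main__":
    list_components("sbom.json")  # hypothetical path
```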
“Mitigating threats to open source requires a multifaceted approach that includes proactive security practices, automated tools, and industry collaboration and support. Tools like OpenSSF’s Scorecard, GUAC, OSV, OpenVEX, Protobom, and gittuf can help identify vulnerabilities early by assessing dependencies and project security,” says Robinson. “Integrating these tools into development pipelines ensures that high-risk issues are identified, prioritized and addressed promptly. Additionally, addressing sophisticated threats from nation-states and other malicious actors requires collaboration and information-sharing across industries and government.”

Sharing threat intelligence and establishing national-level protocols will keep maintainers informed about emerging risks and better prepared for attacks. By supporting maintainers with the right resources and fostering a collaborative intelligence network, the open-source ecosystem can become more resilient.

Infrastructure Is at Risk

While the widespread use of open-source components accelerates development and reduces costs, it can expose critical infrastructure to vulnerabilities.

“Open-source software is often more susceptible to exploitation than proprietary code, with research showing it accounts for 95% of all security risks in applications. Malicious actors can inject flaws or backdoors into open-source packages, and poorly maintained components may remain unpatched for extended periods, heightening the potential for cyberattacks,” says Nick Mistry, CISO at software supply chain security management company Lineaje. “As open-source software becomes deeply embedded in both government and private-sector systems, the attack surface grows, posing a real threat to national security.”

To mitigate these risks, lawmakers and C-suite executives must prioritize the security of open-source components through stricter governance, transparent supply chains, and continuous monitoring.

Dependencies Are a Problem

Open-source AI and software carry unique security considerations, particularly given the scale and interconnected nature of AI models and open-source contributions.

“The open-source supply chain presents a unique security challenge. On one hand, the fact that more people are looking at the code can make it more secure, but on the other hand, anyone can contribute, creating new risks,” says Matt Barker, VP & global head, workload identity architecture at machine identity security company Venafi, a CyberArk Company. “This requires a different way of thinking about security, where the very openness that drives innovation also increases potential vulnerabilities if we’re not vigilant about assessing and securing each component. However, it’s also essential to recognize that open source has consistently driven innovation and resilience across industries.”

Organizational leaders must prioritize rigorous evaluation of open-source components and ensure safeguards are in place to track, verify, and secure these contributions.

“Many may be underestimating the implications of mingling data, models, and code within open-source AI definitions. Traditionally, open source is applied to software code alone, but AI relies on various complex elements like training data, weights and biases, which don’t fit cleanly into the traditional open-source model,” says Barker. “By not distinguishing between these layers, organizations may unknowingly expose sensitive data or models to risk. 
Additionally, reliance on open source for core infrastructure without robust verification procedures or contingencies can leave organizations vulnerable to cascading issues if an open-source component is compromised.”

Thus far, the US federal government has not imposed limits on the use of open-source software.
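Several of the tools Robinson names expose programmatic interfaces that teams can wire into their pipelines. As one hedged example, the sketch below queries the OSV.dev vulnerability database for a single package version; the endpoint and request shape follow OSV's published v1 API, but verify them against current documentation before depending on this.

```python
# Query OSV.dev for known vulnerabilities affecting one package version.
# The endpoint and request shape follow the OSV v1 API as documented;
# confirm against current docs before wiring this into a pipeline.
import json
import urllib.request

def osv_query(name: str, version: str, ecosystem: str = "PyPI") -> list:
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    for vuln in osv_query("requests", "2.19.0"):
        print(vuln.get("id"), "-", vuln.get("summary", ""))
```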


How Learning to Fly Made Me a Better Cybersecurity CEO

COMMENTARY

As a child, airplanes fascinated me — I was taken by their gravity-defying magic, their technical wonders, their sleek designs, and the adventures they unlocked. I dreamed of flying one myself. Although I pursued a career in cybersecurity, flying always inspired me — so I chased my lifelong dream of becoming a licensed pilot. I continue to fly light aircraft in the little spare time I get alongside my role as the CEO of a leading cyber-risk management company.

Always Have Backup

A recent experience prompted me to think more closely about the interplay between my two passions.

Not long ago, I completed an advanced course for pilots of twin-engine planes. Previously, I had only flown planes with one engine, which is a risk: If the engine malfunctions, you’re in big trouble.

In the final training session, we practiced different responses in the event of an engine breaking down. As our instructor walked us through different tactics, one thought went through my mind: the critical need for a “defense in depth” approach to security. Just as the smooth functioning of an airplane relies on multiple mechanisms supporting one another, a modern cybersecurity platform also leverages numerous defensive techniques, so that if a threat slips through one layer, it will be caught by another.

That was when I realized: While aviation and cybersecurity may appear as far apart as the heavens and earth, the skills I’ve learned from flying have profoundly influenced my career.

Know Your Environment

Even at the beginning of my career, as a junior systems analyst and IT team manager, I understood that an organization’s cybersecurity posture is much broader than any single tool or platform. Effective cybersecurity requires a thorough understanding of the operating environment and all the tools therein. Before an organization can identify vulnerabilities and secure itself against attacks, it needs a complete understanding of its internal and external assets, digital surfaces, devices, brand assets, and more.

Likewise, becoming a pilot not only required me to master the practical skills of navigating an aircraft through various conditions but also necessitated a deep understanding of the equipment on board. Flying without a confident grasp of my instruments or expected flight environment is like playing Russian roulette: potentially fine … or lethal.

In cybersecurity, just as in aviation, one can never be passive. Full visibility into a technology environment is required to be able to manage risks, quickly adjust course, identify and communicate issues, and fix those issues under pressure.

Continuous Learning and Testing

In the modern cybersecurity landscape, threats are always evolving, and hackers are constantly honing their skills. That’s why I ensure my company continuously tests its defenses and my employees constantly learn new skills to keep pace with the rapidly changing threat landscape.

During a recent performance review with one of my direct reports, the employee suggested that some of our threat simulations and training sessions were so time-consuming that they prevented his team from carrying out other deliverables. I acknowledged that learning and testing take up a lot of time, but doubled down on the importance of learning from past incidents to understand future threats and tactics. A cybersecurity company that prioritizes this will serve its customers better in the long run, even if it means a routine report or product update will be slightly delayed.
Muscle Memory and Task Execution

A little-known insight into a pilot’s mindset: When landing my aircraft, I barely think about what I am doing. That’s because I have practiced and repeated the same maneuver hundreds of times, making complex tasks feel like second nature.

It’s just as vital to develop this sort of muscle memory among security professionals. Security teams should regularly practice routine protocols for any scenario. Conducting tabletop exercises and attack simulation drills allows teams to react quickly and effectively when a real threat emerges.

By promoting constant preparedness, I aim to ensure that my teams can execute the best course of action without hesitation, even in high-pressure situations.

Small Issues Become Big Ones

After flying for a few years, I felt like I’d finally memorized the dozens of separate tasks that form part of a pre-flight checklist. In reality, I’d started to prioritize — I knew that I’d always have to check whether there was enough fuel in the tank to complete the journey, but making sure each seatbelt on the plane was fastened correctly seemed secondary.

One time, I experienced a particularly bumpy landing. I asked a fellow pilot why that might have occurred, and he suggested checking the air pressure in the tires. I took a look and realized that I’d completely forgotten to check the tires before the flight. A tire low on air won’t cause the plane to fall from the sky, but landing on a flat tire can be extremely dangerous. If a flat tire hits the runway, it could burst and send the plane swerving. Incidents like this can easily be avoided — by running through the correct procedures to identify any small issue before it becomes a big one.

In cybersecurity, small vulnerabilities in a system can easily be overlooked and are therefore ripe for exploitation. In short, cybersecurity is not just about responding to attacks — it’s about mitigating risks before they can cause damage. By implementing best practices and checklist procedures, security teams can do just that.

The Sky’s the Limit

The lessons I’ve learned soaring through the skies have extended far beyond the runway. Learning from my mistakes and internalizing the discipline it takes to be a pilot have not only allowed me to lead my company with clarity and resilience but also given me a new perspective on the ever-evolving landscape of cybersecurity. Incorporating these lessons into the flight plan of my professional life has helped foster a culture of continuous improvement at our workplace, which ultimately has helped our customers.


What Enterprise IT Predictions Actually Mattered in 2024?

The end of each year brings a swarm of predictions, prognostications, and tea leaf readings on what technology is expected to take off in the year to come. While there is no doubt technology will continue to evolve and transform the enterprise world, not every forecast will hold water, and some might even distract organizations from relevant developments. Rather than pore over predictions for 2025, DOS Won’t Hunt gathered a panel to discuss what actually mattered to companies in 2024 versus the expectations that started the year.

This episode brought together Rocky Cole, co-founder and COO of iVerify; Bogdan Raduta, head of AI with FlowX.AI; Bryan Wood, machine learning solutions engineer at Snorkel AI; Dave Merkel, CEO and co-founder of Expel; Alvaro Oliveira, chief talent officer with Andela; and John Peluso, chief technology officer at AvePoint.

How did predictions on technology talent and the workforce play out in 2024? Did anything develop in the AI world that met or exceeded predictions? Did cybersecurity get struck with unexpected developments? Were there any surprises in technology that eclipsed predictions? Listen to the full podcast here.


Why Smart Security and Resiliency Matter for Critical Infrastructure

The fragility of our infrastructure has been on full display. Recently, a cyber incident at the Port of Seattle and Seattle-Tacoma International Airport highlighted the ongoing vulnerabilities we face and how critical infrastructure disruptions or outages can impact crucial avenues for travel, logistics, and even power.

This incident is just one example of the growing cyber threat to our digital infrastructure. A recent KnowBe4 report found critical infrastructure faced a 30% increase in cyberattacks in just one year, showing how attacks on the outdated frameworks that support vital sectors can disrupt the essential services we rely on, such as energy, water, transportation, healthcare, and finance. Moreover, according to a recent report, 44% of critical IT infrastructure is approaching end-of-life — meaning almost half of the world’s most critical infrastructure is more vulnerable to cyberattacks and at higher risk for prolonged outages.

We face this reality because technology vendors routinely retire legacy systems as new ones are developed, eventually ceasing to provide necessary updates and security patches for those aging offerings. This ongoing erosion of support leaves those systems vulnerable to every manner of attack. For example, Microsoft ended support for Windows Server 2008 in 2020; at the time, the company estimated that 60% of its user base was still running the unsupported software. Since then, Microsoft has reported hundreds of new vulnerabilities every year.

Many public- and private-sector organizations continue to rely on legacy technologies, having made the difficult decision to leave outdated systems in place as other initiatives compete for capital. But running essential services on legacy systems constitutes a major risk, as these systems lack the modern encryption standards needed to defend against increasingly complex cyberattacks. These problems will only get worse when quantum computing makes code-cracking almost trivially easy.

Compounding these risks is the challenge of finding staff with the expertise and skills to manage legacy systems. The only thing harder than hiring someone who knows a dying programming language or system is convincing an employee to invest the time to learn it.

When organizations do step up to the challenge of replacing outdated systems, they tend to focus on IT assets — the systems and technologies responsible for data storage, processing, and transmission. Operational technology (OT) — the systems and devices that oversee and regulate physical operations such as manufacturing, refining, and distribution — typically does not receive the same level of attention or investment. Because they are so commonly overlooked, these critical OT systems tend to be antiquated, non-standardized, complex, and unsecured. The fact that legacy OT systems have become increasingly interconnected with IT networks and connected to the internet significantly raises the exposure of all systems to cyber threats. An attack on any one of these OT systems could jeopardize product safety, inflict physical harm, or massively disrupt supply chains.

The problem of vulnerable legacy systems does not stop at the software level. In many ways, the underlying hardware on which these systems run presents even greater risk.
Hardware flaws expose all of an organization’s electronics to attack, since they affect systems at a base level and vastly increase the number of available targets. And while organizations can address software flaws with a patch or update, hardware vulnerabilities are harder to find and cannot be overwritten as simply. Given the potential of these types of attacks to wreak havoc, we expect them to increase over time.

In addition, the growing power and presence of AI is a double-edged sword. On one hand, AI will empower bad actors to create even more sophisticated attack strategies. On the other, AI can significantly help businesses and governments protect their vital systems. Enterprise AI can deliver a variety of capabilities, from automating processes to providing sophisticated data analytics and actionable insights. For instance, AI systems can scan IT and OT systems for weaknesses to pinpoint vulnerabilities and recommend remedial actions. They can also conduct cyber risk assessments by sifting through historical and contemporary data on cyber incidents to forecast the likelihood and outcomes of future events.

But even the vast power of AI will not address the fundamental issue of our widespread dependence on outdated, unsecured IT and OT systems. Achieving a more secure posture will require a more determined and holistic approach. This must include a concerted effort by governments and businesses to establish security standards for digital infrastructure and the reporting of cyber incidents; a systematic review of the legacy systems that underpin crucial functions; a strict policy of managing the lifecycle of hardware assets; and a longer-term roadmap for modernizing aging systems.

Organizations also must embrace a new mindset as they go about the task of better securing critical digital assets. The current tendency is to focus on securing, defending and protecting systems against threats. But given the rapidly increasing sophistication and number of attacks, along with the widening attack surface of digital infrastructure, a security-only mindset is insufficient. We must intensify our focus on assuring organizational resilience — the capacity to recover from the inevitable disruptions. As we embark on the essential work of updating our physical and digital systems, we also must build our ability to get back on our feet and keep moving forward.
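To make the forecasting idea above slightly more concrete, here is a deliberately simple sketch that turns historical monthly incident counts into a probability of at least one incident next month, assuming a Poisson model. Real AI-driven risk assessments would draw on far richer data and methods; this only illustrates the shape of the task.

```python
# Toy cyber risk assessment: estimate a monthly incident rate from
# historical counts and compute the probability of at least one incident
# next month under a Poisson assumption. Illustrative only.
import math

def incident_probability(monthly_counts: list[int]) -> float:
    """P(at least one incident next month) under a Poisson model."""
    rate = sum(monthly_counts) / len(monthly_counts)  # mean incidents/month
    return 1.0 - math.exp(-rate)

if __name__ == "__main__":
    history = [0, 1, 0, 2, 1, 0, 0, 1, 3, 0, 1, 1]  # hypothetical data
    print(f"P(>=1 incident next month) = {incident_probability(history):.2%}")
```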


How AI Drives Results for Data-Mature Organizations

Every organization wants to optimize operational processes while keeping costs low. That’s why artificial intelligence has exploded onto the global stage in recent years. Companies see the promise of powerful automation tools, data suggestion and response systems, and the generative capabilities of platforms like OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude, and they want to add them to their ever-growing toolbelt.

In the construction industry, we’re seeing clients lean on AI tools to summarize and track punch list items as they complete projects, brainstorm ideas for request for information (RFI) drafts, and handle other review and analytical tasks that significantly speed up the process from initial design to final build. AI is an immense boon, and I recommend that any company looking to improve the quality of its offerings implement AI where it can.

However, I also ask these organizations a simple question: “Will the data you have right now provide the results you’re looking for?”

Improving data quality isn’t as flashy as choosing the latest large language model, but it’s critical to AI’s efficacy — ensuring that the information you get is helpful for your business and customers. It requires a foundational shift in how your business treats its data, from understanding the link between data and results to internalizing that information and transforming your business with data-centric policies and processes in mind.

How Data Quality Affects AI Results

The quality of the data you use determines the output you get from AI models. Without high-quality data, you’re effectively causing your business to leave productivity gains (and potential profits) on the table. Even the most powerful AI algorithms can only do so much if your data is inaccurate or inconsistent. Twenty-five percent of the highest-quality data isn’t available for public use, so to ensure the best possible results, it is essential to prioritize quality organization-specific data when training AI models.

When investigating the quality of your organization’s data, keep an eye out for the following critical flaws that can lead to poor AI results:

Incomplete data. Records lack critical information, or spreadsheets are missing data values throughout. If this data doesn’t exist, your analytical tools won’t have the comprehensive insight they need to provide actionable results.

Inconsistent data. Different regional methods for calculating and storing data may create incompatibilities when collating data into a single repository. These misalignments can lead to confusion in data processing and contribute to errors in output.

Duplicate data. Multiple databases exist to store the same information, creating data clutter that is difficult to sift through. Not only does this increase the storage required to maintain this data, but it also raises the cost of processing it through AI, significantly reducing operational efficiency.

Delays in data production. Inefficient processes increase the lead time between data gathering, cleaning, and usage, making the information you have obsolete before you can leverage it to benefit your business.

How Data-Mature Organizations Approach Data Quality

Once you understand what the potential flaws in your data might look like, you can correct them. In my experience, organizations that address the root causes of their data quality problems are poised to get the most out of AI integrations.
The most successful of them share four common characteristics:

1. They view their data as an asset

Analysts expect global data storage to reach 200 zettabytes by 2025. Every organization will store and process data as a part of day-to-day operations. However, the ones that get the best results from their AI models understand that data isn’t just something that takes up digital space — it’s an asset to be grown and cultivated with a steady hand.

That means managing data isn’t just a problem for your IT department to deal with. It needs buy-in from key stakeholders, preferably by building it into your organization’s structure at the executive level. Doing so will help you develop more effective solutions tailored to your organization’s unique needs. It will also help you leverage this data throughout the business to improve your product offerings, up to and including any AI models you use.

Automated data collection is a critical means for maximizing the value of data. Manual input can be erroneous, time-consuming, and limited in scope. Most companies that need accurate information for decision-making do this effectively by establishing as many guardrails as possible, such as suggestions, likely values, third-party information, and standard drop-down lists.

2. They build standardized processes

First-party data is a key market differentiator. When properly sourced and vetted, high-quality data can give you a competitive advantage with insight no other organization can access. As a result, your data needs to be gathered from trusted sources and processed uniformly so it’s accessible throughout the organization and secured against cyberattacks.

Standardized processes help organizations achieve these goals. They ensure consistency and accuracy regardless of how data is ingested, which department gathers it, or who uses it. They also help to break down silos and improve intra-organizational accessibility, which is crucial for developing a genuinely unified and holistic approach to data storage. Build these processes into data gathering, training, and usage protocols, and enforce them across the organization to see better results.

3. They install comprehensive data governance policies

AI models need to be trained on data to provide accurate results. However, this data can be exposed (whether unintentionally or through malicious activity) without effective data governance policies in place.

These policies dictate how data can be used and whether it can be exposed to AI models for training or excluded in part or in whole. They also help your models align with required industry and governmental security and privacy regulations. Improving and enforcing your organization’s data governance policies will enhance its security stance while improving the overall quality of its output.

4. They invest in employee training

Less than half of executives believe their leaders have the knowledge to use AI safely and effectively. Investing in training at every level of the organization helps close that gap.
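Returning to the data-quality flaws listed earlier (incomplete, inconsistent, and duplicate records), here is a minimal sketch of what automated checks might look like, assuming tabular data in pandas. The column names and sample values are hypothetical.

```python
# Audit a table for the data-quality flaws described above: incomplete
# records, inconsistent values, and duplicates. Column names are
# hypothetical; adapt to your schema.
import pandas as pd

def audit_data_quality(df: pd.DataFrame) -> None:
    # Incomplete data: missing values per column.
    missing = df.isna().sum()
    print("Missing values per column:\n", missing[missing > 0])

    # Duplicate data: fully repeated rows.
    print("Duplicate rows:", int(df.duplicated().sum()))

    # Inconsistent data: e.g., the same category stored with mixed
    # casing or stray whitespace.
    if "region" in df.columns:
        variants = df["region"].dropna().astype(str)
        print("Region spellings:", sorted(variants.str.strip().str.lower().unique()))

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2, 2, 3],
        "region": ["West", "west ", "west ", None],
        "revenue": [100.0, 250.0, 250.0, None],
    })
    audit_data_quality(sample)
```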


Preparing for AI-Augmented Software Engineering

AI-augmented software engineering integrates artificial intelligence technologies into the software development lifecycle, enhancing and automating various tasks with the goal of improving efficiency, accuracy, and productivity. With the support of AI agents and tools, this advanced approach promises to accelerate the entire software engineering process.

Over the past several years, we’ve seen software developers use AI embedded in GitHub Copilot, Anthropic’s Claude, ChatGPT, and other tools to help write code, says Steve Hall, chief AI officer at technology research and advisory firm ISG. “We’re now seeing AI agents with advanced capabilities to help prioritize features and functions, write and test code, implement advanced security code and help deploy code,” he explains in an email interview.

Less Effort/Faster Results

AI-augmented software engineering reduces the effort needed to develop software, Hall says. “AI algorithms can be tuned for conditions that allow for more efficient, resilient and secure code.” He also notes that ISG research shows a 40% improvement in code quality when AI-augmented software engineering is used.

Brett Smith, distinguished software developer at analytics software firm SAS, says that AI-augmented software engineering has the potential to revolutionize software development. The approach can help developers write better code faster while identifying and fixing vulnerabilities, he explains in an online interview. Its speed can also help organizations detect and respond to security incidents faster. “In short, AI-augmented software engineering has the potential to make software more secure, reliable, and efficient.”

AI-augmented approaches will free software engineers to focus on tasks that require critical thinking and creativity, predicts John Robert, deputy director of the software solutions division of the Carnegie Mellon University Software Engineering Institute. “A key potential benefit that excites most enthusiasts of AI-augmented software engineering approaches is efficiency — the ability to develop more code in less time and lower the barrier to entry for some tasks.” Teaming humans and AI will shift the attention of humans to the conceptual tasks that computers aren’t good at while reducing human error from tasks where AI can help, he observes in an email interview.

Thanks to recent advances in generative AI capabilities and the availability of several large language models, generative AI-enabled software engineering is becoming much more pervasive, says Akash Tayal, principal and cloud engineering offering leader with Deloitte Consulting. “Recent GenAI models have proven to be efficient in automating many software engineering tasks while improving accuracy, which is a significant advancement in the field of software engineering,” he observes via email.

With the advent of generative AI, we’re seeing a seismic change in how AI is impacting software engineering, says Srini Iragavarapu, director of generative AI applications and developer experiences at Amazon Web Services. “Now, large language models are more accessible through services … so organizations and software providers can more easily build generative AI-powered software development applications,” he says in an online interview.

Better, Faster, Cheaper

GenAI offers improved productivity, faster time-to-market, cost efficiency, and improved code quality, Tayal says.
“Enterprises can automate repetitive software engineering tasks using AI technologies in coding, testing, and bug fixing, as well as more complex tasks, while applying engineering standards and preferred practices to help drive better software quality.”

Hall notes that GenAI can access vast amounts of data to analyze market trends, current user behavior, customer feedback, and usage data to help identify key features that are in high demand and have the potential to deliver significant value to users. “Once features are described and prioritized, multiple agents can create the software program’s components.” This approach breaks down big tasks into multiple activities with an overall architecture. “It truly changes how we solve complex issues and apply technology.”

“If you think of the full software development lifecycle — planning what you want to build, creating code, maintaining code, making sure you’re writing high-quality and secure code, deploying your code, and maintaining your production services — AI can accelerate and improve each of these steps,” Iragavarapu says.

Looking Forward

Hall advises software development team leaders looking to get started in AI-augmented software engineering to begin with a handful of pilot programs headed by creative engineers looking to push the IT envelope. “Enable them with the development tools and technology and then tune the process as they go,” he suggests. “This approach will enable different learnings from the various teams and spotlight where there are still weaknesses.”

Robert recommends fielding suggestions from development team members to identify areas where applying AI-augmented software engineering might prove helpful. “Using that information, start a small team to assess the risks and benefits, and begin with small experiments.”

Don’t expect rapid benefits, Hall warns. He notes it will likely take six to twelve months to train and tune the LLMs and processes to scale properly.
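As one concrete, hedged example of weaving AI into the development loop described above, the sketch below sends a diff to a large language model for a first-pass review. It uses the OpenAI Python SDK's chat-completions interface; the model name is illustrative, and any provider's comparable API could be substituted.

```python
# First-pass automated code review: send a diff to an LLM and print its
# feedback. Uses the OpenAI Python SDK (v1) chat-completions interface;
# the model name is illustrative and OPENAI_API_KEY is assumed to be set.
from openai import OpenAI

def review_diff(diff: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You are a careful code reviewer. Flag bugs, "
                        "security issues, and style problems in this diff."},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_diff = "--- a/app.py\n+++ b/app.py\n+password = 'hunter2'  # TODO"
    print(review_diff(sample_diff))
```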


What 'Material' Might Mean, and Other SEC Rule Mysteries

Dec. 15 will mark one year since the Securities and Exchange Commission began enforcing its landmark rule mandating that publicly traded companies disclose “material” cyber incidents. One year in, what have CISOs learned about defining “the ‘m’ word,” and what other unforeseen surprises have emerged?

Forrester principal analyst Jeff Pollard will dig into this in detail at the 2024 Forrester Security and Risk Summit, held Dec. 9-11 in Baltimore and online, in a session called “A CISO’s Life Preserver for SEC Disclosure Requirements” on Wednesday, Dec. 11. He gave InformationWeek a preview of that session, explaining a bit about what CISOs ought to know about materiality. (Good news: it’s less than you think.)


Defining an AI Governance Policy

Every company knows it needs an AI governance policy, but there’s scant guidance for creating one. What are the crucial issues, and how do you begin?

There are AI governance outlines and templates everywhere, but no one, not even regulators, government officials, or legal experts, knows every situation where AI will require guidance and governance. This lack of understanding is attributable to the newness of AI.

Since there is so much AI governance uncertainty, many companies are passing on defining governance, even though they are investigating and implementing AI in their businesses. I’m going to argue that companies don’t have to wait to define AI governance. They can begin with what they already know from privacy, anti-bias, copyright, and other regulations, and start by incorporating these known elements into an AI governance policy.

Here’s a summary of what we already know.

Privacy

Privacy laws can vary from state to state and from country to country. What we do know is that individuals have the right to personal privacy, and the right “to be left alone” under US law. Individual data is highly confidential, particularly in the healthcare and financial fields. Individuals must sign privacy statements agreeing to the sharing of their information with certain third parties, if the information is to be shared. Privacy policies also explain what information companies will protect.

Applying these basics to AI means that patient data, as one example, is likely to be anonymized if it is being grouped into a demographic of individuals with a propensity for a particular disease or condition. So, for a medical diagnostics AI system that is being used to arrive at a diagnosis for a particular patient, the AI algorithm can investigate summary data on patients that it has on file, but it can’t delve into the particulars of any one of the patients whose data has been aggregated, or it will risk violating the patient’s privacy rights.

Anti-Bias

Discrimination and bias are integral parts of employment law that should be formalized in AI governance. Organizations have already experienced AI miscues from bias by not populating their systems with sufficiently unbiased data, and by developing faulty algorithms and queries. The result has been seriously biased systems that returned inaccurate and embarrassing results. This is why diversity and inclusion should be integral to AI work teams, in addition to reviewing the data to ensure that it is as free from bias as possible.

Diversity applies to the makeup of AI teams, but it also applies to company departments. For instance, finance might want to know how to improve product profit margins, but sales might want to know how to improve customer loyalty, and engineering and manufacturing might want to know how to improve product performance so there are fewer returns. Collectively, all of these perspectives should be included in an AI analysis of customer satisfaction, or you risk getting biased and inaccurate results.

“One of the biggest risks in AI is the replication of existing societal biases. AI systems are only as good as the data they are trained on, and if that data reflects biased or incomplete worldviews, then AI’s outputs will follow suit,” noted Nichol Bradford, executive in residence for AI+HI at the Society of Human Resource Management.
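A minimal sketch of the kind of training-data review implied above: compare the distribution of a sensitive attribute in a training set against expected proportions and flag large gaps. The column name, reference proportions, and tolerance are all hypothetical.

```python
# Check a training set for demographic imbalance before it feeds an AI
# model. The column name and reference proportions are hypothetical.
import pandas as pd

def flag_imbalance(df: pd.DataFrame, column: str,
                   expected: dict[str, float], tolerance: float = 0.10) -> None:
    observed = df[column].value_counts(normalize=True)
    for group, target in expected.items():
        share = float(observed.get(group, 0.0))
        if abs(share - target) > tolerance:
            print(f"WARNING: {column}={group} is {share:.0%}, expected ~{target:.0%}")

if __name__ == "__main__":
    train = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "M", "M", "M"]})
    flag_imbalance(train, "gender", {"F": 0.5, "M": 0.5})
```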
Intellectual Property

Generative AI paves the way for others’ visual and word-based creations to be collected and repurposed, often without the company’s or the originator’s knowledge. For example, your company could enter into an agreement with a third-party vendor whose data you want to buy for your AI data repository. You cannot be sure how the third party obtained its data, or whether that data potentially violates copyright or intellectual property law.

The Harvard Business Review discussed this issue in 2023. It stated, “While it may seem like these new AI tools can conjure new material from the ether, that’s not quite the case … This process comes with legal risks, including intellectual property infringement. In many cases, it also poses legal questions that are still being resolved. For example, does copyright, patent, trademark infringement apply to AI creations? Is it clear who owns the content that generative AI platforms create for you, or your customers? Before businesses can embrace the benefits of generative AI, they need to understand the risks — and how to protect themselves.”

Unfortunately, it’s hard to understand what the risks are because intellectual property (IP) and copyright infringements in AI are just beginning to be challenged in the courts, and case law precedents have yet to be established.

Until legal clarifications can be made, it’s advisable for companies to initially draft governance guidelines for IP and copyrights stipulating that any vendor from whom data is purchased for use in AI must be vetted and must warrant that the data offered is free from copyright or IP risks. Internally, IT should also vet its own AI data for any potential IP or copyright infringement issues. If there is data that could pose an infringement problem, one approach is to license it.

Establishing AI Governance in the Organization

It will fall to the IT team to start the AI governance process. This process must begin with dialogues with the C-suite and the board. These key stakeholders must support the idea of AI governance in action as well as in words, because AI governance will affect employee behaviors as well as data and algorithm stewardship.

The most likely departmental AI “landing spots” must be identified, because those departments will be most directly responsible for subject matter expert input and AI model training, and they will need training in governance.

To do this, an interdepartmental AI governance committee that agrees to governance policies and practices should be formed, with committed executive leadership backing it.

AI governance policy development will be fluid because AI regulation is fluid, but organizations can begin with what they already know.


The Future of Risk Management: Why a Holistic Customer View is Key to Reducing Risk in 2025

A holistic customer view is at the center of future-proofed risk management for industry-leading banks. We’ve seen it power enterprise customer strategies, allowing for highly personalized customer experiences and selling opportunities. A holistic customer view can transform traditional banks’ greatest weakness (siloed data) into their greatest strength — a competitive asset that digital foes cannot match. It sounds easy. It’s not. Here are the challenges and opportunities we see.

Challenge #1: Most Banks Aren’t Structured for Personalization

It’s hard (if not impossible) to achieve customer-centricity if business lines cannot share customer information. That gold is hidden in silos across the enterprise, making lines of business easy prey for digital predators. Nimble disruptors effectively use personalized, niche applications to pick off business lines’ valued customers. Plus, when these lines of business don’t collaborate, valuable customer insights go missing in a disconnected data jungle, where critical context cannot be connected and leveraged for relationship building or sharper risk management.

Opportunities

We’ve seen leading banks harness the power of looking across lines of business, with each touchpoint bringing every customer into sharper focus. Key moments contain unique perspective and relevant insight. As new interactions add up over time, insights can be shared, resulting in a more accurate picture of each individual customer. By unifying these insights, it becomes possible to gain a detailed, 360-degree view into a customer’s current and future needs.

Challenge #2: The Linear Customer Journey No Longer Exists

Today’s customer journey isn’t what it used to be. Digital experiences can feel splintered and discordant when designed with linear, rigid workflows and myopic silos. These disconnected experiences can obscure customer opportunities and result in spotty risk management. Mutually beneficial, risk-aware customer relationships require real-time, precise personalization and decision strategies. When banks align with customers’ interests and put customers at the heart of every decision, it’s a win-win for all.

Opportunities

Leading organizations use platforms to power these decisions across all stages and lines of business, delivering optimal control over the customer journey. The benefits of powering a dynamic customer journey are:

Improved customer engagement and satisfaction, reduced attrition, and improved Net Promoter Scores.

Increased automation of marketing, credit line, pricing, and collections, resulting in reduced operational costs.

More precision, relevance, and consistency in decision-making.

Better understanding of customers and the ability to act on insights in real time.

Challenge #3: Your Best Customers Don’t Have Patience for Bad Digital Banking Processes

Good customers won’t go through bad processes — but fraudsters and higher-risk customers will. Want to aggravate your best customers and damage your conversion rates? Inundate your hard-won customers with generic, untimely offers through disconnected channels that feel like they’re coming from different banks. That hurts the customer experience, loyalty, retention, trust, and brand — and it’s expensive and inefficient.

Opportunities

Organizations need to build processes that can deliver fast, smart, cost-effective, and risk-aware offers to attract, win, and retain the right customers.
That means attracting and engaging risk-aware, responsible applications while deterring fraudulent or higher-risk ones. Forward-looking organizations should review their digital experiences and ensure that they delight customers. Many institutions are in catch-up mode and have cobbled together processes that resemble the digital version of form-filling. These processes are vulnerable to competitors — your conversion rates are suffering, and you may inadvertently cause adverse selection.

Challenge #4: Real-Time, Seamless, and Safe Account Opening Is (Really) Hard

Today’s customers are busy, impatient, and digitally very savvy. (Admit it: you are, too.) Don’t we all want seamless experiences, real-time decisions, and convenience that still feels personal? We’ve seen our clients set impressive new global benchmarks: two-minute, fraud-aware onboarding. That’s fast. Digital transactions increased more than 60% thanks to real-time onboarding.

Opportunities

To wrangle these challenges and maximize the opportunities, banking executives can (once again) manage risk with a holistic customer view through a digital decision platform. Here’s how:

Get the data: Leverage easy, plug-and-play, agnostic access to proliferating global data sources.

Understand the risk: With real-time insights, you can run automated risk assessments for applicants and existing customers.

Make the decision: Shared decision assets (rules, analytics, scorecards) help you make the best decision time and time again. That drives your bottom line and customer experience.

Monitor and learn: Interest rates, election years, natural disasters, global volatility — the world is constantly changing. Banks must continually test, monitor, and learn to stay agile and refine strategies to stay aware, relevant, and competitive.

FOBO (fear of becoming obsolete) is real, and the competitive race is tightening and intensifying.

Get Results That Deliver

We’ve seen our leading client innovators achieve tremendous results with this data-driven, risk-aware, holistic approach. We believe the risk management and customer experience rewards speak for themselves.
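To make the “make the decision” step above concrete, here is a toy scorecard-style sketch. The features, weights, and cutoffs are entirely hypothetical and stand in for the shared decision assets a real platform would manage.

```python
# Toy scorecard decision: weight a few applicant features into a score,
# then map the score to approve/review/decline bands. All features,
# weights, and cutoffs here are hypothetical.
def score_applicant(income: float, utilization: float, years_on_file: int) -> int:
    score = 300
    score += min(int(income / 1_000), 250)   # income contributes up to 250
    score += int((1.0 - utilization) * 200)  # low credit utilization helps
    score += min(years_on_file * 10, 100)    # established history helps
    return score

def decide(score: int) -> str:
    if score >= 700:
        return "approve"
    if score >= 600:
        return "manual review"
    return "decline"

if __name__ == "__main__":
    s = score_applicant(income=150_000, utilization=0.2, years_on_file=8)
    print(s, "->", decide(s))
```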
