Information Week

Prioritizing Responsible AI with ISO 42001 Compliance

Artificial intelligence is a critical tool for companies looking to keep pace in today’s competitive business landscape. The potential of AI promises great things — greater workforce efficiency, customized customer experiences, better-informed decision-making for C-suite executives — but it also comes with great risk, as the technology is just as useful to bad actors as it is to those with good intentions.

To combat nefarious use and promote transparency around the new technology, the International Organization for Standardization (ISO) recently released ISO/IEC 42001. The new standard guides the ethical and responsible development and deployment of AI management systems (AIMS), effectively giving organizations a vehicle to demonstrate that their approach to AI is ethical and secure. In a world where AI is rapidly reshaping industries, a structured approach like the one outlined in ISO 42001 ensures that businesses harness AI’s power while maintaining ethical and transparent practices.

Having recently gone through the certification process, here is what other companies considering this step should know.

What Is ISO 42001 and Why Does It Matter?

ISO 42001 is a groundbreaking international standard designed to establish a structured roadmap for the responsible development and use of AI. It addresses critical challenges such as ethics, transparency, continual learning, and adaptation, ensuring that AI technologies are harnessed ethically and effectively.

The standard is also intentionally structured to align with other well-known management system standards, such as ISO 27001 and ISO 27701, to enhance existing security, privacy, and quality programs. For companies that touch AI, it is of the utmost importance to stay on top of the most rigorous AI frameworks and to implement strict guardrails that protect customers from malicious intent.
It also gives organizations a foundation for complying with upcoming regulations, such as the EU AI Act and related legislation in Colorado.

The Journey to ISO 42001 Compliance

Achieving compliance with ISO 42001 required our organization to take a risk-based approach to the establishment, implementation, maintenance, and continuous improvement of an AIMS. This approach involved several phases, including:

- Defining the context in which our AI systems operate.
- Identifying relevant external and internal stakeholders.
- Understanding the expectations and requirements of the framework.

Additionally, building out a comprehensive, ISO 42001-certified AIMS required us to standardize the fairness, accessibility, safety, and broader impacts of our AI systems. The standard examines an organization’s AI-related policies; the internal assignment of roles and responsibilities for working with AI; resources for AI systems, such as data; impact analysis of AI systems on individuals, groups, and society; the AI system life cycle; data management; information dissemination to interested parties (such as external reporting); the use of AI systems; and third-party relationships.

Undergoing this certification process took approximately six months and involved working closely with our auditing partner. Upon completing the assessment, we received certification of compliance with ISO 42001 to serve as an indicator to all stakeholders of our prioritization of responsible and secure AI. Moving forward, we must sustain the practices mandated by the framework and undergo routine assessments to ensure we maintain compliance.

The Impact of ISO 42001 Compliance on Our AI Strategy

Compliance with ISO 42001 is not just about meeting a set of standards; it fundamentally shapes how we use AI moving forward.
With many companies building out their own AI capabilities, proving to customers and stakeholders that they can trust our systems is crucial — and ultimately becomes a competitive differentiator. ISO 42001 addresses these concerns through comprehensive requirements, providing a roadmap for satisfying security and safety concerns about our AI. Getting ISO 42001 certified has allowed us to do the following:

- Validate our AI management: ISO 42001 certification provides independent corroboration that we manage our AI systems ethically and responsibly.
- Enhance trust with stakeholders: The certification demonstrates our commitment to responsible AI practices and to ethical, transparent, and accountable AI development and use.
- Improve risk management: The certification helps us identify and mitigate risks associated with AI, ensuring potential ethical, security, and compliance issues are addressed proactively.
- Gain a competitive edge: Because ISO 42001 was published only recently, becoming one of the first organizations globally to certify our AIMS gives us an edge in the market, signaling to clients, partners, and regulators that we are at the forefront of responsible AI use.

The Importance of Working With an Accredited Body

Achieving ISO 42001 certification is a significant milestone, but it is essential to work with an accredited body to ensure the certification’s credibility. In our certification process, we prioritized working with Schellman, an ANAB-accredited certification body, as our partner in this journey. Schellman’s accreditation gave us assurance that the firm is properly equipped to verify our compliance with the ISO 42001 framework, adding an extra layer of validation to our certification while guiding us through the process.
While compliance does not equate to absolute security, it positions an organization to mitigate risks effectively and to demonstrate to customers that their security is a top priority. By adhering to the rigorous standards set out in ISO 42001, we are committed to responsible AI practices that not only meet but exceed stakeholder expectations, ensuring the safe and ethical use of AI technologies.


Meeting AI Regulations: A Guide for Security Leaders

Artificial intelligence is rapidly transforming the business landscape, already shifting the way we work, create, and gather data insights. This year, 72% of organizations have adopted generative AI in some way, and 50% have adopted AI in two or more business functions — up from less than a third of respondents in 2023. As AI adoption heats up, however, so do concerns around security, with 45% of organizations experiencing data exposures while implementing AI. CISOs and security leaders now face the critical challenge of balancing AI implementation with growing data security risks.

At the same time, government agencies are turning their attention to AI security concerns, and the regulatory landscape surrounding the technology is quickly evolving. Uncertainty persists at the federal level, as no all-encompassing legislation is currently in place in the US to set guardrails for the use of AI tools. However, frameworks including the AI Bill of Rights and the Executive Order on AI, as well as state-level regulations like the Colorado AI Act (with 45 other states introducing AI bills in 2024), are gaining momentum as governments and organizations look to mitigate the security risks associated with AI.

To prepare for rapidly evolving regulations in today’s unpredictable threat landscape, while still advancing AI initiatives across the organization, here are the strategies security leaders must prioritize in the year ahead:

Building a robust data management infrastructure: Whether or not an organization is ready for widespread AI adoption, implementing an advanced data management, governance, and lifecycle infrastructure is critical to keeping information safe from threats.
However, 44% of organizations still lack basic information management measures, and only just over half have basics like archiving and retention policies (56%) and lifecycle management solutions (56%) in place. To keep sensitive data safe from potential threats, proper governance and access policies must be established before AI is widely implemented, so that employees are not inadvertently sharing sensitive information with AI tools. Beyond keeping data secure, employing proper governance policies and investing in the automated tools needed to do so can also help streamline compliance with new regulations — supporting security leaders with a more flexible, agile data infrastructure that keeps pace with fast-moving developments.

Leveraging existing standards for AI use: To prepare data and security practices for new regulations in the years to come, CISOs can look to existing, widely recognized standards for AI use. International standards like ISO/IEC 42001 outline recommended practices for organizations looking to use AI tools, supporting responsible development and use and providing a structure for risk management and data governance. Aligning internal practices with frameworks like ISO/IEC 42001 early in the implementation process ensures that AI data practices meet widely accepted benchmarks for security and ethics — streamlining regulatory compliance down the road.

Fostering a security-focused culture and principles: Security leaders must emphasize that security is everyone’s job in the organization, and that all individuals play a part in keeping data safe from threats.
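The access-policy point above can be made concrete with a lightweight gate that screens text before it is sent to an external AI service. The sketch below is illustrative only: the pattern list, rule names, and the `screen_prompt` helper are hypothetical stand-ins for an organization’s own data-loss-prevention rules, not a recommendation of specific patterns.

```python
import re

# Hypothetical patterns for illustration; a real deployment would use the
# organization's own DLP rules rather than this small list.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text: str) -> tuple[str, list[str]]:
    """Redact sensitive patterns before text reaches an external AI tool.

    Returns the redacted text and the names of the rules that fired, which
    can feed an audit log used for governance and compliance reporting.
    """
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        # subn returns the rewritten text and the number of substitutions.
        text, count = pattern.subn(f"[REDACTED:{name}]", text)
        if count:
            hits.append(name)
    return text, hits

redacted, hits = screen_prompt("Contact jane@example.com, SSN 123-45-6789.")
```

In practice, the returned rule names would feed the same audit trail used for regulatory reporting, which is where governance tooling and compliance streamlining overlap.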
Ongoing education around AI and new regulations — through continually evolving, highly customized trainings — ensures that all members of the organization know how to use the technology safely and are prepared to meet new standards and mandates for security in the years to come.

Adopting “do no harm” principles will also help future-proof the organization for new regulations. This involves carefully assessing the potential consequences and effects of AI before implementation, and evaluating how these tools can affect all individuals and stakeholders. It is important to establish these principles early on — informing what limitations should be set to prevent potential misuse and preparing security teams for future regulations around ethical and fair use.

As new AI regulations continue to take shape in the coming years, security and business leaders need to focus on preparing their entire organization to meet new compliance standards. The uncertainty CISOs face about how regulations will progress is a strong signal to safeguard data and build individual preparedness now, so the organization can meet new standards as they evolve. AI is now everywhere, and ethical, secure, and compliant use is an organization-wide effort in 2025 — one that begins with building proper data management and fair-use principles and emphasizing security awareness for all individuals.


All In On AI: 3 Ways to Win With an AI-Supported Team

As businesses rapidly adopt AI, sales leaders are keen to demonstrate the return on these investments. AI-driven technologies enable future-ready operations, boost productivity, and allow teams to focus on high-value work that enhances customer experiences. However, realizing AI’s full potential requires effective change management and training for sales teams. The competitive edge now lies not in merely adopting AI, but in how effectively teams can leverage it. Here are the top three takeaways for leaders looking to future-proof their sales operations with AI.

Cut Through the AI Noise

Leveraging AI effectively means understanding how it can improve specific outcomes unique to a team’s or organization’s current challenges. Gartner reports that B2B sellers using AI are 3.7 times more likely to meet their quota than those who do not. To stay competitive, sales leaders must ask: Are the AI-powered tools we are using or considering capable of improving experiences for customers and sales representatives?

Sales leaders can be overwhelmed by the wide array of AI tools available. To navigate today’s extensive selection, leaders should evaluate their current sales journey and identify the areas where AI can make the most impact. Analyzing customer feedback and net promoter scores is a good starting point. A team struggling with a lengthy sales conversion process might consider an AI solution that automates key steps, like lead qualification or price quoting, streamlining the sales journey and improving efficiency. A team finding that data is not driving sales decisions might seek an AI solution with stronger data analytics capabilities to enhance processes like forecasting. By focusing on the most impactful elements of the sales and customer journey, teams can be more intentional about which AI solutions they adopt first.
Maximize AI Investments with Effective Change Management

Maximizing AI investments requires effective change management, especially given AI’s transformative impact on workflows. To develop an effective AI change management program, prioritize three key elements: a clear vision, consistent user engagement, and comprehensive AI training and resources. Leaders who set and consistently communicate a strategic AI vision can keep their teams focused on goals and on the benefits of AI adoption, even when challenges arise. Once a vision is established, intentional and consistent interactions with users allow leaders to gain early insight into issues, pivot quickly to address feedback, and create new best practices. Ongoing, customized training helps employees adopt and understand new AI-powered tools, build new skill sets, and gain confidence in the strategic vision.

Implementing change management — with its new ways of working, technologies, and processes — can present challenges. Partnering with technology experts and AI specialists, however, can enhance results and drive improved outcomes. For instance, the HP Amplify AI partner program supports partners in achieving positive AI outcomes by offering AI guidance, tools, resources, training, and certification. Programs like this complement overall change management efforts and support sales teams in their increasingly AI-advisory role with customers. Leaders can invest in and deploy the best AI tools, but without comprehensive and consistent change management, their organization — and their customers — will not fully realize the benefits of AI.

The Next Breakthrough in Data-Driven Decision Making

AI’s ability to process and analyze data at scale, automate routine tasks, and provide real-time information and predictive insights will drive the next breakthrough in data-driven decision making.
Sales organizations that embrace AI will be better equipped to make informed, accurate, and timely decisions, leading to improved outcomes and a competitive advantage. By automating routine or lengthy tasks, AI gives sales representatives additional time to concentrate on higher-value activities. For instance, creating a proposal based on RFP requirements and pricing it for a potential customer can take several days, which delays decision-making. AI-powered tools can expedite this process, enabling sales representatives to focus more on activities like building customer relationships, consultative selling, and ensuring a better overall experience.

AI is poised to revolutionize how teams leverage data to inform, validate, and streamline their decisions. The question is no longer whether a company should adopt AI, but how the right tools can be implemented to create a future-ready sales strategy for long-term success.


How AI is Revolutionizing Photography

AI revolutionizes just about everything, and photography is no exception.

AI is a powerful tool, says Conor Gay, vice president of business operations at MarathonFoto, a firm specializing in marathon race photography. When used appropriately, it can enhance great photography and create incredible designs, he explains in an email interview. “When used carelessly, it can cause confusion, misinformation, or just plain ruin a photo.”

AI helps photographers realize a creative vision, observes John McNeil, founder and CEO of John McNeil Studio, a San Francisco-area creative firm. “It’s an incredibly powerful tool, helping even less-than-professional photographers create more professional images,” he notes in an online interview. “Features such as exposure correction, auto enhance, and auto skin tone allow just about anyone to take great pictures.”

Johnny Wolf, founder and lead photographer at Johnny Wolf Studio, a New York-based corporate photography studio, says that AI allows him to explore complex concepts in pre-production and create realistic mockups for client approval, all without having to touch a camera. “It gives me the ability to quickly test and iterate on ideas without having to invest time and resources,” he explains via email. “This results in a more focused discovery phase with clients and leads to fewer revisions during the editing process.”

Efficiency and Quality

AI tools enable greater efficiency and higher quality when capturing images, automatically detecting subjects and optimizing an image at the moment it’s taken, says Chris Zacharias, founder and CEO of visual image studio Imgix. “AI tools can identify subjects and objects within an image to allow greater precision in editing,” he notes in an email interview.
“We can remove unwanted elements or introduce new ones into a photograph in pursuit of a creative vision.”

Wolf says that AI’s greatest impact has been automating the mundane. “Basic tasks, like whitening a subject’s teeth or cloning out distracting background elements, used to involve a time-consuming masking process, which can now be done with one click,” he explains. “With AI handling the drudgery of post-production, I’m free to dedicate more time and energy to creative exploration, improving my craft and delivering a more personalized and impactful final product.”

AI has allowed the company to identify images faster and more accurately than ever before, Gay says. “In the past two years, we’ve been able to get more images into runners’ galleries, typically within 24 hours of their finish,” he notes. “AI has also allowed us to capture more unique shots and angles.”

Gay adds that AI can also capture relevant photo data for race partners and sponsors. “We’re now able to identify sponsor branding that appears in our photos, and even capture data around apparel and footwear.” The technology is also used to enhance images. “We see different weather and lighting conditions throughout the day,” he notes. “AI allows us to enhance these images to their highest quality.”

AI’s power, control, flexibility, and possibilities are absolutely incredible, McNeil states. “Photoshop was a game changer 30 years ago, and in less than three years, AI makes things like histograms and layers seem positively quaint.”

The Downside

AI’s ethical implications are significant and will require discussion, consideration, and action by a wide range of stakeholders and organizations, Zacharias says. “There’s much to consider, and the impacts are already being felt.”

Maintaining authenticity is a top concern, Gay says. “Especially in our industry, runners work tirelessly to complete their races,” he notes.
“The idea of someone being able to create a fake finish line moment with AI discredits the hard work each athlete puts into their race.” Gay says his goal is to document runners’ journeys on race day as accurately as possible.

McNeil worries that there may now be too much reliance on AI. “The term ‘we’ll fix it in post’ used to be a lazy joke people would make on set,” he says. “Today, it’s literally the process.” Such an attitude can lead to images that are poorly crafted, uninventive, and look like they were generated by AI. “Ultimately, as creative people and artists, we need to be more critical about the work we’re putting into the world.”

While photo manipulation is nothing new, AI’s ability to instantly generate photography that’s indistinguishable from reality has led to a frightening inflection point, Wolf warns. “Anyone with an agenda and a web browser can now create and disseminate AI-generated propaganda as a real-time response to events,” he explains. “If society can no longer trust photos as evidence of truth, we’ll retreat further into our echo chambers and consume content that has been generated to reinforce our views.”

Looking Forward

Artists have always adapted to and leveraged new tools and technologies to create novel forms of self-expression, Zacharias says. “The coming years will see a lot of discussion about what is real or authentic,” he notes. “At the end of the day, AI is and will continue to be a tool, and it is we humans who will define what the soul of the medium is.”


Cyber Awareness Is a Joke: Here’s How to Actually Prepare for Attacks

This year has been a wake-up call, exposing just how fragile our digital world really is and leaving leaders scrambling to contain the fallout from relentless cyberattacks. System compromises, breaches, and ransomware attacks have devastated organizations, costing an average of $5 million per incident. These cyber crises test leaders’ abilities to make high-stakes decisions under pressure, navigate ethical dilemmas, and inspire resilience within their organizations.

Antiquated Training Yields False Sense of Security

Instead of facing the real dangers of today’s threats, most businesses cling to outdated cyber training that does nothing to prepare them for what’s coming. Traditional trainings give organizations a false sense of security because success is measured by test completion rather than capability. These methods can do more harm than good — so why do leaders insist on relying on outdated videos or a tabletop exercise? How can we be confident in our teams’ cyber abilities if we aren’t evaluating skills against real-world scenarios? This is where cyber drills come into play.

A cyberattack occurs every 39 seconds, according to a study by Cybersecurity Ventures. It’s not enough to rely on passive learning or outdated training methods. Cyber drills and exercises aren’t optional — they’re essential. Everyone, from staff to executives, needs hands-on experience with real threats. Stop treating drills as a formality and start using them to build real cyber skills. If your team isn’t prepared for role-specific threats, you’re wasting their time.

Regularly exercising cybersecurity teams and running cyber drills is non-negotiable. With cyber threats evolving faster than traditional training programs can, it’s not enough to know what to do — teams must be ready to act, instantly and effectively.
To effectively implement a cyber drill program, cyber leaders should focus on these essential components:

- Realistic attack scenarios: Cyber drills should use realistic attack scenarios through simulations and gamification. Engaging, gamified environments boost participation and should encompass the full range of cybersecurity threats, allowing organizations to continually assess, strengthen, and validate their teams’ skills in real-world conditions.

- Consistency: As threats become more sophisticated — particularly as attackers leverage AI to carry out attacks at greater scale and speed — organizations need to run drills consistently. Drills should occur frequently enough to match the fast pace of cyber threats, helping teams build muscle memory for effective incident response.

- Enterprise-wide participation: Effective cyber drills should engage the entire organization, from entry-level employees to board members, ensuring that everyone, not just the cybersecurity team, is prepared for potential threats.

- Customization: Cyber drills should be tailored to the specific responsibilities of each role within the organization. A one-size-fits-all approach won’t work; every employee needs training that addresses the unique challenges of their position.

- Proof of capabilities: To fully understand and improve cyber resilience, organizations need detailed performance data from drills. That means focusing on activities that produce insights into breach readiness and incident response, moving beyond simple metrics like attack frequency to build a more targeted and effective resilience strategy.

- Continuous assessment: Regularly evaluate the skills and knowledge of cybersecurity teams to identify strengths, weaknesses, and areas for improvement over time, then refine procedures based on drill outcomes. Benchmarking isn’t just a trendy term — it’s a requirement for survival.
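To make the proof-of-capabilities and benchmarking points concrete, drill results can be rolled up into simple per-team metrics. The sketch below is purely illustrative: the record fields, team names, and the 150-minute containment target are hypothetical stand-ins, and a real program would pull this data from its exercise platform’s reporting.

```python
from statistics import mean

# Hypothetical drill records; a real program would export these from its
# exercise platform rather than hard-coding them.
drill_results = [
    {"team": "soc", "drill": "ransomware-q1", "detect_min": 42, "contain_min": 180},
    {"team": "soc", "drill": "ransomware-q2", "detect_min": 25, "contain_min": 95},
    {"team": "it-ops", "drill": "ransomware-q1", "detect_min": 60, "contain_min": 240},
    {"team": "it-ops", "drill": "ransomware-q2", "detect_min": 58, "contain_min": 250},
]

def benchmark(results, metric):
    """Average a response-time metric per team so leaders can see which
    teams are improving between drills and which are falling behind."""
    teams = {}
    for record in results:
        teams.setdefault(record["team"], []).append(record[metric])
    return {team: mean(values) for team, values in teams.items()}

containment = benchmark(drill_results, "contain_min")
# Teams whose average containment time exceeds the (hypothetical) target
# become the focus of the next round of role-specific drills.
laggards = [team for team, avg in containment.items() if avg > 150]
```

The point of even a toy roll-up like this is that it turns drill activity into trend data per team, which is exactly the kind of evidence the continuous-assessment component calls for.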
If you don’t benchmark where your team is falling short and address those deficiencies with no-nonsense cyber drills, you’re flying blind.

With data breaches looming, check-the-box awareness is not just lazy — it’s dangerous. By immersing employees in real-world scenarios, cyber drills ensure that cybersecurity capabilities are not only developed but continuously refined to keep pace with emerging threats. These drills provide invaluable hands-on experience, helping cyber leaders and their organizations anticipate potential issues and stop crises before they escalate. Hands-on, measurable exercise programs tailored to specific individuals, teams, and departments are crucial for mitigating the impact of cyber incidents and protecting sensitive business data. You also need to be able to demonstrate tangible results: if you can’t prove your efforts are making a real difference, you’re risking your company’s future.

The mantra is clear: adapt or be breached. Cyber leaders need to take aggressive action and shift from ‘awareness’ to ‘results.’ Cyber drills are the key to that adaptation, empowering the workforce with the ‘human edge’ necessary to stay resilient and secure.


The SEC Fines Four SolarWinds Breach Victims

On October 22, 2024, the Securities and Exchange Commission (SEC) announced it had charged four current and former public companies — Unisys Corp., Avaya Holdings Corp., Check Point Software Technologies Ltd., and Mimecast Limited — with making materially misleading disclosures about cybersecurity risks and intrusions. The civil penalties ranged from $990,000 to $4 million, with Unisys fined the most.

Sanjay Wadhwa, acting director of the SEC’s Division of Enforcement, said in a press release, “…while public companies may become targets of cyberattacks, it is incumbent upon them to not further victimize their shareholders or other members of the investing public by providing misleading disclosures about the cybersecurity incidents they have encountered.”

Kurt Sanger, counsel in Buchanan Ingersoll & Rooney’s Cybersecurity & Data Privacy practice, says his law firm expects state and foreign governments to continue being assertive regarding companies’ claims about their cybersecurity, artificial intelligence, and other developing technologies.

“New technologies generally have three characteristics that make them difficult to communicate about: They are complex and poorly understood, they offer great promise, and they pose unknown and potentially significant risks,” says Sanger in an email interview. “Some may believe the inherent complexities and lack of understanding give them cover to omit certain facts. Some may believe so strongly in their technologies that they describe them based on aspirations rather than reality and probability. When organizations make questionable statements based on the information available to them at the time, they leave themselves open to government, customer, and shareholder scrutiny.”

Is This the Tip of the Iceberg?
Mike Piazza, partner at CM Law and former regional trial counsel for the SEC, says the dissent by Commissioners Hester Peirce and Mark Uyeda is worth noting because they disagree about what is “material.”

“The SEC is supposed to adhere to a materiality standard, and yet it’s hard to discern what the guiding principles are in determining what’s material to disclose from those four decisions,” says Piazza in an email interview. “As a result of the election, control of the Commission will change. Thus, the guidance from the dissenting Commissioners about how to determine materiality for the purposes of disclosure in these circumstances likely will become the guiding principles upon which companies should focus going forward.”

There is also the question of whether intent or negligence makes a difference. According to Ken Herzinger, partner and global co-chair of the Investigations and White Collar Defense practice at the Paul Hastings law firm, SEC Rule 10b-5 covers standard fraud, though the SEC can bring negligence charges under the Securities Act of 1933.

“Since June of 2021, the SEC has been sending letters to hundreds of public companies that were purportedly affected by the SolarWinds incident,” says Herzinger. “Then, in August of 2021, the SEC began sending another wave of letters. They offered amnesty to those companies that would disclose whether they were a victim of the SolarWinds breach or not, and any issues they suffered from that breach. The SEC did not offer amnesty for any insider trading, Regulation FD, or disclosure and procedure violations.”

More fundamentally, the SEC wants to understand every breach public companies have experienced since October 2019, without limitation to materiality.

“Some companies responded. Some did not. My assessment is that these four cases likely came out of that sweep,” says Herzinger.
“I think there are more victims of the victims that the SEC is investigating behind the scenes.”

Aaron Charfoos, partner and co-chair of the Data Privacy and Cybersecurity group at Paul Hastings, says he anticipates more such litigation because the SEC is pushing on several fronts.

“We’re not only seeing the enforcement side, but the corporate convergence and the affirmative disclosure side, a real focus on bringing forward these kinds of vulnerabilities, making it clear what’s happening,” says Charfoos.

Timing also matters.

“If you have a cyber breach, it needs to be treated as a top priority. [I] know this is difficult because I’ve been involved in these situations, and sometimes it’s really hard to wrap your arms around the scope of the breach,” says Piazza. “But you have these artificial timelines the SEC has built in now, so you need to get an initial Form 8-K out with whatever information you deem material, and then supplement that as the investigation goes along. You need to be prepared to do a quick investigation, then hopefully remedy the situation quickly and follow that up with a supplemental filing with the SEC so your investors are fully aware of what’s going on.”

How to Avoid the Same Fate

Companies should ensure the cyber and data security information they share within their organizations is consistent with what they share with government agencies, shareholders, and the public, according to Buchanan Ingersoll & Rooney’s Sanger. This applies to their security posture prior to a breach as well as their responses afterward.

“Consistent messaging is difficult to manage given that dozens, hundreds, or thousands could be responsible for an organization’s cybersecurity. Investigators will always be able to find a dissenting or more pessimistic outlook among the voices involved,” says Sanger.
“If there is a credible argument that circumstances are or were worse than what the organization shares publicly, leadership should openly acknowledge it and take steps to justify the official perspective.”  Corporate cybersecurity breach reporting is still relatively uncharted territory, however.  “Even business leaders who intend to act with complete transparency can make inadvertent mistakes or communicate poorly, particularly because the language used to discuss cybersecurity is still developing and differs between communities,” says Sanger. “It’s noteworthy that the SEC framed each penalized company as having ‘negligently minimized its cybersecurity incident in its public disclosures.’ The Commission’s carefully crafted characterization is a warning that companies must not only avoid intentional misrepresentations, but they must also use due care when making public statements.”

The SEC Fines Four SolarWinds Breach Victims

Does the US Government Have a Cybersecurity Monoculture Problem?

The way Microsoft provided the US government with cybersecurity upgrades is under scrutiny. ProPublica published a report that delves into the “White House Offer”: a deal in which Microsoft sent consultants to install cybersecurity upgrades for free. But those free product upgrades were only covered for up to one year.  Did this deal give Microsoft an unfair advantage, and what could it take to shift the federal government’s reliance on the tech giant’s services?  The White House Offer  ProPublica spoke to eight former Microsoft employees who played a part in the White House Offer. With their insight, ProPublica’s report details how this deal makes it difficult for users in the federal government to shift away from Microsoft’s products and how it helped to squeeze out competition.  While the cybersecurity upgrades were initially free, government agencies need to pay come renewal time. After the installation of the products and employee training, switching to alternatives would be costly.  ProPublica also reports that Microsoft salespeople recommended that federal agencies drop products from competitors to save costs.  Critics raise concerns that Microsoft’s deal skirted antitrust and federal procurement laws.  “Why didn’t you allow a Deloitte or an Accenture or somebody else to say we want free services to help us do it? Why couldn’t they come in and do the same thing? If a company is willing to do something for free like that, why should it be a bias to Microsoft and not someone else that’s capable as well?” asks Morey Haber, chief security advisor at BeyondTrust, an identity and access security company.  ProPublica noted Microsoft’s defense of its deal and the way it worked with the federal government. Microsoft declined to comment when InformationWeek reached out.
Josh Bartolomie, vice president of global threat services at email security company Cofense, points out that the scale of the federal government makes Microsoft a logical choice.  “The reality of it is … there are no other viable platforms that offer the extensibility, scalability, manageability other than Microsoft,” he tells InformationWeek.  The Argument for Diversification  Overreliance on a single security vendor has its pitfalls. “Generally speaking, you don’t want to do a sole provider for any type of security services. You want to have checks and balances. You want to have risk mitigations. You want to have fail safes, backup plans,” says Bartolomie.  And there are arguments being made that Microsoft created a cybersecurity monoculture within the federal government.  Sen. Eric Schmitt (R-Mo.) and Sen. Ron Wyden (D-Ore.) raised concerns and called for a multi-vendor approach.  “DoD should embrace an alternate approach, expanding its use of open-source software and software from other vendors, that reduces risk-concentration to limit the blast area when our adversaries discover an exploitable security flaw in Microsoft’s, or another company’s software,” they wrote in a letter to John Sherman, former CIO of the Department of Defense.  The government has experienced the fallout that follows exploited vulnerabilities. A Microsoft vulnerability played a role in the SolarWinds hack.  Earlier this year, it was disclosed that Midnight Blizzard, a Russian state-sponsored threat group, executed a password spray attack against Microsoft. Federal agency credentials were stolen in the attack, according to Cybersecurity Dive.  “There is proof out there that the monoculture is a problem,” says Haber.  Pushback  Microsoft’s dominance in the government space has not gone unchallenged over the years. For example, the Department of Defense pulled out of a $10 billion cloud deal with Microsoft.
The contract, the Joint Enterprise Defense Infrastructure (JEDI), faced legal challenges from competitor AWS.  Competitors could continue to challenge Microsoft’s dominance in the government, but there are still questions about the cost associated with replacing those services.  “I think the government has provided pathways for other vendors to approach, but I think it would be difficult … to displace them,” says Haber.  A New Administration  Could the incoming Trump administration herald changes in the way the government works with Microsoft and other technology vendors?  Bartolomie points out that each time a new administration steps in, there is a thirst for change. “Do I think that there’s a potential that he [Trump] will go to Microsoft and say, ‘Give us better deals. Give us this, give us that’? That’s a high possibility because other administrations have,” he says. “The government being one of the largest customers of the Microsoft ecosystem also gives them leverage.”  Trump has been vocal about his “America First” policy, but how that could be applied to cybersecurity services used by the government remains to be seen. “Do you allow software being used from a cybersecurity or other perspective to be developed overseas?” asks Haber.  Haber points out that outsourced development is typical for cybersecurity companies. “I’m not aware of any cybersecurity company that does exclusive US or even North America … builds,” he says.  Any sort of government mandate requiring cybersecurity services developed solely in the US would raise challenges for Microsoft and the cybersecurity industry as a whole.  While the administration’s approach to cybersecurity and IT vendor relationships is not yet known, it is noteworthy that Trump’s view of tech companies could be influential.
Amazon pursued legal action over the $10 billion JEDI contract, claiming that Trump’s dislike of company founder Jeff Bezos impacted its ability to secure the deal, The New York Times reports.


AI and the War Against Plastic Waste

Plastic pollution is easy to visualize given that many rivers are choked with such waste and the oceans are littered with it. The Great Pacific Garbage Patch, a massive collection of plastic and other debris, is an infamous result of plastics proliferation. Even if you don’t live near a body of water to see the problem firsthand, you’re unlikely to walk far without seeing some piece of plastic crushed underfoot. But untangling this problem is anything but easy.  Enter artificial intelligence, which is being applied to many complex problems, including plastics pollution. InformationWeek spoke to research scientists and startup founders about why plastics waste is such a complicated challenge and how they use AI in their work.  The Plastics Problem  Plastic is ubiquitous today: food packaging, clothing, medical devices, cars, and so much more rely on this material. “Since 1950, nearly 10 billion metric tons of plastic has been produced, and over half of that was just in the last 20 years. So, it’s been this extremely prolific growth in production and use. It’s partially due to just the absolute versatility of plastic,” says Chase Brewster, project scientist at Benioff Ocean Science Laboratory, a center for marine conservation at the University of California, Santa Barbara.  Plastic isn’t biodegradable, and recycling is imperfect. As more plastic is produced and more of it is wasted, much of that waste ends up back in the environment, polluting land and water as it breaks down into microplastics and nanoplastics.  Even when plastic products end up at waste management facilities, processing them is not simple. “A lot of people think of plastic as just plastic,” says Bradley Sutliff, a former National Institute of Standards and Technology (NIST) researcher. In reality, there are many different complex polymers that fall under the plastics umbrella.
Recycling and reuse aren’t just a matter of sorting; they’re a chemistry problem, too. Not every type of plastic can be mixed and processed into a recycled material.  Plastic is undeniably convenient as a low-cost material used almost everywhere. It takes major shifts in behavior to reduce its consumption, a change that is not always feasible.  Virgin plastic is cheaper than recycled plastic, which means companies are more likely to use the former. In turn, consumers are faced with the same economic choice, if they even have one.  There is no single answer to solving this environmental crisis. “Plastic pollution is an economic, technical, educational, and behavioral problem,” Joel Tasche, co-CEO and cofounder of CleanHub, a company focused on collecting plastic waste, says in an email interview.  So, how can AI arm organizations, policymakers, and people with the information and solutions to combat plastic pollution?  AI and Quantifying Plastic Waste  The problem of plastic waste is not new, but the sheer volume makes it difficult to gather the granular data necessary to truly understand the challenge and develop actionable solutions.  “If you look at the … body of research on plastic pollution, especially in the marine environment, there is a large gap in terms of actually in situ collected data,” says Brewster.  The Benioff Ocean Science Laboratory is working to change that through the Clean Currents Coalition, which focuses on removing plastic waste from rivers before it has the chance to enter the ocean. The Coalition is partnered with local organizations in nine different countries, representing a diverse group of river systems, to remove and analyze plastic pollution.
“We started looking into what artificial intelligence can do to help us to collect that more fine data that can … help drive our upstream action to reduce plastic production and plastic leaking into the environment in the first place,” says Brewster.  The project is developing a machine learning model with hardware and software components. A webcam is positioned above the conveyor belts of large trash wheels used to collect plastic waste in rivers. Those cameras count and categorize trash as it is pulled from the river.  This system “… automatically [sends] that to the cloud, to a data set, visualizing that on a dashboard that can actively tell us what types of trash are coming out of the river and at what rate,” Brewster explains. “We have this huge data set from all over the world, collected synchronously over three years during the same time period, very diverse cultures, communities, river sizes, river geomorphologies.”  That data can be leveraged to gain more insight into what kinds of plastic end up in rivers, which flow to our oceans, and to inform targeted strategies for prevention and cleanup.  AI and Waste Management  Very little plastic is actually recycled: just 5%, with some combusted and the majority ending up in landfills. Waste management plants face the challenge of sorting through a massive influx of material, some recyclable and some not. And, of course, plastic is not one uniform group that can easily be processed into reusable material.  AI and imaging equipment are being put to work in waste management facilities to tackle the complex job of sorting much more efficiently.  During Sutliff’s time with NIST, a US government agency focused on industrial competitiveness, he worked with a team to explore how AI could make recycling less expensive.  Waste management facilities can use near-infrared (NIR) light to visualize and sort plastics.
Sutliff and his team looked to improve this approach with machine learning.  “Our thought was that the computer might be a lot better at distinguishing which plastic is which if you … teach it,” he says. “You can get a pretty good prediction of things like density and crystallinity by using near infrared light if you train your models correctly.”  The results of that work show promise, and Sutliff released the code to NIST’s GitHub page. More accurate sorting can help waste management facilities monetize more recyclable materials, rather
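The classification idea Sutliff describes can be sketched in miniature: learn a reference signature for each plastic type from labeled NIR spectra, then assign a new spectrum to the nearest signature. Everything below is synthetic and hypothetical; the wavelength grid, the peak positions, and the simple nearest-centroid model are illustrative stand-ins, not NIST's published method (their actual code is on NIST's GitHub page).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "NIR spectra": each plastic type gets a characteristic
# absorbance profile. Real spectra would come from an NIR line-scan camera.
WAVELENGTHS = np.linspace(1000, 2500, 50)  # nm

def make_spectrum(peak_nm, width=120.0, noise=0.02):
    """One noisy absorbance curve with a Gaussian peak at peak_nm."""
    clean = np.exp(-((WAVELENGTHS - peak_nm) ** 2) / (2 * width**2))
    return clean + rng.normal(0.0, noise, WAVELENGTHS.size)

# Hypothetical class peaks -- illustrative only, not real polymer bands.
CLASS_PEAKS = {"PET": 1660.0, "HDPE": 1210.0, "PP": 1390.0}

# "Training": average 20 labeled samples per class into a centroid spectrum.
centroids = {
    label: np.mean([make_spectrum(peak) for _ in range(20)], axis=0)
    for label, peak in CLASS_PEAKS.items()
}

def classify(spectrum):
    """Nearest-centroid prediction: smallest Euclidean distance wins."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))

# Classify a fresh, unseen PET-like spectrum coming off the sorting line.
print(classify(make_spectrum(1660.0)))
```

A production sorter would replace the centroid lookup with a properly trained and validated model (and regress continuous properties like density and crystallinity rather than just picking a label), but the flow is the same: labeled spectra in, class assignments out, feeding the facility's sorting actuators.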


How Is AI Empowering Earth Intelligence?

Better understanding our planet — its changes, patterns, and activities on land, sea, forests, and cities — is the key to improving it. We’ve all experienced or heard about the tragedies resulting from extreme weather events, such as hurricanes Helene and Milton. While the consequences of these natural events have been dire, new Earth intelligence technologies and capabilities are enabling us to better predict them, learn from the ravages they leave behind, and find new ways to outsmart them. Artificial intelligence is one of the technologies revolutionizing not only how we see the Earth but also how we deliver actionable solutions to improve it.  Data sources such as satellite imagery, sensors, and climate data are being used to train AI models to tell a story and predict future outcomes based on historical trends.  In fact, according to the World Economic Forum, by 2032 observation of Earth via satellites is expected to generate over 2 exabytes (2 billion gigabytes) of data cumulatively. The massive volume and complexity of this data have made it almost impossible to extract actionable insights. Yet, through the use of AI, this data can be processed and analyzed to reveal key truths that can inform future direction and proactive planning. According to the same article, some machine learning models can generate climate estimates up to 1,000 times faster than traditional climate models. They accomplish this by analyzing historical weather data alongside real-time meteorological inputs. In this way, AI models can forecast hurricanes, floods, and droughts, providing communities with critical time to prepare and ultimately saving lives, homes, and critical infrastructure.  Yet the role of AI-driven Earth intelligence is not just monitoring and predicting; it is also a powerful tool for mitigating climate change.
Innovations in energy management, such as smart grids, use AI to optimize electricity distribution and reduce our use of and reliance on fossil fuels.  In addition to climate action, Earth intelligence is harnessing the power of AI to process vast amounts of environmental data, enabling actionable insights for sustainability and conservation of our Earth and its resources.  Consider some of the ways that AI is reshaping how we see the planet and address complex environmental challenges.  Tracking deforestation. A recent report stated that in 2023, the world lost 6.37 million hectares of forest, equivalent to about 9.1 million soccer fields. Researchers are using AI to analyze satellite images to monitor the effects of deforestation in real time. By training machine learning models on historical data, these systems can detect changes in land use, providing timely information that is vital for conservation efforts. This data also can help industry groups and governments provide data-driven proof of impact so that appropriate investments and resources can be put into conservation activities.  Resource management. Another area where AI is making a significant impact is in managing vital resources, such as agriculture and water. In agriculture, precision farming techniques utilize AI to analyze soil health, crop conditions, and weather patterns so that farmers can make data-driven decisions about when to plant, irrigate, and harvest, resulting in higher yields, reduced use of resources, and less spread of infestations and disease.  Similarly, in water management, AI systems can analyze consumption patterns and predict shortages, enabling better allocation of water resources. For instance, cities can use AI to optimize their distribution of water, as well as track the quality and health of water resources.  Biodiversity and conservation.
Biodiversity, or the variety of life in our natural world, is critical to maintaining the planet’s health and ensuring the protection of vital plant and animal species. Geospatial intelligence, enabled through satellite imagery and computer vision, can monitor and analyze wildlife populations, helping researchers track endangered species and their habitats. It also can help with habitat restoration efforts by predicting which plant species will thrive in specific conditions, allowing conservationists to make informed decisions about reforestation projects.  Further, geospatial intelligence is being used to combat poaching by identifying and monitoring traps to protect vulnerable species and alerting authorities so that appropriate actions can be taken.  The Future of AI-Driven Earth Intelligence: Multimodal AI  AI is helping us more closely monitor Earth so that we can better understand what is happening now and in the future. When it comes to addressing issues such as the effects of heat islands in urban areas, for example, AI is enabling a new level of geospatial intelligence. It is helping us count the number and types of rooftops in a particular neighborhood to see how many are repelling or attracting heat, and the proximity of each building to one another. It’s also counting the number of air-polluting vehicles in the street.  Yet, while this information is extremely valuable, the future of AI-driven geospatial intelligence will go one step further: not only identifying things in data and images, but actually creating real-time reports that explain the ramifications of taking different types of actions and offering the best course of action. These insights also leverage historical data in addition to satellite imagery, so that trends begin to emerge.
In addition to computer vision, other types of AI, such as large language models for generative AI, will come into play to write the reports. Different forms of AI are coming together, such as computer vision, generative AI, natural language processing, and deep learning. Together they form the complete picture, connecting the dots between massive and unrelated data points to create a cohesive whole and provide a blueprint for moving forward.  Challenges and Ethical Considerations  While the benefits of AI in enabling Earth intelligence are substantial, challenges remain. Data privacy must be addressed to ensure that the data used to train AI models is kept confidential when it belongs to specific people or organizations. There’s
