Information Week

How Digital Fraud Has Evolved: Key Takeaways for CISOs

Fraudsters have been around since the dawn of time. But the internet has completely transformed the scale at which they operate. There are now an unlimited number of potential victims they can target with various schemes, from phishing attacks and identity theft to sophisticated scams and financial fraud. And that’s exactly what they’ve been doing. According to the Global Anti-Scam Alliance, scammers stole over $1 trillion in 2023 alone. As the world continues to embrace new technologies, digital fraud is expected to rise proportionately. For companies in all industries, this means that cybersecurity measures and capabilities to combat fraud are no longer optional but necessary. Let’s look at some of the main digital fraud trends organizations are facing today and ways to effectively mitigate them. It has been an eventful few years for digital fraud powered by emerging technologies like artificial intelligence and machine learning. Account takeover attacks (ATO), particularly via session hijacking, have made many headlines throughout this year, forcing browser developers to implement stricter security controls. And that’s only one example. With cybercriminals attacking from all angles, it’s difficult to pinpoint all rising threats. With that said, there are a few that stand out. Related:In a Digital World, Anti-Fraud and Security Teams Should be Partners Deepfake Technology While deepfakes have been around for some time now, they have drastically evolved in recent years. Thanks to various AI tools, they’re not only more realistic and harder to detect but also significantly easier to create. Digital fraud involving deepfake technology is costing organizations millions. In one severe case, a Hong Kong-based company lost $25 million to scammers after they deepfaked the company’s CFO in a live video call. It’s easy to blame the worker who fell for the scam in this scenario, but was the organization doing anything to provide adequate training and tools to prevent such incidents? Digital Impersonation Prominent business figures aren’t the only ones being impersonated. Scammers are also creating fake websites that mimic legitimate businesses to commit fraud against unsuspecting users. This is a huge problem for businesses, as according to a report by Memcyco, 40% of customers who fall victim to fake-site scams stop doing business with the company being impersonated. There is also a lot of talk about government regulation stepping in to force companies to reimburse their customers who fell victim to fraud, which has already begun in the UK. This puts even more pressure on businesses to swiftly detect and mitigate fraudulent activities related to their brand. Evolution in Phishing By utilizing deepfake technology, generative AI, large language models (LLMs), and other technologies, cybercriminals can now orchestrate very sophisticated phishing attacks that are incredibly difficult even for security-savvy individuals to detect. Just two to three years ago, phishing messages were evidently crafted by non-native speakers, with many spelling and other errors that made them easier to spot. Now, the messages are not only grammatically correct but also much more personalized, thanks to advanced data mining and social engineering techniques. Considering these evolving threats, CISOs and other security professionals have their hands full in the effort to protect their organizations. Here are some of the most effective methods in combating the many forms of today’s digital fraud: Security Awareness and Phishing Training for Employees Human error is the number one cause (74%) of all cyberattacks. All the threats and attack vectors I discussed are largely ineffective unless an actual human falls for them. That’s why regular security awareness training should be among the first priorities for organizations looking to boost their fraud resilience. The training should include real-life scenarios and simulations of the latest techniques to make it easier for employees to pinpoint similar attempts from attackers. Fraud Detection Technologies Just as criminals are using technology to fill their pockets, the business community can also leverage advanced technologies to protect themselves. Sophisticated fraud detection systems utilize real-time scanning, machine learning, and behavior analytics to find suspicious activity, such as fake websites or unusual transaction attempts. It’s also worth mentioning that while 72% of the businesses surveyed in the above-mentioned report by Memcyco use website impersonation protection, only 6% found it effective. So, it’s important to invest in the right technologies. Otherwise, a business may have a false sense of security, which is worse than having no protection at all. Threat Intelligence Sharing with Peers and Law Enforcement The cybersecurity community is fairly tight-knit, but murky information sharing, particularly when it comes to ransomware threats, makes it difficult for businesses to react in time. Open-source platforms like MISP and OTX encourage threat intelligence sharing among peers and should be used as a key resource to combat digital fraud. Based on the trends discussed in this article and others being used in the wild, it appears that deception is a highly prevalent tactic among cybercriminals. Therefore, it’s important to exercise caution during our everyday internet activity, whether it’s checking emails, visiting websites, or even making video calls. From an organizational perspective, the onus is on security leaders to stay on top of emerging threats and help employees learn how to deal with them effectively. Regular training, robust fraud detection systems, and a culture of vigilance are key to combating digital fraud these days. source

How Digital Fraud Has Evolved: Key Takeaways for CISOs Read More »

How Developers Drive Security Professionals Crazy

COMMENTARY In the evolving landscape of software development, the integration of DevSecOps has emerged as a critical paradigm, promising a harmonious blend of development, security, and operations to streamline feature delivery while ensuring security. However, the path to achieving this seamless integration is fraught with hurdles — ranging from the lack of security training among developers to the complexity of security tools, the scarcity of dedicated security personnel, and the generation of non-actionable security alerts.   Historically, there has been a palpable tension between members of development teams, who prioritize rapid feature deployment, and security professionals, who focus on risk mitigation. This discrepancy often results in a “the inmates are running the asylum” scenario, where developers, driven by delivery deadlines, may inadvertently sideline security, leading to frustration among security teams. However, the essence of DevSecOps lies in reconciling these differences by embedding security into the development life cycle, thereby enabling faster, more secure releases without compromising productivity. Let’s explore strategies for embedding security into the development process in a harmonious manner, thereby enhancing productivity without compromising on security.  The DevSecOps Imperative The adoption of DevSecOps marks a significant shift in how organizations approach software development and security. By weaving security practices into the development and operations processes from the outset, DevSecOps seeks to ensure that security is not an afterthought but a fundamental component of product development. This approach not only accelerates the deployment of features but also significantly reduces the organizational risk associated with security vulnerabilities. Yet, achieving this delicate balance between rapid development and stringent security measures requires overcoming substantial obstacles.  Understanding Your Risk Portfolio The foundation of effective DevSecOps implementation lies in gaining a comprehensive understanding of the organization’s risk portfolio. This involves a thorough assessment of all software resources, including the codebase of applications and any open source or third-party dependencies. By integrating these assets into a centralized system, security teams can monitor security and compliance, ensuring that risks are identified and addressed promptly.  Automating Security Testing Automating security testing represents another cornerstone of effective DevSecOps. By embedding risk management policies directly into DevOps pipelines, organizations can shift the responsibility of initial security assessments away from developers, allowing them to focus on their core tasks while still ensuring that security is not compromised. This automation not only streamlines the security testing process but also ensures that vulnerabilities are promptly flagged to the security teams for further action.  Continuous Monitoring for Proactive Security Continuous monitoring is a critical component of DevSecOps, enabling organizations to maintain a vigilant watch over their repositories. By automatically triggering security tests upon any change in the codebase, this approach minimizes the need for developer intervention, ensuring that security checks are an integral, ongoing part of the development life cycle.  Simplifying the Developer Experience To truly integrate security into the development process, it is imperative to simplify the developer experience. This can be achieved by enabling developers to access information about security vulnerabilities within their familiar working environments, such as the integrated development environment (IDE) or bug-tracking tools. By making security an intrinsic aspect of their daily tasks, developers are more likely to embrace these practices, reducing the friction associated with external security mandates.  Conclusion The journey toward a successful DevSecOps implementation is complex, requiring a strategic approach to overcome the myriad challenges it presents. By fostering a culture of collaboration, automating security processes, and integrating security into the fabric of development workflows, organizations can mitigate risks without sacrificing speed or innovation. The goal of DevSecOps is not to hinder development with security but to empower developers with the tools and processes needed to build secure, high-quality software efficiently. By adopting these principles, companies can move beyond the “inmates running the asylum” paradigm to a more balanced, productive, and secure software development life cycle. The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of his employer. source

How Developers Drive Security Professionals Crazy Read More »

GenAI’s Impact on Cybersecurity

Generative AI adoption is becoming ubiquitous as more software developers include the capability in their applications and users flock to sites like OpenAI to boost productivity. Meanwhile, threat actors are using the technology to accelerate the number and frequency of attacks.   “GenAI is revolutionizing both offense and defense in cybersecurity. On the positive side, it enhances threat detection, anomaly analysis and automation of security tasks. However, it also poses risks, as attackers are now using GenAI to craft more sophisticated and targeted attacks [such as] AI-generated phishing,” says Timothy Bates, AI, cybersecurity, blockchain & XR professor of practice at University of Michigan and former Lenovo CTO. “If your company hasn’t updated its security policies to include GenAI, it’s time to act.”  According to James Arlen, CISO at data and AI platform company Aiven, GenAI’s impact is proportional to its usage.   “If a bad actor uses GenAI, you’ll get bad results for you. If a good actor uses GenAI wisely you’ll get good results. And then there is the giant middle ground of bad actors just doing dumb things [like] poisoning the well and nominally good actors with the best of intentions doing unwise things,” says Arlen. “I think the net result is just acceleration. The direction hasn’t changed, it’s still an arms race, but now it’s an arms race with a turbo button.”  Related:2024 InformationWeek US IT Salary Report: Profits, Layoffs, and the Continued Rise of AI The Threat Is Real and Growing GenAI is both a blessing and a curse when it comes to cybersecurity.  “On the one hand, the incorporation of AI into security tools and technologies has greatly enhanced vendor tooling to provide better threat detection and response through AI-driven features that can analyze vast amounts of data, far quicker than ever before, to identify patterns and anomalies that signal cyber threats,” says Erik Avakian, technical counselor at Info-Tech Research Group. “These new features can help predict new attack vectors, detect malware, vulnerabilities, phishing patterns and other attacks in real-time, including automating the response to certain cyber incidents. This greatly enhances our incident response processes by reducing response times and allowing our security analysts to focus on other and more complex tasks.”   Meanwhile, hackers and hacking groups have already incorporated AI and large language modeling (LLM) capabilities to carry out incredibly sophisticated attacks, such as next-generation phishing and social engineering attacks using deep fakes.   “The incorporation of voice impersonation and personalized content through ‘deepfake’ attacks via AI-generated videos, voices or images make these attacks particularly harder to detect and defend against,” says Avakian. “GenAI can and is also being used by adversaries to create advanced malware that adapts to defenses and evades current detection systems.”  Related:Curtail Cloud Spend With These Strategies Pillar Security’s recent State of Attacks on GenAI report contains some sobering statistics about GenAI’s impact on cybersecurity:   90% of successful attacks resulted in sensitive data leakage.  20% of jail break attack attempts successfully bypassed GenAI application guardrails.  Adversaries require an average of just 42 seconds to execute an attack.   Attackers needed only five interactions, on average, to complete a successful attack using GenAI applications.   The attacks exploit vulnerabilities at every stage of interaction with GenAI systems, underscoring the need for comprehensive security measures. In addition, the attacks analyzed as part of Pillar Security’s research reveal an increase in both the frequency and complexity of prompt injection attacks, with users employing more sophisticated techniques and making persistent attempts to bypass safeguards.   “My biggest concern is the weaponization of GenAI — cybercriminals using AI to automate attacks, create fake identities or exploit zero-day vulnerabilities faster than ever before. The rise of AI-driven attacks means that attack surfaces are constantly evolving, making traditional defenses less effective,” says University of Michigan’s Bates. “To mitigate these risks, we’re focusing on AI-driven security solutions that can respond just as rapidly to emerging threats. This includes leveraging behavioral analytics, AI-powered firewalls, and machine learning algorithms that can predict potential breaches.”  Related:Forrester Speaker Sneak Peek: Analyst Jayesh Chaurasia to Talk AI Data Readiness In the case of deepfakes, Josh Bartolomie, VP of global threat services at email threat and defense solution provider Cofense recommends an out-of-band communication method to confirm the potentially fraudulent request, utilizing internal messaging services such as Slack, WhatsApp, or Microsoft Teams, or even establishing specific code words for specific types of requests or per executive leader.  And data usage should be governed.   “With the increasing use of GenAI, employees may look to leverage this technology to make their job easier and faster. However, in doing so, they can be disclosing corporate information to third party sources, including such things as source code, financial information, customer details [and] product insight,” says Bartolomie.  “The risk of this type of data being disclosed to third party AI services is high, as the totality of how the data is used can lead to a much broader data disclosure that could negatively impact that organization and their products [and] services.”  Casey Corcoran, field chief information security officer at cybersecurity services company Stratascale — an SHI company, says in addition to phishing campaigns and deep fakes, bad actors are using models that are trained to take advantage of weaknesses in biometric systems and clone persona biometrics that will bypass technical biometric controls.   “[M]y two biggest fears are: 1) that rapidly evolving attacks will overwhelm traditional controls and overpower the ability of humans to distinguish between true and false; and 2) breaking the ‘need to know’ and overall confidentiality and integrity of data through unmanaged data governance in GenAI use within organizations, including data and model poisoning,” says Corcoran.   Tal Zamir, CTO at advanced email and workspace security solutions provider Perception Point warns that attackers exploit vulnerabilities in GenAI-powered applications like chatbots, introducing new risks, including prompt injections. They also use the popularity of GenAI apps to spread malicious software, such as creating fake GenAI-themed Chrome extensions that steal data.  “Attackers leverage GenAI to automate tasks like building phishing pages and crafting hyper-targeted social engineering messages,

GenAI’s Impact on Cybersecurity Read More »

AI on the Road: The Auto Industry Sees the Promise

Generative AI is reshaping the future of the automotive industry. For industry leaders, this is not just some cutting-edge technology, but a strategic enabler poised to redefine the market landscape. With 79% of executives expecting significant AI-driven transformation within the next three years, harnessing GenAI is no longer optional but essential to remain competitive in a rapidly evolving sector.  As AI continues to make its mark, it transforms how vehicles are designed, secures them against evolving threats, and enhances the overall driving experience. From enabling cars to anticipate and respond to cyber risks to accelerating innovation in design, and creating more personalized driving experiences, AI is redefining the key aspects of automotive development and usage.  Stopping Security Breaches  With the automotive industry undergoing rapid transformation, the cybersecurity risks it encounters are also increasing and becoming more complex. High-profile breaches, such as the Pandora ransomware attack on a major German car manufacturer in March 2022, highlight the urgent need for more advanced security strategies. The attackers compromised 1.4TB of sensitive data, including purchase orders, technical diagrams, and internal emails, exposing vulnerabilities within the sector.   AI-driven systems, including predictive and generative models, process vast amounts of data in real-time, making them indispensable for detecting unusual patterns that signal potential attacks. By continuously learning from past threats and dynamic adaptation to emerging risks, AI-driven systems detect intrusions and work alongside rule-based or supervised models to predict outcomes and simulate attack scenarios for training purposes. These include isolating compromised nodes, blocking malicious IP addresses, and mitigating threats before they escalate. For this reason, 82% of IT decision-makers intend to invest in AI-driven cybersecurity within the next two years.  GenAI’s abilities to generate data and patterns empower organizations to stay ahead of cybercriminals by anticipating attacks before they occur. A prime example is a leading automotive manufacturer that has significantly improved the security of its vehicle-to-everything (V2X) communication systems by leveraging generative models to simulate various network attack scenarios. This approach allows the network’s defensive mechanisms to be trained and tested against imminent breaches.   By utilizing models such as variational autoencoders (VAEs) and generative adversarial networks (GANs), which can generate synthetic attack data for simulations, the company could mimic various cyberattack scenarios. This allowed them to detect and mitigate up to 90% of simulated attacks during the testing phases, demonstrating a robust improvement in the overall security posture.   Redefining Automotive Design  Generative AI is ushering in a new wave of innovation in automotive architecture, transforming vehicle design with cutting-edge capabilities. By leveraging generative design techniques, AI-driven systems can automatically produce multiple design iterations, enabling manufacturers to identify the most efficient and effective solutions. GenAI design optimizes engineering and aesthetic decisions, helping manufacturers reduce development time and costs by up to 20%, according to Precedence Research, giving companies a competitive edge in expediting time-to-market.  Toyota Research Institute has integrated a generative AI tool that enables designers to leap from a text description to design sketches by specifying stylistic attributes such as “sleek,” “SUV-like,” and “modern.” Tackling the challenges where designs frequently fell short of meeting engineering requirements, this tool integrates both aesthetic and engineering requirements. That allows designers and engineers to collaborate more effectively while ensuring that the final designs meet critical technical specifications. By bridging the gap between creative and engineering teams, companies can ensure that final designs meet essential specifications while enhancing both the speed and quality of design iterations, enabling faster and more efficient innovation.  A More Connected and Personalized Driver Experience   Original equipment manufacturers are transforming the customer experience with GenAI in an increasingly demanding market. Unlike traditional voice command systems that rely on static and pre-programmed responses, AI-powered voice technology offers dynamic, natural conversations. Integrated into vehicles, GenAI enhances GPS navigation, entertainment systems, and other in-car functionalities, allowing drivers to interact meaningfully with their vehicle’s AI assistant.   Volkswagen, for example, became the first automotive manufacturer to integrate ChatGPT into its voice assistant IDA. This offers drivers an AI-powered system that manages everything from infotainment to navigation and answers general knowledge questions.   As GenAI continues to become more advanced, delivering an exceptional driver experience is now a key differentiator for manufacturers looking to stay competitive. Despite the significant advancements in leveraging AI to enhance customer interactions, many original equipment manufacturers (OEMs) struggle to meet customer expectations. A recent Boston Consulting Group study revealed that, while the quality of the car-buying experience is the most critical decision factor for many customers, only 52% of customers say they are completely satisfied with their most recent car-buying experience. This underscores the need for OEMs to refine the integration of AI-driven systems further to enhance both the purchasing and ownership experience.  source

AI on the Road: The Auto Industry Sees the Promise Read More »

How ‘Cheap Fakes’ Exploit Our Psychological Vulnerabilities

At a time when sophisticated AI tools such as deepfakes are being deployed by cybercriminals on an increasingly large scale, it’s easy to overlook other forms of deception and manipulation that don’t make as many headlines. From mislabeled media to selectively edited videos, images, and audio files, there are plenty of “cheap fakes” that still fool people into trusting them every day.  We’ve entered an era in which employees can no longer trust their own eyes or ears, so the ability to identify suspicious activity and accurately assess any media’s legitimacy has never been more important. As a result, the core principle that IT professionals must emphasize is verify before you trust. That overarching theme will underpin every human response to the threat landscape moving forward.  It’s also vital for IT teams to build cybersecurity awareness training programs around the full range of attack vectors. While AI has altered the cyberthreat landscape, cybercriminals will continue using cheap fakes, which require less sophisticated technology and fewer resources. But IT leaders shouldn’t confuse the accessibility of cheap fakes with ineffectiveness: They remain extremely potent tools for deceiving employees and infiltrating companies.   How Do Cheap Fakes Differ From Deepfakes?  Related:2024 Cyber Resilience Strategy Report: CISOs Battle Attacks, Disasters, AI … and Dust While deepfakes use AI to create or alter video and audio content, cheap fakes rely on editing and mislabeling media to create a false impression. An example of a deepfake was a robocall that impersonated President Joe Biden’s voice before the 2024 New Hampshire Democratic primary. But an example of a cheap fake was an edited 2020 video of former House Speaker Nancy Pelosi speaking in a slurred and awkward way. The video purported to show that Pelosi was intoxicated, but it had been slowed down to create that false impression.   Misinformation in politics is only part of the problem. There are countless ways cybercriminals can use cheap fakes to deceive people, and IT teams must be aware of how these tactics are deployed. Bad actors can publish and mislabel a real video clip to suggest that it occurred at a different time or place. They can use software like Photoshop or Final Cut to edit images and videos directly. They can combine separate videos or audio recordings to create the illusion of interactions and events that never happened. IT teams must understand that all of these methods can be used to deceive employees and manipulate them into making a mistake. For example, cybercriminals can send employees fake content from a software company that instructs them to change security settings or fool them into clicking on a malicious link with a doctored headline.   Related:Juliet Okafor Highlights Ways to Maintain Cyber Resiliency Because cheap fakes are so easy to make, cybercriminals of all skill levels are capable of experimenting with them on a large scale. Many cheap fakes reframe or alter authentic content, which gives them a veneer of legitimacy and makes it easier for cybercriminals to convince people that fraudulent content is real. Given all the ways cybercriminals can deploy cheap fakes — and their continued reliance on these attacks in many contexts — it’s clear that IT leaders must make them a priority in their awareness training programs.   Why are Cheap Fakes So Effective?   Cheap fakes exploit a range of psychological vulnerabilities, like fear, greed, and curiosity. These vulnerabilities make social engineering attacks prevalent across the board — over two-thirds of data breaches involve a human element — but cheap fakes are particularly effective at leveraging them. This is because many people are unable to identify manipulated media, particularly when it aligns with their preconceptions and existing biases.  According to a study published in Science, false news spreads much faster than accurate information on social media. Researchers found several explanations for this phenomenon: false news tends to be more novel than the truth, and the stories elicited “fear, disgust, and surprise in replies.” Cheap fakes rely on these emotions to spread quickly and capture victims’ attention — they create inflammatory imagery, aim to increase political and social division, and often present fragments of authentic content to produce the illusion of legitimacy.   Related:Next Steps to Secure Open Banking Beyond Regulatory Compliance While deepfakes are rapidly improving and becoming easier to create, a 2024 study found that cheap fakes “can be at least as credible as more sophisticated forms of artificial intelligence-driven audiovisual fabrication.” This is why the study reports that cheap fakes are still used more extensively than deepfakes. While this may not continue to be the case as deepfakes become easier to make, IT leaders have to make sure that employees know how to resist both forms of deception.   Preparing the Workforce To Identify Cheap Fakes  Cybercriminals are adept at exploiting psychological weaknesses, and they recognize that cheap fakes are among the most powerful weapons they have for deceiving and manipulating people. This is because cheap fakes have a long track record of successfully fooling victims, even though they’re easier and less expensive to produce than AI-generated media. Cheap fakes can also augment more advanced AI cyberattacks by providing false information that promotes and reinforces deepfake content.   At a time when cheap fakes and deepfakes are rapidly proliferating, IT teams must emphasize a core principle of cybersecurity: Verify before you trust. Employees should be taught to doubt their initial reactions to digital content, particularly when that content is sensational, coercive, or divisive. Employee training is one of the top factors in mitigating the financial damage caused by data breaches, and it’s among the first investments companies make after they suffer a breach. But cybersecurity should never be reactive. All employees have to be aware of how cybercriminals are capable of using their psychological vulnerabilities against them with potent tools like cheapfakes.  At a time when cyberattacks are on the rise, the cost of data breaches is surging, and social engineering tactics like phishing remain the most common initial attack vectors, the development of a robust awareness training program is critical. These

How ‘Cheap Fakes’ Exploit Our Psychological Vulnerabilities Read More »

How IT Can Show Business Value From GenAI Investments

As IT leaders, we’re facing increasing pressure to prove that our generative AI investments translate into measurable and meaningful business outcomes. It’s not enough to adopt the latest cutting-edge technology; we have a responsibility to show that AI delivers tangible results that directly support our business objectives.   To truly maximize ROI from GenAI, IT leaders need to take a strategic approach — one that seamlessly integrates AI into business operations, aligns with organizational goals, and generates quantifiable outcomes. Let’s explore advanced strategies for overcoming GenAI implementation challenges, integrating AI with existing systems, and measuring ROI effectively.  Key Challenges in Implementing GenAI  Integrating GenAI into enterprise systems isn’t always straightforward. There are several hurdles IT leaders face, especially surrounding data and system complexity.   Data governance and infrastructure. AI is only as good as the data it’s trained on. Strong data governance enforces better accuracy and compliance, especially when AI models are trained on vast, unstructured data sets. Building AI-friendly infrastructure that can handle both the scale and complexity of AI data pipelines is another challenge, as these systems must be resilient and adaptable.  Related:IT Pros Love, Fear, and Revere AI: The 2024 State of AI Report Model accuracy and “hallucinations.” GenAI models can produce non-deterministic results, sometimes generating content that is inaccurate or entirely fabricated. Unlike traditional software with clear input-output relationships that can be unit-tested, GenAI models require a different approach to validation. This issue introduces risks that must be carefully managed through model testing, fine-tuning, and human-in-the-loop feedback.  Security, privacy, and legal concerns. The widespread use of publicly and privately sourced data in training GenAI models raises critical security and legal questions. Enterprises must navigate evolving legal landscapes. Data privacy and security concerns must also be addressed to avoid potential breaches or legal issues, especially when dealing with heavily regulated industries like finance or healthcare.  Strategies for Measuring and Maximizing AI ROI  Adopting a comprehensive, metrics-driven approach to AI implementation is necessary for assessing your investment’s business impact. To ensure GenAI delivers meaningful business results, here are some effective strategies:  Define high-impact use cases and objectives: Start with clear, measurable objectives that align with core business priorities. Whether it’s improving operational efficiency or streamlining customer support, identifying use cases with direct business relevance ensures AI projects are focused and impactful.  Quantify both tangible and intangible benefits: Beyond immediate cost savings, GenAI drives value through intangible benefits like improved decision-making or customer satisfaction. Quantifying these benefits gives a fuller picture of the overall ROI.  Focus on getting the use case right, before optimizing costs: LLMs are still evolving. It is recommended that you first use the best model (likely most expensive), prove that the LLM can achieve the end goal, and then identify ways to reduce cost to serve that use case. This will make sure that the business need is not left unmet.  Run pilot programs before full rollout: Test AI in controlled environments first to validate use cases and refine your ROI model. Pilot programs allow organizations to learn, iterate, and de-risk before full-scale deployment, as well as pinpoint areas where AI delivers the greatest value, learn, iterate, and de-risk before full-scale deployment.  Track and optimize costs throughout the lifecycle: One of the most overlooked elements of AI ROI is the hidden costs of data preparation, integration, and maintenance that can spiral if left unchecked. IT leaders should continuously monitor expenses related to infrastructure, data management, training, and human resources.   Continuous monitoring and feedback: AI performance should be tracked continuously against KPIs and adjusted based on real-world data. Regular feedback loops allow for continuous fine-tuning, ensuring your investment aligns with evolving business needs and delivers sustained value.   Related:Sidney Madison Prescott Discusses GenAI’s Potential to Transform Enterprise Operations Overcoming GenAI Implementation Roadblocks  Related:Sidney Madison Prescott Discusses GenAI’s Potential to Transform Enterprise Operations Successful GenAI implementations depend on more than adopting the right technology—they require an approach that maximizes value while minimizing risk. For most IT leaders, success depends on addressing challenges like data quality, model reliability, and organizational alignment. Here’s how to overcome common implementation hurdles:   Align AI with high-impact business goals. GenAI projects should directly support business objectives and deliver sustainable value like streamlining operations, cutting costs, or generating new revenue streams. Define priorities based on their impact and feasibility.  Prioritize data integrity. Poor data quality prevents effective AI. Take time to establish data governance protocols from the start to manage privacy, compliance, and integrity while minimizing risk tied to faulty data.  Start with pilot projects. Pilot projects allow you to test and iterate real-world impact before committing to large-scale rollouts. They offer valuable insights and mitigate risk.  Monitor and measure continuously. Ongoing performance tracking ensures AI remains aligned with evolving business goals. Continuous adjustments are key for maximizing long-term value.  source

How IT Can Show Business Value From GenAI Investments Read More »

The AI-Driven Security Operations Platform for the Modern SOC E-Book

“The AI-Driven Security Operations Platform for the Modern SOC E-Book“ Don’t Imagine the Future. Deploy It. Unleash ML intelligence on your SOC. Cybersecurity has a threat remediation problem. The proliferation of applications, workloads, microservices and users is quickly expanding the digital attack surface. It’s generating vast amounts of data faster than you can detect and protect. As such, the cybersecurity industry needs to continually innovate to stay ahead of evolving challenges. Cortex® XSIAM™ embraces an AI-driven architecture in profound ways. It’s transforming SecOps by leaning into AI in areas where machine learning can best augment teams. XSIAM is the realization of our vision to create the autonomous security platform of the future. It enables dramatically better security with near-real-time detection and response. It allows the SOC team to be proactive instead of reactive. And it frees analysts to focus on the critical issues, like unusual behavior and anomalies. Don’t get bogged down in outdated methods. Download this e-book for a detailed look under the hood of XSIAM. Offered Free by: Palo Alto Networks See All Resources from: Palo Alto Networks Thank you This download should complete shortly. If the resource doesn’t automatically download, please, click here. Thank you This download should complete shortly. If the resource doesn’t automatically download, please, click here. Thank you This download should complete shortly. If the resource doesn’t automatically download, please, click here. source

The AI-Driven Security Operations Platform for the Modern SOC E-Book Read More »

Retrieval-Augmented Generation Makes AI Smarter

A core problem with artificial intelligence is that it’s, well, artificial. Generative AI systems and large language models (LLMs) rely on statistical methods rather than intrinsic knowledge to predict text outcomes. As a result, they sometimes spin up lies, errors and hallucinations.  This lack of real-world knowledge has repercussions that extend across domains and industries. The problems can be particularly painful in areas such as finance, healthcare, law, and customer service. Bad results can lead to bad business decisions, irate customers, and wasted money.  As a result, organizations are turning to retrieval-augmented generation (RAG). According to a Deloitte report, upwards of 70% of enterprises are now deploying the framework to augment LLMs. “It is essential for realizing the full benefits of AI and managing costs,” says Jatin Dave, managing director of AI and data at Deloitte.  RAG’s appeal is that it supports faster and more reliable decision-making. It also dials up transparency and energy savings. As the competitive business landscape intensifies and AI becomes a tool that differentiates organizations, RAG is emerging as an important tool in the AI arsenal.   Says Scott Likens, US and Global Chief AI Engineering Officer at PwC: “RAG is revolutionizing AI by combining the precision of retrieval models with the creativity of generative models.”  Related:IT Pros Love, Fear, and Revere AI: The 2024 State of AI Report RAG Matters  What makes RAG so powerful is that it combines a trained generative AI system with real-time information, typically from a separate database. “This synergy enhances everything from customer support to content personalization, providing more accurate and context-aware interactions,” Likens explains.  RAG increases the odds that results are accurate and up to date by checking external sources before serving up a response to a query. It also introduces greater transparency to models by generating links that a human can check for accuracy. Then there’s the fact that RAG can trim the time required to obtain information, reduce compute overhead and conserve energy.  “RAG enables searches through a very large number of documents without the need to connect to the LLM during the search process,” Dave points out. “A RAG search is also faster than an LLM processing tokens. This leads to faster response times from the AI system.”  This makes RAG particularly valuable for handling diverse types of data from different sources, including product catalogs, technical images, call transcripts, policy documents, marketing data, and legal contracts. What’s more, the technology is evolving rapidly, Dave says. RAG is increasingly equipped to manage larger datasets and operate within complex cloud frameworks.   Related:Keynote Sneak Peek: Forrester Analyst Details Align by Design and AI Explainability For example, RAG can combine generalized medical or epidemiological data held in an LLM with specific patient information to deliver more accurate and targeted recommendations. It can connect a customer using a chatbot with an inventory system or third-party logistics and delivery data to provide an immediate update about a delayed shipment. RAG can also personalize marketing and product recommendations, based on past clicks or purchases.  The result is a higher level of personalization and contextualization. “RAG can tailor language model outputs to specific enterprise knowledge and enhance the LLMs core capabilities,” Likens says. Yet all of this doesn’t come without a string attached. “RAG adds complexity to knowledge management. It requires dealing with data lineage, multiple versions of the same source, and the spread of data across different business units and applications, “he adds.  Beyond the Chatbot  Designing an effective RAG framework can prove challenging. Likens says that on the technology side, several components are foundational. This includes vector databases, orchestration, a document processing tool, and a scaled data processing pipelines.”  Related:Sidney Madison Prescott Discusses GenAI’s Potential to Transform Enterprise Operations It’s also important to adopt tools that streamline RAG development and improve the accuracy of information, Likens says. These include hybrid retrieval solutions, experiment tracking and data annotation tooling. More advanced tools, such as LLMs, vector databases, data pipeline and compute workflow tools are typically available through hyperscalers and SaaS providers  “There is not a one-size-fits-all RAG pipeline, so there will always be a need to tailor the technology to the specific use case,” Likens says.  Equally important is mapping out a data and information pipeline. Chunking — breaking data into smaller strings that an LLM can process — is essential. There’s also a need to fine-tune the language model so that it can contextualize the RAG data, and it’s important to adapt a model’s weights during post-training processes.  “People typically focus on the LLM model, but it’s the database that often causes the most problems because, unlike humans, LLMs aren’t good with domain knowledge,” explains Ben Elliot, a research vice president at Gartner. “A person reads something and knows it makes sense without understanding every detail.”  Elliott says that a focus on metadata and keeping humans in the loop is critical. Typically, this involves tasks like rank ordering and grounding that anchor a system in the real world — and increase the odds that AI outputs are meaningful and contextually relevant. Although there’s no way to hit 100% accuracy with RAG, the right mix of technology and processes — including using footnoting so that humans can review output — boosts the odds that an LLM will deliver value.  Designs on Data  There’s no single way to approach RAG. It’s important to experiment because a system might not initially generate the right information or response for an appropriate reason, Likens says. It’s also wise to pay close attention to data biases and ethical considerations, including data privacy. Unstructured data magnifies the risks. “It may contain personally identifiable information (PII) or other sensitive information,” he notes.  Organizations that get the equation right take LLMs to a more functional and viable level. They’re able to achieve more with fewer resources. This translates into a more agile and flexible Gen AI framework with less fine tuning. “RAG equals the playing field between ultra-large language models that exceed 100 billion parameter and more compact models of 8

Retrieval-Augmented Generation Makes AI Smarter Read More »

Next Steps to Secure Open Banking Beyond Regulatory Compliance

The concept of open banking, the ability for customers to share their financial information easily with third parties, is gaining momentum in the United States though in a piecemeal way. The Consumer Financial Protection Bureau recently finalized rules for financial institutions to offer open banking securely. It is one of the latest steps to further define how banks, credit card issuers, and other financial institutions should proceed forward in this space. Open banking already has footing in Europe. Meanwhile countries such as Canada, Japan, and Singapore have yet to formally adopt it, though their policymakers are exploring open banking frameworks. Though there is no single cohesive regulatory policy in the US yet, securing financial information will be paramount as open banking is made available. What is the balance of making financial information available to authorized parties versus keeping financial data secure? For this episode of DOS Won’t Hunt, Ben Shorten (upper left in video), Accenture’s finance, risk and compliance lead for banking and capital markets in North America; Adam Preis (lower right), director of product and solution marketing with Ping Identity; and Fernando Luege (upper right), CTO with Fresh Consulting, came together to discuss security hurdles and the way ahead for open banking. Related:2024 Cyber Resilience Strategy Report: CISOs Battle Attacks, Disasters, AI … and Dust Listen to the full podcast here. source

Next Steps to Secure Open Banking Beyond Regulatory Compliance Read More »