How to Detect Deepfakes
Deepfakes are a clear and present danger to businesses. According to Markets and Markets, the deepfake market will balloon from $564 million in 2024 to $5.1 billion by 2030, a 44.5% compound annual growth rate. Deepfakes enable several types of threats, including corporate sabotage, enhanced social engineering attacks, and identity spoofing. Most commonly, bad actors use deepfakes to increase the effectiveness of social engineering.

“It’s no secret that deep fakes are a significant concern for businesses and individuals. With the advancement of AI-generated fakes, organizations must spot basic manipulations and stay ahead of techniques that convincingly mimic facial movements and voice patterns,” says Chris Borkenhagen, chief digital officer and chief information security officer at identity verification and fraud prevention provider AuthenticID, in an email interview. “Detecting deep fakes requires advanced machine learning models, behavioral analysis, and forensic tools to identify subtle inconsistencies in images, videos, and audio. Mismatches in lighting, shadows, and eye movements can often expose even the most convincing deep fakes.”

Organizations should leverage visual and text fraud algorithms that use deep learning to detect anomalies in the data underpinning deepfakes. This approach should go beyond surface-level detection to analyze content structure for signs of manipulation.

“The responsibility for detecting and mitigating deep fake threats should be shared across the organization, with CISOs leading the way. They must equip their teams with the right tools and training to recognize deep fake threats,” Borkenhagen says. “However, CEO and board-level involvement is important, as deep fakes pose risks that extend beyond fraud. They can damage a brand’s reputation and compromise sensitive communications. Organizations must incorporate deep fake detection into their broader fraud prevention strategies and stay informed about the latest advancements in AI technologies and detection tools.”

As deepfakes become more sophisticated, organizations must be prepared with both advanced detection tools and comprehensive response strategies. “AI-powered solutions like Reality Defender and Sensity AI play a key role in detecting manipulated content by identifying subtle inconsistencies in visuals and audio,” says Ryan Waite, adjunct professor at Brigham Young University-Hawaii and VP of public affairs at digital advocacy firm Think Big. “Tools like FakeCatcher go further, analyzing physiological markers such as blood flow in the face to identify deep fakes. Amber Authenticate adds another layer of security by verifying the authenticity of media files through cryptographic techniques.” (A minimal sketch of that kind of cryptographic verification follows the checklist below.)

Deepfake detection should be a priority, with CISOs, data science teams, and legal departments working together to manage these technologies. In addition to detection, companies must implement a deepfake response strategy, Waite says. This involves:

- Having clear protocols for identifying, managing, and mitigating deepfakes.
- Training employees to recognize manipulated content.
- Making sure the C-suite understands the risks of impersonation, fraud, and reputational damage, and plans accordingly.
- Staying informed on evolving AI and deepfake legislation.
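Of the tools Waite lists, the cryptographic approach is the easiest to illustrate. The sketch below is not Amber Authenticate’s actual protocol; it is a generic, minimal Python illustration of the underlying idea, assuming a hypothetical file name and a placeholder key: record a keyed hash of a media file at capture time, then verify it later to prove the file has not been altered.

```python
import hashlib
import hmac

# Placeholder secret; a real system would use asymmetric signatures with
# keys held in an HSM or key service, never a hard-coded value.
SECRET_KEY = b"replace-with-securely-managed-key"

def fingerprint(path: str) -> bytes:
    """Compute an HMAC-SHA256 tag over a media file, read in 1 MB chunks."""
    mac = hmac.new(SECRET_KEY, digestmod=hashlib.sha256)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            mac.update(chunk)
    return mac.digest()

def verify(path: str, recorded_tag: bytes) -> bool:
    """True only if the file still matches the tag recorded at capture time."""
    return hmac.compare_digest(fingerprint(path), recorded_tag)

# At capture time: store the tag alongside (or apart from) the media.
tag = fingerprint("boardroom_call.mp4")  # hypothetical file

# Later: any edit to the file, including a deepfake face swap, changes the tag.
print("authentic" if verify("boardroom_call.mp4", tag) else "tampered")
```

Production systems typically sign the hash with a private key so that anyone holding the public key can verify it, and anchor tags in tamper-evident storage; the HMAC above simply keeps the sketch dependency-free.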
As regulatory frameworks develop, companies must be proactive in ensuring compliance and safeguarding their reputation. “Combining cutting-edge tools, a robust response strategy, and legislative awareness is the best defense against this growing threat,” Waite says.

How Deepfakes Facilitate Social Engineering

Threat actors use deepfakes in elaborate scams against businesses, leveraging synthetic videos, audio, and images to enhance social engineering attacks such as business email compromise (BEC) and phishing. AI has also made it remarkably easy to produce a deepfake and spread it far and wide, and there is a wealth of readily available tooling on the dark web.

“We have seen evidence of deepfake videos being used in virtual meetings and audio in voicemail or live conversations, deceiving targets into revealing sensitive information or clicking malicious links,” says Azeem Aleem, managing director of client leadership for EMEA and managing director of Northern Europe. Financial services firms are especially worried about AI- and generative-AI-powered fraud; Deloitte Insights shows a 700% rise in deepfake incidents in fintech in 2023.

Other examples of deepfake techniques include “vishing” (voice phishing), Zoom bombing, and biometric attacks.

“Hackers are now combining email and vishing with deepfake voice technology, enabling them to clone voices from just three seconds of recorded speech and conduct highly targeted social engineering fraud,” says Aleem. “This evolution makes it possible for attackers to impersonate C-level executives using their cloned voices, significantly enhancing their ability to breach corporate networks.”

Zoom bombing occurs when uninvited guests disrupt online meetings or when attackers impersonate trusted individuals to infiltrate them.

There are also biometric attacks. “Businesses frequently use biometric authentication systems, such as facial or voice recognition, for employee verification,” says Aleem. “However, deepfake technology has advanced to the point where it can deceive these systems to bypass customer verification processes, including commands like blinking or looking in specific directions.” (A sketch of one common countermeasure, a randomized liveness challenge, appears at the end of this section.)

According to accounts payable automation provider Medius, 53% of businesses in the US and UK have been targeted by a financial scam powered by deepfake technology, and 43% have fallen victim to such attacks.

“Beyond BEC, attackers use deepfakes to create convincing fake social media profiles and impersonate individuals in real-time conversations, making it easier to manipulate victims into compromising their security,” says Aleem. “It’s not necessarily targeted, but it does prey on natural vulnerabilities like human error and fear. As AI applications develop, deepfakes can also be produced to request profile changes with agents and to train voice bots to mimic IVRs. These deepfake voice techniques allow attackers to navigate IVR systems and steal basic account details, increasing the risk to organizations and their customers.”

The business risks are fraud, extortion, and market manipulation. “Deepfakes are disrupting various industries in profound ways. Call centers at banks and financial institutions are grappling with deepfake voice cloning attacks aimed at unauthorized account access and fraudulent transactions,” says Aleem.
“In the insurance sector, deepfakes are exploited to submit false evidence for fraudulent claims, causing significant financial losses. Media companies suffer reputational damage.”
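Aleem’s point about biometric systems being fooled even through commands like blinking is why many verification flows now randomize and time-box their challenges, so an attacker cannot pre-render the expected response. The skeleton below is a hedged illustration of that general idea, not any vendor’s implementation: the gesture detector is a stub where a real face-tracking model would go, and all names and thresholds are hypothetical.

```python
import secrets
import time

# Hypothetical challenge pool; real deployments draw from a larger,
# harder-to-pre-render set (head turns, spoken digits, depth cues, etc.).
CHALLENGES = ["blink twice", "turn head left", "smile", "look up"]

def detect_gesture(frames, expected: str) -> bool:
    """Stub: in practice, run a face-landmark model over the captured
    frames (e.g., eye-aspect-ratio analysis to detect blinks)."""
    raise NotImplementedError("plug in a face-tracking model here")

def liveness_check(capture_fn, rounds: int = 3, timeout_s: float = 5.0) -> bool:
    """Issue randomized, time-boxed challenges. A pre-rendered deepfake
    clip cannot anticipate which challenge comes next."""
    for _ in range(rounds):
        challenge = secrets.choice(CHALLENGES)  # unpredictable selection
        print(f"Please: {challenge}")
        start = time.monotonic()
        frames = capture_fn()  # capture a short clip from the camera
        elapsed = time.monotonic() - start
        # Reject slow responses: live face-reenactment tools add latency.
        if elapsed > timeout_s or not detect_gesture(frames, challenge):
            return False
    return True
```

Randomization and tight timing raise the bar but are not a silver bullet; real-time reenactment tools can sometimes follow live prompts, which is why the experts quoted above recommend layering forensic detection, response protocols, and employee training on top.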