Deepfakes, AI-generated synthetic media that convincingly replicate real people’s appearances and voices, pose significant challenges to organizations worldwide. These hyper-realistic videos and audio clips can be used for various malicious purposes, including misinformation, fraud, and reputational damage.
Deepfakes are not limited to video and audio; they can also be textual, spread through social media, or arrive via real-time feeds such as video conferencing. Recently, the Hong Kong office of a UK-based company was duped into sending £20M to criminals after an AI-generated video call impersonating senior executives.
Here are several strategies organizations can adopt to combat the threat of deepfakes.
- Implement Advanced Detection Technologies:
Organizations need to leverage advanced detection technologies to identify deepfakes. AI and machine learning models can be trained to spot inconsistencies that are invisible to the human eye but detectable by algorithms. Companies like Facebook and Microsoft have been developing such technologies, with Facebook’s Deepfake Detection Challenge and Microsoft’s Video Authenticator tool as notable examples.
Amber Authenticate, for instance, is a cryptographic tool that generates hashes at specified intervals during a video. If the video is later altered, the recomputed hashes no longer match, alerting the user that the content has been modified; the sketch below illustrates the underlying idea.
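As a rough illustration of interval hashing (not Amber Authenticate’s actual implementation), the Python sketch below hashes fixed-size segments of a recording and compares them with hashes captured at recording time. The segment size and the sample data are illustrative assumptions.

```python
import hashlib

def segment_hashes(data: bytes, segment_bytes: int = 1_048_576) -> list[str]:
    """Hash fixed-size segments of a recording's bytes.

    Any later edit changes the hash of at least one segment, so comparing the
    lists produced at capture time and at playback time reveals where tampering
    occurred. Real tools hash time-coded segments and anchor the results in
    signed or ledger-backed records; this only shows the idea.
    """
    return [
        hashlib.sha256(data[i:i + segment_bytes]).hexdigest()
        for i in range(0, len(data), segment_bytes)
    ]

# Simulate an original recording and a copy altered by a single byte.
captured = b"frame-data " * 500_000   # placeholder for real media bytes
received = bytearray(captured)
received[2_000_000] ^= 0xFF           # flip one byte deep in the file

original_hashes = segment_hashes(captured)
received_hashes = segment_hashes(bytes(received))
tampered = [i for i, (a, b) in enumerate(zip(original_hashes, received_hashes)) if a != b]
print("Modified segments:", tampered or "none detected")
```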
- Educate and Train Employees:
Raising awareness about deepfakes is crucial. Employees should be trained to recognize potential deepfake content and to understand the threats it poses, and regular training sessions and workshops can keep staff informed about the latest deepfake techniques and prevention strategies. A systematic approach to fostering a security-aware workplace culture makes employees more adept at identifying deepfakes.
- Strengthen Verification Processes:
Organizations should enhance their verification processes for sensitive communications and transactions. Implementing multi-factor authentication (MFA) can ensure that critical actions require additional layers of verification. For example, voice or video verification should be supplemented with biometric or password-based authentication to prevent deepfake-induced fraud.
Businesses need fundamental, well-communicated procedures based on “trust but verify” that all employees are aware of; the sketch after this section shows one way the “verify” step can be enforced for high-value requests.
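As one hedged illustration of such a gate, the sketch below requires a time-based one-time password before any high-value transfer is approved, no matter how convincing the call appears. It assumes the third-party pyotp library; the threshold and function names are hypothetical.

```python
import pyotp  # assumed third-party TOTP library (pip install pyotp)

HIGH_VALUE_THRESHOLD = 10_000  # illustrative limit, in GBP

def approve_transfer(amount: float, totp_secret: str, submitted_code: str) -> bool:
    """Gate high-value transfers behind a second, out-of-band factor.

    A request made over voice or video is never sufficient on its own: above
    the threshold, the requester must also supply a valid code from a
    registered authenticator app, a channel independent of the call itself.
    """
    if amount < HIGH_VALUE_THRESHOLD:
        return True  # low-value requests follow the normal workflow
    return pyotp.TOTP(totp_secret).verify(submitted_code)

# A large request made over a convincing video call is still refused unless
# the caller provides the current code from the registered device.
secret = pyotp.random_base32()
print(approve_transfer(20_000_000, secret, "000000"))                  # False: no valid code
print(approve_transfer(20_000_000, secret, pyotp.TOTP(secret).now()))  # True: code verified
```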
- Collaborate with Industry and Government:
Combating deepfakes effectively requires collaboration. Organizations should participate in industry forums and work with government agencies to share intelligence and develop best practices. The Cybersecurity and Infrastructure Security Agency (CISA) and other bodies offer resources and guidelines for addressing deepfake threats. Collaborative efforts can lead to the development of more robust detection and response strategies.
- Develop and Enforce Strict Content Policies:
Organizations, especially those in the media and entertainment industries, should develop stringent content policies to minimize the risk of deepfakes being disseminated through their platforms. Automated systems and human moderators can work together to detect and remove deepfake content quickly, as in the triage sketch below. Clear policies and enforcement mechanisms can deter the creation and spread of harmful deepfakes.
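A minimal sketch of such a pipeline follows. It assumes an upstream detector that emits a deepfake confidence score; the thresholds and names are illustrative, not a production moderation system.

```python
from dataclasses import dataclass
from queue import Queue

AUTO_REMOVE_THRESHOLD = 0.95   # illustrative thresholds, tuned per platform
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class Upload:
    content_id: str
    deepfake_score: float  # assumed output of an upstream detector, 0.0 to 1.0

review_queue: Queue = Queue()  # borderline items wait here for human moderators

def triage(upload: Upload) -> str:
    """Auto-remove the clearest cases, queue borderline ones, publish the rest."""
    if upload.deepfake_score >= AUTO_REMOVE_THRESHOLD:
        return "removed"
    if upload.deepfake_score >= HUMAN_REVIEW_THRESHOLD:
        review_queue.put(upload)
        return "held_for_review"
    return "published"

print(triage(Upload("clip-001", 0.97)))  # removed
print(triage(Upload("clip-002", 0.72)))  # held_for_review
print(triage(Upload("clip-003", 0.10)))  # published
```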
- Invest in Research and Development:
Continuous investment in research and development is essential for staying ahead of deepfake technologies. By funding research initiatives and collaborating with academic institutions, organizations can help develop new methods for detecting and countering deepfakes. Staying at the forefront of technological advancements ensures that organizations can effectively mitigate emerging threats.
- Utilise Blockchain Technology:
Implementing blockchain technology could be a viable solution. As a decentralized system, blockchain enables users to store information online without relying on centralized servers, and distributed ledgers are resistant to many of the security vulnerabilities that affect centralized data storage. While they currently cannot store large amounts of data, they are well-suited to storing hashes and electronic signatures, as the sketch below illustrates.
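The sketch below shows the hash-anchoring idea with an in-memory dictionary standing in for the ledger; a real deployment would write the same records to an actual blockchain through its client library, and the file label here is hypothetical.

```python
import hashlib
import time

# Stand-in for a distributed ledger: a real deployment would anchor these
# records on an actual blockchain via its client library, not in a dict.
ledger: dict[str, dict] = {}

def register(content: bytes, label: str) -> str:
    """Anchor a hash of the content (never the content itself) on the ledger."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = {"label": label, "timestamp": time.time()}
    return digest

def verify(content: bytes) -> bool:
    """Recompute the hash and check whether it was anchored earlier."""
    return hashlib.sha256(content).hexdigest() in ledger

video_bytes = b"...original media bytes..."   # placeholder for real file contents
register(video_bytes, "press_release.mp4")    # hypothetical label
print(verify(video_bytes))                    # True: matches the anchored hash
print(verify(video_bytes + b" tampered"))     # False: any alteration changes the hash
```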
Conclusion:
The threat of deepfakes is real and growing, posing significant risks to organizations across sectors. Advances in AI are giving bad actors ever better tools to create realistic content. By adopting the measures above, organizations can combat the threat of deepfakes effectively; proactive measures and continuous vigilance are key to safeguarding against the malicious use of deepfake technology.
Speak to our security team at Archway Securities to find out more about combating cybercrime.