How to protect biometric systems against deepfakes

Mario Cesar Santos, VP Global Solutions at Aware, discusses the rise of deepfakes and how they can be exploited by malicious actors.

The rise of deepfakes, videos and images created by Artificial Intelligence, significantly threatens people and businesses. Fraudsters and cybercriminals are using data exposed in breaches to generate deepfakes that have the potential to cause widespread damage.

By trading on the apparent credibility and authenticity of this fraudulent media, deepfakes can deceive, manipulate and defraud organizations and their customers. Understanding how deepfakes can be used against your customers is crucial for companies to develop effective strategies to mitigate their impact and protect against their misuse.

The term ‘deepfake’ combines ‘Deep Learning’ and ‘fake’. Although it has no universally accepted definition, a deepfake generally refers to existing content in which one person is replaced with the likeness of another. Essentially, a deepfake is content, such as a photo, audio or video, that has been manipulated by Machine Learning (ML) and Artificial Intelligence (AI) to appear as something it is not.

Although deepfakes have garnered attention for their entertainment and creative value, they also present serious risks to businesses. Below are some of the ways malicious actors can exploit this type of online fraud:

  • Fraudulent content: One of the most immediate threats of deepfakes is their potential use in creating fraudulent content. Malicious actors can impersonate individuals in videos, making it seem like they are saying or doing things they never did. On a personal or business level, this can be used to spread false information and damage the reputations of people and brands.
  • Social engineering attacks: Deepfakes can also be used in social engineering attacks, where attackers manipulate individuals to disclose confidential information or perform harmful actions. For example, a deepfake video could impersonate a CEO instructing an employee to transfer funds to a fraudulent account.
  • Disinformation campaigns: Deepfakes can be weaponized in disinformation campaigns to manipulate public opinion. By creating convincing videos of political figures or other public figures saying or doing things they did not do, malicious actors can sow chaos and confusion.
  • Identity theft: Deepfakes can be used to steal someone’s identity by creating fake videos or images that appear to be of the individual. This could be used to access confidential accounts or commit other forms of fraud.
  • Sabotage and espionage: Deepfakes can also serve sabotage or espionage purposes. For example, a deepfake video can be used to manipulate a company’s stock price or damage its reputation.

Individuals and organizations need to be aware of these risks and take the necessary steps to protect themselves, such as using strong authentication methods (like biometrics) and being highly cautious with this type of manipulated media. For consumers, this means choosing vendors and solutions that account for these threats and offer robust protection against them. For business leaders, it is essential to weigh these concerns and act to protect customers and the business.

One such action is integrating biometric authentication technology into existing solutions and offerings. How do biometrics help defend against deepfakes? Here are the main possibilities:

  • Liveness detection: A crucial component of biometric authentication that helps ensure the authenticity of captured biometric data. The technology detects whether a biometric sample, such as a facial image or voice recording, comes from a living person or from a reproduction, manipulation or deepfake. Liveness detection algorithms analyze factors such as natural movement in a facial image or physiological cues in a voice recording to determine whether the sample comes from a live subject, and they also protect against injection and emulation attacks. Against deepfake threats, liveness detection is essential to prevent malicious actors from using static images or pre-recorded videos to spoof biometric authentication systems, preserving the integrity of the authentication process (a naive motion-based sketch appears after this list).
  • Behavioral biometrics: This involves analyzing patterns in an individual’s behavior, such as typing speed, mouse movements and swipe patterns on a touchscreen device. These behavioral patterns are unique to each individual and can be used to verify identity. Applied to deepfake detection, behavioral biometrics can help identify anomalies in user behavior that may indicate a video or image has been manipulated (see the keystroke-timing sketch after this list).
  • Voice recognition: By analyzing aspects of a person’s voice, such as volume, tone and cadence, voice recognition systems can verify a claimed identity. In the context of deepfake detection, this method can help identify unnatural or inconsistent speech patterns that may indicate a video or audio recording has been manipulated (see the voice-similarity sketch after this list).
  • Multimodal biometrics: This involves combining multiple biometric authentication methods to increase security. Using facial recognition, voice recognition and behavioral biometrics together, for example, yields a more robust defense against deepfake threats: by requiring multiple forms of biometric authentication, these systems force malicious actors to produce a convincing deepfake for every modality at once (see the score-fusion sketch after this list).
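
To make the liveness idea concrete, below is a minimal Python sketch of a naive motion-based liveness heuristic using OpenCV. It flags a capture as suspicious when consecutive camera frames show almost no natural motion, as a printed photo or paused video replay would. The frame count and threshold are illustrative assumptions; commercial liveness detection relies on far more sophisticated passive and active analysis.

# Minimal sketch of a naive motion-based liveness heuristic using OpenCV.
# The threshold and frame count are illustrative assumptions only.
import cv2
import numpy as np

def naive_liveness_check(frames, motion_threshold=2.0):
    """Flag a capture as suspicious if consecutive frames show almost
    no natural motion, as a static photo replay would."""
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
        # Mean absolute pixel difference between consecutive frames
        diffs.append(np.mean(cv2.absdiff(prev_gray, curr_gray)))
    # A live face exhibits small involuntary movements; a printed photo
    # held to the camera tends to produce near-zero frame-to-frame motion.
    return float(np.mean(diffs)) > motion_threshold

# Example: sample a short burst of frames (assumes a working default camera)
cap = cv2.VideoCapture(0)
frames = [cap.read()[1] for _ in range(30)]
cap.release()
print("live" if naive_liveness_check(frames) else "suspicious")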
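
The behavioral case can be illustrated with keystroke dynamics. The sketch below compares the typing rhythm of a new sample against an enrolled timing profile using the mean absolute deviation between inter-key intervals. The feature choice, tolerance and example timings are assumptions for illustration, not a production behavioral-biometrics model.

# Minimal sketch of keystroke-dynamics matching. The enrolled profile,
# feature choice (inter-key intervals) and tolerance are illustrative.
from statistics import mean

def inter_key_intervals(timestamps):
    """Convert key-press timestamps (seconds) into inter-key intervals."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def matches_profile(sample_times, enrolled_intervals, tolerance=0.05):
    """Compare a typing sample against an enrolled timing profile using
    mean absolute deviation between corresponding intervals."""
    sample_intervals = inter_key_intervals(sample_times)
    if len(sample_intervals) != len(enrolled_intervals):
        return False
    deviation = mean(abs(s - e)
                     for s, e in zip(sample_intervals, enrolled_intervals))
    return deviation <= tolerance

# Enrolled rhythm for a fixed passphrase, and a new typing sample
enrolled = [0.12, 0.18, 0.10, 0.22]
sample = [0.00, 0.13, 0.30, 0.41, 0.62]  # intervals: 0.13, 0.17, 0.11, 0.21
print(matches_profile(sample, enrolled))  # True: rhythm is consistent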
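
For voice, a rough notion of speaker similarity can be sketched by summarizing each recording as a mean MFCC vector (here via the librosa library) and comparing the two with cosine similarity. The file names and threshold are hypothetical, and real speaker verification systems use trained embedding models rather than raw MFCC averages.

# Minimal sketch of voice-sample comparison via MFCC features and cosine
# similarity. File names and the threshold are hypothetical assumptions.
import librosa
import numpy as np

def voice_embedding(path):
    """Load audio and summarize it as the mean MFCC vector over time."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def same_speaker(path_a, path_b, threshold=0.9):
    """Accept only if the two recordings' MFCC summaries point the
    same way in feature space."""
    a, b = voice_embedding(path_a), voice_embedding(path_b)
    similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return similarity >= threshold

# Hypothetical files: an enrolled sample and a new login attempt
print(same_speaker("enrolled.wav", "attempt.wav"))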
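
Finally, multimodal systems commonly combine per-modality match scores through score-level fusion. The sketch below uses a weighted sum with an acceptance threshold; the weights and threshold are illustrative assumptions that a deployed system would calibrate against measured error rates. Note how a convincing face deepfake alone fails to clear the bar when the voice and keystroke scores are weak.

# Minimal sketch of multimodal score-level fusion. Weights and the decision
# threshold are illustrative assumptions, not calibrated values.
def fused_decision(scores, weights, threshold=0.75):
    """Combine per-modality match scores (each in [0, 1]) into one
    weighted score and accept only if it clears the threshold."""
    fused = sum(weights[m] * scores[m] for m in scores)
    return fused >= threshold, fused

# An attacker may defeat one modality (a convincing face deepfake) while
# failing the others (voice, typing rhythm), dragging the fused score down.
scores = {"face": 0.95, "voice": 0.40, "keystroke": 0.55}
weights = {"face": 0.4, "voice": 0.35, "keystroke": 0.25}
accepted, fused = fused_decision(scores, weights)
print(accepted, round(fused, 2))  # False, 0.66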