Gartner: Deciphering the impact of AI and deepfakes on identity verification

In the digital landscape, where identities are woven into every aspect of our online interactions, the emergence of AI-driven deepfakes has become a disruptive force, challenging the very essence of identity verification. In navigating this ever-evolving terrain, CIOs and IT leaders must dissect the intricate interplay between emerging technologies and their profound impact on the integrity of identity verification processes.

Online identity verification today consists of two key steps. First, the user is asked to take a picture of their government-issued identity document, which is inspected for authenticity. Second, the user is asked to take a selfie, which is biometrically compared to the picture on the identity document. Traditionally used only in regulated know-your-customer (KYC) use cases such as online bank account opening, identity verification is now used in a range of contexts, from interactions with government services and preserving the integrity of online marketplace platforms to employee onboarding and improving security during password reset processes.
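The two-step flow above can be sketched as follows. This is an illustrative outline only; the function names and checks are hypothetical placeholders, not any vendor's actual API, and real document inspection and face matching are far more involved.

```python
# Illustrative sketch of the two-step identity verification flow.
# All function names here are hypothetical, not a real vendor API.
from dataclasses import dataclass


@dataclass
class VerificationResult:
    document_authentic: bool  # outcome of step 1
    face_match: bool          # outcome of step 2

    @property
    def verified(self) -> bool:
        # Identity is verified only if both steps succeed
        return self.document_authentic and self.face_match


def inspect_document(document_image: bytes) -> bool:
    """Step 1: inspect the government-issued ID for authenticity (placeholder)."""
    return True  # stand-in for a vendor's document-authenticity checks


def compare_faces(selfie: bytes, document_image: bytes) -> bool:
    """Step 2: biometrically compare the selfie to the document photo (placeholder)."""
    return True  # stand-in for a vendor's face-matching algorithm


def verify_identity(document_image: bytes, selfie: bytes) -> VerificationResult:
    return VerificationResult(
        document_authentic=inspect_document(document_image),
        face_match=compare_faces(selfie, document_image),
    )
```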

Subversion of the identity verification process through fraudulent identity presentation, for example by using a deepfake of an individual to defeat the selfie step, thus introduces considerable risk to an organisation.

Mechanisms to Counter Deepfake Attacks

As attackers leverage the relentless progress of GenAI to craft increasingly convincing deepfakes, CIOs and IT leaders must adopt a proactive stance, bolstering their defences with a multifaceted approach. Key to this is ensuring that your identity verification vendor deploys robust liveness detection. This capability operates during the second step, when the selfie is being taken, to check whether the selfie is in fact being taken of a live person who is genuinely present during the interaction. Liveness detection can be active, in which the user responds to a prompt such as turning their head, or passive, in which subtle features such as micro-movements or depth perspective are assessed without the user having to move.

The integration of active and passive liveness detection techniques, coupled with additional signals indicative of an attack, offers a holistic defence framework against evolving deepfake attacks. These additional signals can be derived from device profiling, behavioural analytics and location intelligence. Identity verification vendors may develop some of these capabilities natively or use partners to deliver them, but they should be packaged as a single solution for you to deploy.
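One way such signal fusion could work in principle is a weighted risk score combining the liveness result with the device, behavioural and location signals. The sketch below is purely illustrative: the weights, thresholds and signal scales are invented for this example and are not guidance from the article or any vendor.

```python
# Hypothetical fusion of liveness detection with additional risk signals.
# Each input is a normalised risk value: 0.0 (no risk) to 1.0 (high risk).
# Weights and thresholds are illustrative assumptions, not vendor guidance.

def risk_score(liveness: float, device: float,
               behaviour: float, location: float) -> float:
    """Combine the four signals into a single weighted risk score in [0, 1]."""
    weights = {"liveness": 0.5, "device": 0.2, "behaviour": 0.2, "location": 0.1}
    return (weights["liveness"] * liveness
            + weights["device"] * device
            + weights["behaviour"] * behaviour
            + weights["location"] * location)


def decide(score: float) -> str:
    """Map the combined score to an outcome for the verification attempt."""
    if score < 0.3:
        return "accept"
    if score < 0.7:
        return "manual_review"
    return "reject"
```

The point of the sketch is the packaging: however the signals are produced, natively or via partners, the deploying organisation should consume one decision, not four separate feeds.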

Leveraging GenAI to Improve Identity Verification

The versatility of GenAI presents intriguing opportunities for defence against deepfake attacks. By leveraging GenAI’s ability to develop synthetic datasets, product leaders can reverse-engineer attack variants and fine-tune their algorithms for improved detection rates. Beyond security applications, GenAI can also address issues of demographic bias in face biometrics processes. Traditional methods of obtaining diverse training datasets pose challenges in terms of cost and effort, often resulting in biased machine-learning algorithms. However, GenAI offers a solution: it can generate large datasets of synthetic faces, deliberately increasing the volume of training data for underrepresented demographic groups. This not only reduces the barriers to obtaining diverse datasets but also helps minimise bias in biometric processes. Challenge your identity verification vendors as to whether they are innovating and using GenAI for positive purposes, rather than treating it only as a threat.

Select vendors who have embraced this new world and taken proactive measures such as introducing bounty programmes to challenge hackers to defeat liveness detection processes. By incentivising individuals to identify and report potential vulnerabilities, vendors and hence organisations can bolster their defensive capabilities against deepfake attacks.

As we chart a course towards a secure digital future, collaboration emerges as the cornerstone of our collective defence against deepfake adversaries. By fostering dynamic partnerships and cultivating a culture of vigilance, CIOs and IT leaders can forge a resilient ecosystem that withstands the relentless onslaught of AI-driven deception. Armed with insight, innovation, and a steadfast commitment to authenticity, look to embark on a journey towards a future where identities remain inviolable in the face of technological upheaval.

Gartner analysts will further explore how organisations can defend themselves against AI-driven fraud at the Gartner Security & Risk Management Summit, taking place from 23 – 25 September in London, UK.

Akif Khan, VP Analyst at Gartner.

Akif writes indispensable research and delivers actionable advice and insights to business leaders to help them achieve their mission-critical priorities.

With a focus on fraud detection and identity proofing in digital channels, Akif speaks with dozens of clients every week, across both end-user organisations and vendors, resulting in a view of the market that is both broad and deep.
