
AI Deepfakes Are Defeating Facial Recognition — Why Fingerprint Biometrics Are Making a Comeback

Jameson Smallwood · 6 min read

Facial recognition has become the default way we verify identity — from unlocking phones to onboarding bank accounts to joining video calls. The premise was simple: your face is unique, always with you, and nearly impossible to replicate.

Then AI changed the equation.

The Deepfake Threat Is Real — But Not Where You Think

Let’s be clear upfront: Apple’s Face ID, which uses 3D infrared depth mapping with over 30,000 invisible dots, is still very difficult to fool with a deepfake. A 2D video or image — no matter how realistic — won’t unlock your iPhone.

But Face ID on your phone is only one piece of the facial recognition puzzle. The real vulnerability is in software-based facial verification — the kind used by banking apps, KYC onboarding, remote identity checks, and video conferencing. And that’s where deepfakes are already causing massive damage.

The numbers in 2025 are staggering:

  • Over $1 billion in deepfake-related losses in 2025 alone, with cumulative losses exceeding $1.56 billion (Surfshark)
  • A deepfake identity attack now occurs every five minutes, with deepfakes linked to 1 in 5 biometric fraud attempts across 1 billion+ verifications (Entrust 2026 Identity Fraud Report)
  • Virtual camera attacks surged 2,665% and face swap attacks jumped 300%, with over 115,000 potential attack combinations identified (iProov 2025 Threat Intelligence Report)
  • 40% of companies faced a deepfake-related threat in 2025 (World Economic Forum)
  • $25.6 million stolen from engineering firm Arup when attackers used real-time deepfakes to impersonate a CFO and colleagues on a video call — the finance worker was the only real person in the meeting (CNN)

And the barrier to entry has collapsed. Deepfake-as-a-Service is now a market: a ready-to-use synthetic identity costs as little as $15, voice cloning services run under $10 per month, and face-swapping software rents for $1,000 to $10,000 (Cyble).

The attack method is straightforward: instead of holding a fake face up to a camera, attackers inject a deepfake video stream directly into the application’s data pipeline using virtual camera software or device emulators. Basic liveness checks — blink, nod, turn your head — are easily defeated by real-time AI face synthesis.
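To make the injection surface concrete, here is a minimal, hypothetical defense signal in Python: checking whether a capture device's reported name matches common virtual-camera products. The signature list and function names are assumptions for illustration only; real injection-attack detection relies on far stronger signals (hardware attestation, cryptographic device binding, randomized challenge-response), since an attacker controlling the device can spoof its name.

```python
# Hypothetical blocklist of driver names used by popular virtual-camera
# software. Incomplete by design; treat a match as one risk signal,
# never as proof of an attack.
VIRTUAL_CAMERA_SIGNATURES = (
    "obs virtual camera",
    "manycam",
    "snap camera",
    "xsplit vcam",
    "droidcam",
)

def is_suspect_camera(device_name: str) -> bool:
    """Flag a capture device whose reported name matches a known
    virtual-camera product (case-insensitive substring match)."""
    name = device_name.lower()
    return any(sig in name for sig in VIRTUAL_CAMERA_SIGNATURES)
```

A verification app might run this check against the device list reported by the OS and raise the risk score for the session rather than hard-blocking, since name-based checks produce both false positives and false negatives.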

Your face is public. It’s on LinkedIn, Facebook, company websites, and conference photos. Every high-resolution image is potential training data for a deepfake model. And unlike a password, you can’t change your face after it’s been compromised.

Why Fingerprints Are Harder to Fake

Fingerprint biometrics operate on a fundamentally different security model:

Physical contact required. A fingerprint scanner requires your actual finger on the sensor. There’s no remote attack vector — no virtual camera injection, no video stream manipulation. An attacker needs physical access to either you or a high-fidelity replica of your fingerprint.

No public training data. Unlike your face, your fingerprints aren’t plastered across the internet. There’s no social media profile an attacker can scrape to build a synthetic fingerprint.

Subsurface detection. Modern ultrasonic fingerprint sensors — like Qualcomm’s 3D Sonic Max — use ultrasonic waves that penetrate through the outer skin layer to read ridge patterns, sweat pores, and even blood vessels in the dermis beneath. This makes silicone molds and printed replicas far less effective than they are against optical sensors.

No injection attack surface. The deepfake attacks that are devastating software-based facial verification work by injecting fake video into an app’s camera feed. Fingerprint sensors have no camera feed to hijack; the reading happens at the hardware level.

The Industry Is Already Shifting

The signs are there if you know where to look:

  • Samsung’s Galaxy S series has used Qualcomm ultrasonic in-display fingerprint sensors on every flagship since the S10 in 2019, treating fingerprint as the primary biometric even as facial recognition improved
  • Google’s Pixel 9 upgraded from slow optical fingerprint sensors to Qualcomm’s ultrasonic 3D Sonic Gen 2 — the same sensor Samsung uses — signaling that even Google sees ultrasonic fingerprint as the future
  • Apple is bringing Touch ID back — the upcoming foldable iPhone (expected late 2026) will feature Touch ID in the power button, the first new iPhone with a fingerprint sensor since the iPhone SE. Apple has also filed multiple patents for under-display Touch ID technology (MacRumors)
  • Gartner predicts that by the end of 2026, 30% of enterprises will no longer consider identity verification solutions reliable in isolation due to AI-generated deepfakes (Gartner)

Apple’s move is particularly telling. Face ID requires a bulky TrueDepth sensor array, and the foldable iPhone’s thin design reportedly can’t accommodate it. But it’s not just a design constraint — it’s a signal. If Apple is engineering Touch ID back into its product line, expect it to expand beyond the foldable.

Regulators Are Catching Up

Governments are responding to the deepfake threat — fast:

  • The TAKE IT DOWN Act, signed into law in May 2025, is the first U.S. federal law addressing AI deepfakes, requiring platforms to remove deepfake content within 48 hours
  • 47 U.S. states have enacted deepfake legislation as of January 2026, with 82% of all state deepfake laws passed in just the last two years
  • The EU AI Act transparency obligations take effect in August 2026, mandating clear labeling of AI-generated media with penalties up to EUR 35 million or 7% of global revenue
  • NIST SP 800-63-4 updated its Digital Identity Guidelines with specific provisions against deepfakes and AI-generated attacks

The regulatory momentum is clear: facial recognition and identity verification systems that can’t defend against deepfakes are becoming a compliance liability, not just a security one.

What This Means for Your Business

If your organization relies on facial recognition for identity verification, access control, or customer onboarding, it’s time to reassess:

  1. Audit your biometric dependencies — Identify where software-based facial verification is your sole authentication factor. These are your highest-risk points
  2. Layer your defenses — Implement multi-factor authentication that doesn’t rely exclusively on any single biometric. Pair facial recognition with device-based factors or fingerprint verification
  3. Evaluate injection attack resistance — Ask your identity verification vendors specifically how they defend against virtual camera and video injection attacks, not just presentation attacks. Note that ISO/IEC 30107-3 testing does not cover injection attacks and cannot validate against deepfake content from generative AI
  4. Prefer fingerprint where possible — For high-security environments, fingerprint-based authentication eliminates the remote attack surface that makes facial recognition vulnerable
  5. Train your team — Employees should understand that video calls and identity verification can be targets for deepfake attacks. The Arup case proves that even sophisticated professionals can be fooled when every face on the screen looks real
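The layering advice in step 2 can be sketched as a simple access policy: require at least two distinct factor types to pass, so that no single biometric is ever the sole gate. The type names and policy function below are hypothetical, assuming each factor check reports its result independently.

```python
from dataclasses import dataclass

@dataclass
class FactorResult:
    # Hypothetical factor-type labels, e.g. "biometric_face",
    # "biometric_fingerprint", "device", "otp".
    factor_type: str
    passed: bool

def authentication_decision(results: list[FactorResult],
                            required_distinct: int = 2) -> bool:
    """Grant access only when at least `required_distinct` DIFFERENT
    factor types passed. Two passing checks of the same type (e.g.
    face verified twice) still count as one factor."""
    passed_types = {r.factor_type for r in results if r.passed}
    return len(passed_types) >= required_distinct
```

The key design choice is counting distinct factor *types*, not passing checks: a deepfake that fools facial verification then contributes only one of the two required factors, and the attacker still needs the device key, fingerprint, or OTP.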

The Bottom Line

AI didn’t just make deepfakes possible — it made them cheap and accessible. A synthetic identity for $15. A voice clone for $10 a month. An attack every five minutes. The same technology that powers creative tools and productivity features is systematically eroding the security assumptions behind facial recognition.

Hardware-based Face ID with 3D depth sensing remains strong. But the broader ecosystem of software-based facial verification — the kind businesses actually depend on for KYC, remote onboarding, and video meetings — is increasingly compromised.

Fingerprint biometrics aren’t perfect, but they’re grounded in physical reality in a way that facial recognition in the AI era can’t match. As deepfake attacks scale and regulators tighten requirements, expect the industry to shift back toward what’s hardest to fake remotely — and that’s the fingerprint you press against the sensor, not the face you show the camera.


Need help evaluating your organization’s biometric security and identity verification stack? Contact Katalism Cybersecurity for a complimentary security assessment.

