3 Real World Cases of Identity Fraud

Identity fraud is no longer limited to isolated phishing scams or weak passwords. Today’s cybercriminals deploy increasingly sophisticated, multi-layered attacks — from deepfake videos and manipulated images to the injection of biometric data, forged documents, or pre-recorded videos into digital systems.

According to IBM’s 2024 Cybersecurity Report, 43% of breaches involving biometric systems are directly linked to presentation and injection attacks. This alarming figure highlights not only the growing complexity of fraud tactics, but also the urgent need to strengthen authentication mechanisms.

Here are three real-world cases that illustrate how identity fraud is evolving across different contexts:

A fraudster’s treasure trove: 533 million Facebook identities leaked

In 2021, one of the largest personal data breaches came to light when the profiles of over 533 million Facebook users were posted on a hacking forum. The leaked dataset included names, email addresses, phone numbers and dates of birth from users in over 100 countries.

This vast amount of genuine information became the perfect raw material for phishing and impersonation attacks, paving the way for a phenomenon known as “Frankenstein fraud”, where criminals create fake identities using authentic data stitched together from multiple sources.

Ticket scam fuelled by a stolen identity

In 2023, an Australian man shared a photo of his driving licence with someone he believed to be a legitimate ticket seller. It turned out to be a scam. Not only was he defrauded, but his images and personal data were later used to create fake social media profiles that continued selling fraudulent event tickets.

Despite his repeated reports, the impersonation accounts remained active for weeks. The incident demonstrates how a single data leak can fuel a chain of scams, with long-lasting consequences for the victim.

Deepfakes on a video call: $25 million transferred

In 2024, an employee took part in a video call with individuals he believed to be senior executives from his company. Unbeknownst to him, every other participant was a deepfake: an AI-generated replica of the CFO and other directors. During the call, they instructed him to transfer a total of $25 million to accounts controlled by the attackers.

This case, which blends social engineering, AI and video manipulation, shows just how convincingly fraudsters can replicate high-stakes corporate scenarios to execute major financial crimes.

How can we build digital trust in the face of these threats?

Identity fraud doesn’t rely on a single tactic. From social engineering to biometric spoofing or document injection, attackers exploit any weakness they find.

To counter this, organisations must adopt advanced identity verification tools, including certified biometrics, liveness detection, and end-to-end security architecture that protects data throughout its entire journey, from capture to validation.
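To make the "capture to validation" idea concrete, here is a minimal sketch of one way such protection can work. It is a hypothetical illustration, not any particular vendor's implementation: it assumes a shared HMAC key provisioned to the capture device and a server-issued, single-use nonce, so that a replayed or injected image fails validation.

```python
# Hypothetical sketch: binding a biometric capture to a server-issued nonce
# and a signature, so injected or replayed payloads are rejected at validation.
import hashlib
import hmac
import json
import secrets
import time

SHARED_KEY = secrets.token_bytes(32)   # in practice, a per-device provisioned key
NONCE_TTL_SECONDS = 60                 # reject captures older than this

def issue_nonce() -> str:
    """Server issues a single-use nonce that must accompany the capture."""
    return secrets.token_hex(16)

def sign_capture(image_bytes: bytes, nonce: str) -> dict:
    """Client side: bind the image hash, nonce and timestamp into one signed envelope."""
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "nonce": nonce,
        "captured_at": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return payload

def validate_capture(image_bytes: bytes, envelope: dict, expected_nonce: str) -> bool:
    """Server side: verify signature, nonce, freshness and image integrity."""
    body = json.dumps(
        {k: envelope[k] for k in ("image_sha256", "nonce", "captured_at")},
        sort_keys=True,
    ).encode()
    expected_sig = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(envelope.get("signature", ""), expected_sig)
        and envelope["nonce"] == expected_nonce
        and time.time() - envelope["captured_at"] < NONCE_TTL_SECONDS
        and hashlib.sha256(image_bytes).hexdigest() == envelope["image_sha256"]
    )

# A fresh capture passes; the same envelope replayed against a new nonce does not.
nonce = issue_nonce()
selfie = b"...raw capture bytes..."
envelope = sign_capture(selfie, nonce)
assert validate_capture(selfie, envelope, nonce)
assert not validate_capture(selfie, envelope, issue_nonce())
```

In a production system this role is typically played by certified liveness detection and hardware-backed signing rather than a simple shared key, but the principle is the same: the data is verifiable at every step of its journey.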

Only then can we anticipate attacks, safeguard users, and build genuine trust in digital environments.