
The Frankenstein Problem: Why Synthetic Identities Are the New Frontier of Cybercrime

SHILPI MONDAL | DATE: FEBRUARY 05, 2026



We’ve spent the last decade fortifying our perimeters against identity theft. We locked down endpoints, encrypted databases, and trained employees to spot phishing emails. But while we were busy protecting real people’s data, criminals shifted tactics entirely. They stopped trying to steal our identities and started manufacturing their own.


It’s called Synthetic Identity Fraud (SIF), and it’s arguably the most sophisticated threat facing the global financial ecosystem today. Unlike traditional theft, where a criminal hijacks an existing account, SIF involves creating a "Frankenstein" persona by splicing a legitimate Social Security Number (often a child’s) with a fictitious name and address.


The result? A "person" who looks real on paper but doesn't exist in the physical world. And because there’s no consumer victim to complain about unauthorized charges, these ghosts can haunt your systems for years before they strike.


The Anatomy of a Ghost


Here’s the thing about synthetic fraud: it’s a crime of creation, not just extraction.

In a traditional attack, the victim notices suspicious activity (a weird charge, a credit alert) and shuts it down. But with SIF, the "victim" is the financial institution itself.


According to ACAMS, the fundamental difference lies in the lack of a direct consumer victim. The fraudster creates a new identity, applies for credit, and effectively nurtures this fake persona within the banking system.


They often start with a clean slate. Research from Proofpoint indicates that criminals target "dormant" identifiers (SSNs belonging to children, the elderly, or the incarcerated) because these individuals aren't actively monitoring their credit reports. A child’s SSN, for instance, offers a fraudster a decade-long runway to build a credit history before the legitimate owner ever applies for a student loan or a car note.


The Long Game: From Harvesting to the "Bust-Out"


Unlike a smash-and-grab data breach, synthetic fraud is an investment strategy. It requires patience that we don’t typically associate with cybercrime. The lifecycle typically spans 12 to 24 months, moving through distinct phases of "nurturing" to maximize the eventual payout.


The Setup:

It begins with data harvesting. With over 1.6 billion consumer records exposed in data breaches by 2024, as noted by AFCEA International, the raw materials for these identities are cheap and plentiful.


The Piggyback:

Once the persona is assembled, the fraudster needs to give it legitimacy. They often use a tactic called "piggybacking." As described by the Federal Reserve, this involves adding the synthetic identity as an authorized user on a legitimate, high-credit account. The synthetic ID instantly "inherits" the good credit history of the host account, tricking algorithms into assigning it a high credit score.
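To see why piggybacking fools a scoring model, consider a toy score that, like real bureau models, averages the age and payment history of every tradeline on a file, authorized-user accounts included. The formula and weights below are invented purely for illustration; real scoring models are far more complex:

```python
def naive_score(accounts):
    """A toy bureau-style score on a 300-850 scale.

    Averages the age and on-time payment rate of all tradelines,
    including authorized-user accounts. Illustrative only.
    """
    if not accounts:
        return 300  # a thin file starts at the floor
    avg_age = sum(a["age_months"] for a in accounts) / len(accounts)
    on_time = sum(a["on_time_rate"] for a in accounts) / len(accounts)
    return int(300 + min(avg_age, 120) * 2 + on_time * 310)

# A brand-new synthetic file has no history of its own...
synthetic_file = []
# ...but instantly "inherits" a seasoned host account via piggybacking:
piggybacked_file = [{"age_months": 120, "on_time_rate": 1.0}]
```

In this toy model, the empty file scores at the floor while the piggybacked one jumps straight to the top of the range, which is exactly the shortcut the fraudster is exploiting: the model scores the tradelines, not the person.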

 

The Bust-Out:

After months or years of behaving like a model customer (making small payments, requesting credit limit increases), the trap snaps shut. The fraudster executes a “bust-out,” maxing out every available line of credit simultaneously. Then they simply vanish. Because the identity wasn’t real, there’s no one to chase, so banks often record these losses as bad debt rather than confirmed fraud. Synthetic identities frequently evade detection until the accounts are charged off, which makes the scale of the loss difficult to measure directly.

According to the Equifax Insight Center, synthetic identity fraud is now the dominant and fastest-growing type of credit fraud, accounting for roughly 50%–70% of reported credit fraud losses in some industry analyses. That figure underscores how much of this risk may be hidden within traditional charge-offs rather than explicitly identified as fraud.

 

Generative AI: The Force Multiplier


If this sounds bad, the integration of Generative AI has made it dramatically worse. We are moving from artisanal fraud to industrial-grade deception.


In the past, building a synthetic identity took time and manual effort. Now, automation handles the heavy lifting. Medium contributor Marton Schneider highlights that "agentic AI" can now autonomously build backstories, register emails, and even engage with customer service chatbots to resolve account issues.


The Death of Liveness Checks


For years, we relied on "liveness checks" (video selfies) to prove a user was human. That defense is crumbling.


Deepfakes: 

Generative Adversarial Networks (GANs) can now create hyper-realistic videos that blink, smile, and turn heads on command. According to Entrust's 2025 Identity Fraud Report, deepfake attempts now occur about once every five minutes, accounting for roughly 40% of all biometric fraud attempts worldwide.


Injection Attacks:

Sophisticated attackers don't even need to show a face to the camera. They use software to inject AI-generated data directly into the authentication stream, bypassing the camera sensor entirely.

 

The barrier to entry has lowered dramatically. A single attacker, armed with AI tools, can now manage hundreds of synthetic identities at once, each behaving with the subtle imperfections of a real human.


The Hidden Cost to Your P&L


The financial impact here is staggering, and it’s often hidden in plain sight on your balance sheet. Analysts project that global fraud losses will reach $58.3 billion by 2030, a 153% increase from 2025 levels, according to Juniper Research. But the scary part is how these losses are categorized.


When a synthetic ID busts out, it looks like a credit risk failure, not a security failure. The account goes delinquent, collections calls go unanswered (obviously), and eventually, it’s charged off. This prevents risk teams from seeing the pattern.


It’s not just banks, either. The Motley Fool notes that auto lending is a prime target, with exposure in the U.S. reaching $3.3 billion by early 2025. Fraudsters use these identities to secure high-value vehicles, which are shipped overseas before the first payment is missed.


How to Fight Back: Behavior Over Data


So, how do you verify a person who doesn't exist but has valid government credentials?

The answer isn't in what data they provide, but in how they provide it. Static data checks (PII matching) are dead. If a fraudster has the SSN and the address, they pass the test.


Behavioral Biometrics:

Real humans are messy. We hesitate, we make typos, we move the mouse in slightly curved paths. Bots and scripts are perfect.


This is where behavioral biometrics comes in. By analyzing keystroke dynamics, mouse movements, and touch pressure, organizations can spot non-human patterns. Innovify reports that these systems are achieving 98.7% accuracy in distinguishing legitimate users from synthetic personas.
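A minimal sketch of the keystroke-dynamics idea: real typing has timing jitter, while scripted input tends to be metronomic. The statistic and the 15 ms threshold below are illustrative choices, not tuned production values:

```python
import statistics

def looks_scripted(key_down_times_ms, min_jitter_ms=15.0):
    """Flag a session whose inter-key timing is suspiciously uniform.

    key_down_times_ms: timestamps (in ms) of successive keystrokes.
    Returns True when the gaps between keys show almost no variance,
    a pattern typical of replayed or scripted input.
    """
    gaps = [b - a for a, b in zip(key_down_times_ms, key_down_times_ms[1:])]
    if len(gaps) < 5:
        return False  # too little signal to judge either way
    return statistics.stdev(gaps) < min_jitter_ms

# A bot replaying a keystroke exactly every 80 ms:
bot_session = [i * 80.0 for i in range(20)]
# A human with natural hesitation and rhythm changes:
human_session = [0, 95, 180, 310, 390, 520, 580, 700, 850, 930]
```

Production systems combine dozens of such signals (dwell time, mouse curvature, touch pressure) into a model, but the core intuition is the same: perfection is the anomaly.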


Government-Backed Verification (eCBSV):

In the United States, the game changer is the Electronic Consent-Based Social Security Number Verification (eCBSV) service. As detailed by Socure, this allows financial institutions to validate in real time whether a name, SSN, and date of birth combination actually matches official Social Security Administration records.

 

It’s a powerful tool for catching "manipulated" synthetics where a birthdate is tweaked slightly to hide a bad credit history.
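The check itself is conceptually simple: a yes/no match against the official record. The function and field names below are hypothetical, used only to illustrate the logic; the real eCBSV service has its own API, consent workflow, and request format:

```python
def matches_official_record(submitted, official):
    """Hypothetical sketch of a consent-based identity check.

    Returns True only when name, SSN, and date of birth all match the
    official record. Like eCBSV, the check yields a match indicator,
    never the underlying data, so a single tweaked field fails cleanly.
    """
    return all(submitted[k] == official[k] for k in ("name", "ssn", "dob"))

# "123-45-6789" is a placeholder SSN; the data here is invented.
official = {"name": "JANE DOE", "ssn": "123-45-6789", "dob": "2010-04-01"}
# A manipulated synthetic: same name and SSN, birthdate nudged a year
# to shake off a bad credit history.
tweaked = dict(official, dob="2011-04-01")
```

The tweaked record fails the match even though two of its three fields are genuine, which is precisely the manipulation this kind of check is designed to catch.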


Graph Analytics:

You have to look at the network, not just the individual. Graph-based analysis can reveal hidden connections, like ten different "people" logging in from the same device fingerprint or sharing a similar IP subnet.
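A minimal sketch of the idea: instead of scoring each applicant in isolation, group login events by device fingerprint and flag any device that vouches for an implausible number of distinct identities. The threshold of three is illustrative; real systems build full graphs across devices, IPs, addresses, and phone numbers:

```python
from collections import defaultdict

def flag_shared_devices(logins, threshold=3):
    """Return devices linked to `threshold` or more distinct identities.

    logins: iterable of (identity_id, device_fingerprint) pairs.
    One person using two devices is normal; one device operating many
    "people" is the classic synthetic-ring signature.
    """
    identities_by_device = defaultdict(set)
    for identity, device in logins:
        identities_by_device[device].add(identity)
    return {dev: ids for dev, ids in identities_by_device.items()
            if len(ids) >= threshold}

logins = [
    ("alice", "dev-1"), ("bob", "dev-2"),       # ordinary users
    ("ghost-01", "dev-9"), ("ghost-02", "dev-9"),
    ("ghost-03", "dev-9"),                      # one device, three "people"
]
suspicious = flag_shared_devices(logins)  # flags only dev-9
```

The same grouping generalizes to any shared attribute; production graph analytics simply chains these links together to surface entire fraud rings rather than single accounts.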


The Road Ahead


We are entering an era where "digital trust" is the currency of commerce. The fraudsters have industrialized their operations, leveraging AI to scale their attacks. To keep up, we have to modernize our defenses.


It’s no longer enough to ask, "Is this data correct?" We have to ask, "Is this behavior human?"


For IT leaders and CIOs, this means tearing down the silos between fraud teams and cybersecurity teams. It means investing in dynamic, behavioral defenses rather than static checklists. And ultimately, it means accepting that in the age of AI, seeing shouldn't necessarily mean believing.


Are your current risk models capable of spotting a ghost? Or are you just writing them off as bad debt?


KEY TAKEAWAYS


The "Frankenstein" Identity:

Synthetic fraud blends real and fake data (like a child's SSN with a fake name) to create a persona that has no immediate victim, making detection incredibly difficult.


AI is the Accelerant: 

Generative AI and "agentic" bots are automating the creation and nurturing of these identities, overwhelming traditional manual verification processes.


Hidden Losses: 

Up to 70% of what banks classify as "bad debt" or credit losses may actually be undetected synthetic fraud, masking the true scale of the problem.

 

Behavioral Defense is Key: 

Static data checks fail because the data is valid. The most effective defense is analyzing user behavior (keystrokes, mouse drift, and interaction patterns) to spot non-human actors.

 

 
 
 