How to Fight Synthetic Identities with Behavioral AI

Deepfakes and bots are yesterday’s problem. Today’s threat is AI-built identities that behave almost like humans—and require behavior-based detection to stop them.
Jan. 13, 2026
6 min read

Key Highlights

  • Synthetic identities are created by combining real data points into convincing personas, making them difficult to detect with traditional fraud systems.
  • Behavioral analysis and AI models trained to recognize human patterns are essential for identifying anomalies in synthetic identities.
  • Organizations need to move from credential-based checks to behavior-based detection, focusing on how users interact with systems over time.
  • Proactive measures include establishing behavioral baselines, monitoring deviations, and considering user intent to detect malicious activity early.

You already know that deepfakes grab headlines. And phishing attacks make news. And recently, we've seen more stories circulating about North Korean IT workers infiltrating large companies.

Those threats might get all the fuss, but in all honesty, I think we should be talking about (and worrying about!) another identity threat as much as, if not more than, these other types.

The threat I'm talking about is subtler, but seriously insidious. Threat actors now use AI to aggregate public and leaked personal data into synthetic identities. I'm talking about personas that look and act like real people because they're built from real data, just reassembled into someone who doesn't exist.

No visual glitches like a deepfake. No awkward phrasing like a phishing email. And these personas probably use a lot of ten-dollar words like “insidious,” which I’m now very self-conscious about having used in the previous paragraph. At least I’m self-aware, which means I’m still passing the Turing test more easily than AI can. For now.

These synthetic identities are a big deal because fraud detection systems weren't designed for this. We need AI models trained to recognize abnormal human behavior and malicious intent. And we need them, well, yesterday.

Platforms and Governments Are Waking Up

YouTube expanded its likeness detection technology to all creators in its Partner Program in September 2025. Creators upload a face image, and the system flags AI-generated content using their likeness without permission. That’s helpful for catching deepfakes after they're posted.

India's parliamentary standing committee on communications and information technology recommended exploring a licensing regime for AI content creators and compulsory labeling of AI-generated videos and content. These recommendations aim to curb misinformation, but their implications extend beyond that.

Both responses focus on content that has already been created and published. It’s a start, but it’s not ideal. Clearly, it’s reactive. To make a real difference, synthetic identities need to be caught before they do damage, not after.

What Makes Synthetic Identities Different

You can spot a bad deepfake. You don’t need any kind of special training. The eyes look dead, the mouth moves incorrectly, there are mysterious extra limbs, or the audio lags behind the video by a fraction of a second. Detection tools have gotten good at finding these glitches, and most people are decent at it, too.

Bots are similar — they behave in ways that don't look human. They post too often, respond too quickly, use obviously unnatural language, or connect with networks in ways that real people don't.

Synthetic identities work differently. They're constructed from legitimate data points: real addresses, real employment histories, real social connections, mixed and matched into a convincing persona. Each data point checks out. The combination is fabricated.

If that sounds a little creepy, good. It should.

Fraud detection systems look for known “bad” patterns. Banned information. Previously flagged credentials. Velocity checks. And that works well when attackers repeatedly reuse the same fraudulent information. However, synthetic identities are generally not reused. Each one is built from a massive pool of personal data circulating online and in breach databases. By the time you've flagged one, the attacker has already moved on to the next.
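
To make that concrete, here's a minimal Python sketch of the kind of rules-based check described above; the field names, lists, and thresholds are hypothetical. The point is that a synthetic identity assembled from fresh, never-flagged data points passes every rule.

    # Hypothetical sketch of a traditional rules-based fraud check.
    # Field names, lists, and thresholds are made up for illustration.
    from collections import defaultdict
    from datetime import datetime, timedelta

    BANNED_SSNS = {"000-00-0000"}                    # previously flagged credentials
    FLAGGED_EMAILS = {"known-fraud@example.com"}     # banned information
    recent_apps = defaultdict(list)                  # device_id -> application timestamps

    def rules_based_check(app: dict) -> bool:
        """Return True only if the application matches a known-bad pattern."""
        if app["ssn"] in BANNED_SSNS or app["email"] in FLAGGED_EMAILS:
            return True
        # Velocity check: too many applications from one device in 24 hours.
        history = recent_apps[app["device_id"]]
        history.append(app["timestamp"])
        recent = [t for t in history if app["timestamp"] - t < timedelta(hours=24)]
        return len(recent) > 3

    # A synthetic identity built from fresh, never-flagged data points sails through.
    application = {
        "ssn": "123-45-6789",                # real-looking, never reported
        "email": "new.persona@example.com",  # newly created, clean
        "device_id": "device-001",
        "timestamp": datetime.now(),
    }
    print(rules_based_check(application))    # False -- nothing here is "known bad"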

Credentials Can Be Faked. Behavior Is Harder

Fake an identity document? Doable. Fake credentials? Sure. Fake a whole social network to make the identity look legit? Attackers do it all the time. But faking consistent human behavior over time? Okay, now that's hard.

Real humans have patterns. How they type. When they're active. Which systems they access and in what order. How they react when something unexpected happens. There's a rhythm to how people work. (You might call it…natural?)

Synthetic identities — even the good ones — have gaps. The behavior feels off. Access patterns don't match the role. Activity spikes at weird hours. When challenged with a routine verification, the response feels scripted. Still, it’s not as obvious as a bad deepfake, and it’s really, really easy to fall for it.

AI models trained on human behavior can catch these gaps. Not by matching against a list of known bad signatures, but by noticing when something just doesn't look like a real person.
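
As an illustration only, here is one way that idea might be sketched, assuming Python with scikit-learn and entirely made-up behavioral features (typing rhythm, login hour, systems touched, session length). Real deployments use far richer signals, but the shape is the same: learn what legitimate sessions look like, then score how far a new session falls from that baseline.

    # Illustrative sketch of behavior-based anomaly detection (assumes scikit-learn).
    # The features and numbers are invented; real systems use much richer telemetry.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row: [median keystroke interval (ms), typical login hour (0-23),
    #            distinct systems accessed per day, session length (minutes)]
    rng = np.random.default_rng(42)
    legitimate_sessions = np.column_stack([
        rng.normal(180, 25, 500),   # humans type with a fairly stable rhythm
        rng.normal(9, 1.5, 500),    # activity clusters around working hours
        rng.normal(4, 1, 500),      # a handful of systems, tied to the role
        rng.normal(45, 10, 500),    # sessions of predictable length
    ])

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(legitimate_sessions)

    # A synthetic identity: machine-fast typing, 3 a.m. activity, sweeping system access.
    suspect = np.array([[40, 3, 18, 240]])
    print(model.predict(suspect))   # -1 means "this doesn't look like our real users"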

Thinking Like an Attacker

When proactive security professionals design security controls, they consider them from two perspectives: how customers will use them and how bad actors might misuse them. Viewing capabilities from the adversary's perspective should inform these decisions.

Identity verification needs the same treatment. We need to stop asking only whether an identity has valid credentials and start asking whether it behaves like the person it claims to be.

That’s a sweeping statement, I know. So, what does that look like in practice?

  • Build baselines. Model how legitimate users interact with systems over time, broken down by role, function, and geography (a rough code sketch follows this list).
  • Look for deviation, not just known threats. Behavior that doesn't fit human patterns should raise a flag even if it doesn't match a known attack signature.
  • Consider intent. What would a legitimate user be trying to accomplish? Does this activity align with that, or does it suggest something else?
  • Keep learning. Static rules go stale. Behavioral models, and the security teams that run them, can evolve as attackers change tactics.
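
Here's a minimal sketch of the first two items (baselines and deviation), assuming per-role statistics over a few simple activity features; the roles, features, and thresholds are hypothetical. Intent is harder to boil down to a few lines, but even a crude check such as "does this activity match anything this role normally needs to do?" layers naturally on top of a deviation score.

    # Hypothetical sketch: per-role behavioral baselines and deviation flags.
    # Roles, features, and thresholds are illustrative, not prescriptive.
    import numpy as np

    # Historical sessions per role: [login hour, systems accessed, records touched]
    history = {
        "accounts_payable": np.array([
            [9, 3, 120], [10, 4, 150], [8, 3, 110], [9, 3, 135], [10, 4, 140],
        ]),
    }

    # Baseline = per-feature mean and standard deviation for each role.
    baselines = {
        role: (sessions.mean(axis=0), sessions.std(axis=0) + 1e-6)
        for role, sessions in history.items()
    }

    def deviation_score(role: str, session: np.ndarray) -> float:
        """Largest z-score across features: how far this session sits from the role's norm."""
        mean, std = baselines[role]
        return float(np.max(np.abs((session - mean) / std)))

    # A persona claiming to be in accounts payable, active at 2 a.m.,
    # touching 15 systems and 5,000 records in one session.
    suspect = np.array([2, 15, 5000])
    score = deviation_score("accounts_payable", suspect)
    print(score, "-> flag for review" if score > 3 else "-> looks normal")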

Most Organizations Aren't Ready for This

Most organizations still rely on credential-based authentication and rules-based fraud detection. Check the username and password. Verify the identity document isn't on a banned list. Call it secure and move on.

That does not hold up against synthetic identities built to pass exactly those checks.

Moving to behavior-based detection takes investment. We're talking about fundamentally rethinking how identity verification works and accepting that credentials alone don't prove someone is who they claim to be. We need to assess how they behave over time. That's more work up front, but this is one of those situations where an ounce of prevention is ultimately worth many, many pounds of cure.

Where to Begin

Pick the systems where a synthetic identity would hurt you most — your highest-risk access points. Figure out what normal user behavior looks like on those systems by role, by function, by location. Then flag the deviations.

Will you catch everything? No. Attackers who know what you're looking for will adapt. But you'll catch a lot more than you would with credential-checking alone. And every synthetic identity you catch makes the next one more expensive to create. Joke’s on the threat actors.

I'm aware of the paradox here: the same AI that creates synthetic identities can be used to detect them. Large language models are great at generating fake personas. They're also good at analyzing behavioral patterns and spotting when something looks off.

Both sides have access to these tools. For security professionals, it’s fighting fire with, I guess, smarter fire. Better-planned fire. Better-executed fire. You must get there first.

About the Author

Michael Riemer

Field CISO & SVP Network Security Global for Ivanti

As Field CISO, Mike Riemer works closely with Ivanti customers and sales teams to assess IT and Information Security requirements and to streamline the sales process to deliver strong customer outcomes. 

Prior to joining Ivanti, Riemer served for 25 years in the U.S. Air Force, specializing in military intelligence and cybersecurity. After retiring, he became Chief Security Architect at Juniper Networks, where he developed design, configuration, and installation expertise across IDP, FW-VPN, SA, NSM, UAC, OAC, SBR, SRX, and EX products.

After 10 years at Juniper Networks, Riemer was one of the five principals who went on to create Pulse Secure, which was later acquired by Ivanti. At Pulse Secure, Riemer acted as the Field Chief Technology Officer, where he supported strategic sales opportunities, helped shape engineering and product pipelines and evangelized Pulse Secure solutions and roadmaps.

Mike has an IT Computer Sciences degree from the Community College of the Air Force. He lives in Park City, Utah, where he enjoys hiking, biking, and snowshoeing, and is actively involved in coaching his son's football team.

 
