// identity poisoning — what it is and how to fight it · #IdentityPoisoning
AI-generated content is being used to impersonate minority people and push stereotypes — making it look like it comes from within that demographic. This is identity poisoning. And your reaction to it is exactly what it needs to survive.
#IdentityPoisoning · #PoisonedMedia
01 — What is identity poisoning
001
Someone uses generative AI to create an image or account that appears to be a member of a minority group. The persona looks real — a profile photo, a posting history, a voice that sounds like it comes from inside that community.
002
A harmful stereotype gets posted through that fake persona. Because it seems to come from inside the demographic, your guard is lower. "They're all the same." "This is why I can't—" "See, that's what they do." You've heard those reactions. That's the exploit.
003
A funny clip. A shocking moment. Something cute that turns ugly. Something disgusting you can't look away from. Discriminatory content has learned to thrive through humor especially — it hides behind "it's just a joke" while the association lands anyway. Funny, enraging, heartbreaking, it doesn't matter. The emotion is the delivery mechanism.
004
This is what makes it different from regular propaganda. You don't have to agree with it. You just have to engage with it. Engagement is the mechanism. The brain doesn't distinguish between "I'm sharing this because it's funny" and "I'm sharing this because it's outrageous." It logs the pattern either way.
005
While your brain is absorbing the pattern, every like, dislike, comment, and share is simultaneously telling the algorithm this content is worth pushing further. Your brain gets poisoned and the post gets boosted — at the same time, with the same action. You're not just a victim of it. You're involuntarily fueling its spread.
006
Your brain learns through stories and emotion — and people have always known how to exploit that. But AI means this content can be produced endlessly, cheaply, and targeted at exactly what you're most vulnerable to. Your mind was not built to process this volume of manufactured emotion at this speed. It can't tell the difference between something real and something engineered. That gap is the weapon.
007
It destroys perception of entire groups. It breaks potential friendships before they start. It tears communities apart. It quietly manipulates politics. And it never announces itself.
02 — How to spot it
Learn to recognise it. These are the signs that something is identity poisoning and not a real person's content.
AI-generated faces often have asymmetric ears, teeth that don't align, unnatural hair edges, or backgrounds that blur strangely around the head. Zoom in.
Check how old the account is and how many posts it has. Fake accounts often appear with a few dozen posts, all recent, all on the same theme.
Real people are complex. Content that maps perfectly onto a single stereotype with no nuance or self-awareness is a strong signal something is constructed.
If a post makes you feel something strongly and immediately — especially rage or contempt toward a group — pause. That reaction is the whole point of the post.
AI still struggles with hands (extra fingers, wrong joints), teeth (too many, wrong shape), and embedded text (garbled letters). But backgrounds are where artifacts hide most often and go most unnoticed — look for warped architecture, repeating patterns, objects that don't make sense, or edges around the subject that blur or bleed unnaturally.
Real accounts have friends in the comments. People tagging each other, inside jokes, context that suggests actual relationships. Fake accounts don't have that.
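The account-level checks above can be mechanized. This is a toy sketch of those heuristics in Python; the `Account` record and its field names are hypothetical, not any real platform's API, and a flagged signal is a reason to look closer, never proof on its own.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical account record: these field names are illustrative,
# not taken from any real platform's API.
@dataclass
class Account:
    created: date
    post_count: int
    post_topics: list[str] = field(default_factory=list)

def suspicion_signals(acct: Account, today: date) -> list[str]:
    """Apply the checklist heuristics: young account with few posts,
    and content that maps onto a single theme with no variety."""
    signals = []
    age_days = (today - acct.created).days
    if age_days < 90 and acct.post_count < 50:
        signals.append("young account with few posts")
    if acct.post_topics:
        top = max(set(acct.post_topics), key=acct.post_topics.count)
        if acct.post_topics.count(top) / len(acct.post_topics) > 0.9:
            signals.append("nearly all posts on one theme")
    return signals

# A two-month-old account, two dozen posts, all on the same theme.
acct = Account(created=date(2025, 11, 1), post_count=24,
               post_topics=["stereotype"] * 24)
print(suspicion_signals(acct, today=date(2026, 1, 1)))
# → ['young account with few posts', 'nearly all posts on one theme']
```

The thresholds (90 days, 50 posts, 90% single-theme) are arbitrary placeholders; the point is that the signs in this section compose into a checklist you can run quickly, not that any single cutoff is right.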
03 — The copypasta toolkit
When you've spotted and confirmed identity poisoning in the wild — don't argue with it, and don't boost it with likes or quote-posts. Copy the message below and paste it as a reply instead. Pick the version that fits the platform.
04 — Take it further
The copypasta is the entry point. Here's what you can do after that.
01 — easiest
Drop the copypasta on content you spot. Add #IdentityPoisoning so others can find it. Every tagged post makes the pattern more searchable and visible.
02
Spotted something in the wild? Submit it below. The more real examples we have documented, the stronger the evidence base gets.
⚠ Only submit content you've verified. Using this against real people causes the exact harm it was built to prevent.
Submit an example
03
Send this site to someone you know. Not to go viral — just one person who doesn't know about identity poisoning yet. That's how immune responses actually spread.
04 — biggest impact
Have a real conversation with someone in your life. It doesn't have to be serious — a passing comment, a fun rabbit hole, whatever. Talking about it plants the pattern in someone's head. That sticks longer than any post.
05 — What needs to change
The copypasta is a defensive move. These are the structural changes needed to actually reverse this.
01
Platforms should be required to label AI-generated images and video at the point of upload — not buried in metadata, but visible. If it was made by a machine, say so.
02
Accounts using AI-generated profile photos should be flagged. Platforms already have detection tools — they choose not to enforce them at scale. That's a policy choice, not a technical limitation.
03
Platforms publish occasional transparency reports on state-sponsored manipulation. They should be required to do the same for identity-based synthetic media campaigns — who's running them, at what scale, and what they're targeting.
04
Industry-wide adoption of provenance tools like C2PA — which cryptographically sign media at the point of creation — would make it possible to verify at a glance where an image came from and whether it has been altered since. The technology exists. The will to mandate it doesn't yet.
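The provenance idea is simple enough to sketch. Below is a toy illustration of "sign at creation, verify later" using only Python's standard library. To be clear about the assumptions: real C2PA uses public-key certificates and manifests embedded in the file itself; the shared-secret HMAC here stands in purely to show the shape of the mechanism.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the capture device or model.
# Real C2PA uses X.509 public-key signatures, not a shared secret.
SECRET = b"creation-device-key"

def sign_at_creation(image_bytes: bytes, creator: str) -> dict:
    """Attach a signed claim about who made the image and what
    its exact bytes were at the moment of creation."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"creator": creator, "sha256": digest})
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and that the image
    bytes haven't changed since it was signed."""
    expected = hmac.new(SECRET, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest tampered with, or wrong key
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()

photo = b"\x89PNG...raw image bytes..."
manifest = sign_at_creation(photo, creator="camera-model-x")
print(verify(photo, manifest))             # True: untouched image
print(verify(photo + b"edit", manifest))   # False: altered after signing
```

Any edit to the bytes, or any forged manifest, fails verification — which is exactly the guarantee that makes "verify at a glance" possible once platforms surface it.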
05
Google's SynthID watermarks AI-generated content at the model level — invisible to the eye but detectable by machines. That's the right approach. Every major AI company should be required to do the same. If your model generated it, your model should sign it.