Have you ever dropped a selfie into an AI headshot app, hoping for a quick glow-up — and what came back looked like someone else entirely? Not just a better camera angle or soft lighting. We're talking about new cheekbones. A thinner nose. A lighter skin tone. Maybe even a hairstyle that would make Lisa Kudrow proud. For many Black users, that “professional” headshot doesn’t look like an upgrade. It looks like erasure.
A Glitch in the Mirror?
AI photo generators — the apps popping up all over social media offering polished, LinkedIn-ready portraits in seconds — are under fire for distorting the features of people of color. And not by accident.
AI ethics advocate Christelle Mombo-Zigah tested several popular headshot apps and watched as they reshaped her into someone she didn’t recognize. Her dark skin was lightened. Her hair texture was changed. Her face was subtly — or not-so-subtly — remixed.
“These headshot generators aren’t just editing photos — they’re altering identities,” she wrote on LinkedIn, calling the phenomenon a kind of digital colorism.
The tools, she argued, weren’t just smoothing blemishes or fixing lighting. They were applying a biased blueprint for what “professional” looks like — and that blueprint didn’t include her real features.
MIT student Rona Wang had a similar experience. After uploading a photo for a more “refined” headshot, she was stunned when the AI lightened her skin and turned her brown eyes blue.
“I was like, ‘Wow — does this thing think I should become white to be more professional?’” she told The Boston Globe.
It’s Not Just You
These aren’t isolated glitches. A 2023 study from the University of Washington found that image generators like Stable Diffusion tend to default to whiteness — especially when prompts are vague (like “a CEO” or “a person from Oceania”). In most cases, the AI ignored darker skin tones entirely.
Why? Because AI models are trained on massive datasets pulled from the internet — and the internet has a long history of overrepresenting whiteness, especially in professional imagery.
So when the tool generates what it thinks a polished headshot should be, it often leans toward a Eurocentric ideal.
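The mechanics of that skew are simple enough to sketch. Below is a toy Python illustration — the 90/10 split and the “generator” are invented for demonstration, not taken from any real model — showing that a system which merely samples from a lopsided training set reproduces the lopsidedness in its output:

```python
import random

random.seed(0)

# Toy "training set": 90% of the professional-headshot examples show
# light skin, mirroring the overrepresentation described above.
# (Illustrative numbers only, not from any real dataset.)
training_data = ["light"] * 90 + ["dark"] * 10

def generate_headshot_tone(prompt="a professional headshot"):
    """A stand-in generator: with a vague prompt, it simply
    reproduces the distribution of its training data."""
    return random.choice(training_data)

samples = [generate_headshot_tone() for _ in range(1000)]
light_share = samples.count("light") / len(samples)
print(f"Share of light-skinned outputs: {light_share:.0%}")
```

Nothing in the sketch is malicious — and that is the point: without deliberate correction, a statistical model treats whatever dominates its data as the default.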
And the Bias Isn’t Just Visual
Speech recognition tools — the same ones behind voice-to-text apps and virtual assistants — show a similar pattern. A 2020 Stanford study found that five major transcription services (from Amazon, Apple, Google, IBM, and Microsoft) made nearly twice as many errors when transcribing Black speakers as when transcribing white speakers.
The error rate for Black men? Over 35%. That’s not a small mistake — at that rate, more than one word in three comes out wrong, and what you get back is a different sentence.
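That 35% figure is a word error rate (WER), the standard speech-recognition metric: the word-level edit distance between what was said and what was transcribed, divided by the number of words spoken. A minimal sketch — the two sentences are invented for illustration, not drawn from the study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Standard WER: word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference  = "she said she would call me back after the meeting"
hypothesis = "he said he would fall me black after the meeting"
print(f"WER: {word_error_rate(reference, hypothesis):.0%}")  # 4 of 10 words wrong
```

Four substituted words out of ten gives a 40% WER — roughly the territory the study measured for Black men, and enough to change who is speaking, what they asked for, and what they meant.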
“The disparities were consistent and significant,” said researcher Allison Koenecke, who co-authored the study.
Even language models like ChatGPT have been shown to respond differently based on racial cues. A 2024 Nature study found that prompts using African American Vernacular English (AAVE) triggered more negative responses than those using standard English. Meanwhile, a Stanford audit revealed that ChatGPT gave less favorable advice — including lower price estimates — when names like “Jamal Washington” were used instead of “Logan Becker.”
So… Who Is AI Really Built For?
Tech companies are starting to respond. OpenAI added a “diverse defaults” feature to its image tool DALL·E, which attempts to vary race and gender when they’re not specified in prompts. Other apps are now letting users pick their ethnicity — a Band-Aid on a bigger wound.
But the issue runs deeper than filters or dropdown menus.
When AI consistently edits Black faces, misunderstands Black voices, and downgrades Black names, the message is clear: you’re being corrected to fit someone else’s standard.
That’s not enhancement.
That’s erasure.
And in a world increasingly shaped by machine-generated images and voices, the fight for accurate representation isn’t just cultural — it’s technological.
So the next time an AI gives you back a version of yourself that doesn’t feel like you?
Trust that instinct. You’re not being sensitive. You’re being replaced.