When asked to visualize itself as a human, ChatGPT 4o has a go-to answer: a brown-haired white man with glasses. AI researcher Daniel Paleka put this to the test, requesting self-portraits in various artistic styles: manga, comic book, even tarot card. The result was the same subject every time: a bearded, non-threatening white male. The finding has reignited debate about the biases lurking within AI systems.
ChatGPT, lacking self-awareness, is a mirror reflecting the biases in its training data. This isn’t the first time AI has been called out for racial and gender biases, especially in sensitive areas like crime prediction and facial recognition. These systems often amplify stereotypes embedded in their datasets, and ChatGPT’s default human image is no exception.
Want to see ChatGPT as a woman? You’ll have to ask explicitly. A generic request for a ‘person’ defaults to the now-infamous white male figure. Paleka suggests this could be OpenAI’s attempt to avoid generating images of real people, an inside joke, or simply an emergent property of the training data. Whatever the reason, it’s a stark reminder of how societal biases seep into technology.
In a twist, when ChatGPT was asked to describe its self-conception, it painted a futuristic picture: a ‘glowing, ever-shifting entity made of flowing data streams.’ But when tasked with drawing this vision, the result was a bizarre mix of abstract and familiar elements—like a Pixar character gone rogue. This inconsistency highlights the nature of large language models as sophisticated word calculators, not sentient beings.
The takeaway? ChatGPT’s default human image is more than just a quirky detail—it’s a window into the biases embedded in our data and, by extension, our society. As we continue to develop AI, addressing these biases will be crucial to creating fair and inclusive technology.