When prompted to visualize itself as a human, ChatGPT’s latest 4o model repeatedly generates an image of a generic brown-haired white man with glasses. The phenomenon was highlighted by AI researcher Daniel Paleka in a Substack post, in which he requested self-portraits in various styles, including manga, comic book, and tarot card. Despite the stylistic variations, the subject remains the same: a bearded, non-threatening white man.
This observation raises familiar questions about the biases embedded in AI systems. ChatGPT has no self-awareness; it can only reflect the patterns in its training data. AI systems, particularly those used in crime prediction and facial recognition, have long been criticized for racial and gender bias, often perpetuating the stereotypes baked into the datasets they were trained on.
To see ChatGPT depict itself as a human woman, one must explicitly request it. A general request for a ‘person’ defaults to the aforementioned white male figure. Paleka speculates this could be due to OpenAI’s deliberate choice to avoid generating images of real people, an inside joke, or an emergent property of the training data.
Interestingly, when asked how it conceives of itself, ChatGPT described a ‘glowing, ever-shifting entity made of flowing data streams,’ a description that suggests a futuristic, inviting presence. When asked to draw that concept, however, the result was a peculiar blend of abstract and familiar elements, resembling a Pixar character gone awry.
This divergence underscores the nature of large language models (LLMs) as mirrors, reflecting the biases of both their users and their makers. LLMs such as ChatGPT are not sentient; they are sophisticated word calculators that predict responses based on their training data. The default image of a brown-haired white man with glasses most likely stems from implicit biases in that data, and it reveals as much about societal perceptions and expectations as it does about the model itself.