Generative AI and Stereotypes

Creatives have always thrived by understanding the capabilities and limitations of their tools—how each one shapes the process, the outcome, and the message they convey. As generative AI becomes a more prominent part of the creative toolkit, it's essential to approach it with the same critical awareness. These systems are trained on vast datasets that reflect existing patterns, norms, and biases. As a result, they tend toward averaging, favoring the familiar or statistically common, which can dilute originality or reinforce stereotypes. For creatives, this means not only leveraging AI for inspiration and efficiency but also actively pushing against its boundaries, questioning its outputs, and using it as a springboard for more intentional, authentic expression.


Rest of World published an excellent article about how generative AI tools propagate stereotypes. They analyzed 3,000 AI-generated images to examine how image generators represent different countries and cultures. Key takeaways include the following insights:

The need for data transparency.
AI companies need to be more transparent about the data they use to train their systems.

Bias exists in generative AI algorithms.
Bias occurs in many algorithms and AI systems, from sexist and racist search results to facial recognition systems that perform worse on Black faces. Generative AI systems are no different. In an analysis of more than 5,000 AI images, Bloomberg found that images associated with higher-paying job titles featured people with lighter skin tones and that results for most professional roles were male-dominated.

Generative AI has an upside for marginalized groups, but also carries risks.
Generative AI could help improve diversity in media by making creative tools more accessible to marginalized groups or those lacking the resources to produce messages at scale. But used unwisely, it risks silencing those same groups.

The associative nature of generative AI can lead to averaged results that reinforce bias.
“These models are purely associative machines,” Pruthi said. He gave the example of a football: an AI system may learn to associate footballs with a green field and so produce images of footballs on grass. In many cases, this results in a more accurate or relevant picture. But you're out of luck if you don't want an “average” image. “It's kind of the reason why these systems are so good, but also their Achilles' heel,” said Sasha Luccioni, a researcher in ethical and sustainable AI at Hugging Face.

Take a closer look at the article at restofworld.org.


AI, Visualization
Danny Stillion