A young Nigerian entrepreneur in a quiet co-working office in downtown Lagos types a prompt into a generative AI image engine: "a Black woman CEO, in traditional Ankara fabric, leading a boardroom of international executives". In less than fifteen seconds, imagery fills her screen: crisp, bright, and confident. For the first time in her professional career, she sees a high-quality image of her dream, with someone like her at its center.
This moment is not just individual. It is game-changing. It signals a shift in consumer culture toward generative AI, one that gives individuals unprecedented representational power and forces brands, marketers, and institutions to reckon with the psychological, economic, and ethical implications of a new type of inclusion.
Welcome to the age of algorithmic visibility.
From advertising to affirmation
Advertising has functioned under an unstated logic for decades: show the masses some kind of aspiration, usually white, male, and young, and hope they buy it, literally. It was a zero-sum game. If you weren't seen, you weren't sold to. As recently as 2018, the average Black woman in the U.S. saw herself represented authentically in fewer than 2.5% of prime-time TV ads. LGBTQ+ representation was lower still, while intersectional inclusion, say queer, disabled people of color, was almost non-existent.
But at a time when DALL·E, Midjourney, and OpenAI's Sora can produce not just photos but entire films from natural-language prompts, representation has gone exponential. What once took an expensive photo shoot, a creative team, and the blessing of a brand's budget now takes a sentence and less than ten seconds. More importantly, the power of creation has moved from boardrooms to browsers. And consumers are noticing.
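What "a sentence and less than ten seconds" looks like in practice is a single API call. Below is a minimal sketch, assuming OpenAI's Python SDK and its hosted image endpoint; the model name and size are illustrative, and other text-to-image engines expose broadly similar interfaces:

```python
from openai import OpenAI

# Assumes an OPENAI_API_KEY is set in the environment.
client = OpenAI()

# One sentence in, one image out.
result = client.images.generate(
    model="dall-e-3",  # illustrative model choice
    prompt=(
        "A Black woman CEO in traditional Ankara fabric, "
        "leading a boardroom of international executives"
    ),
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # link to the generated image
```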
Inclusion at the speed of a prompt
A new study by McKinsey reports that 79% of Gen Z consumers say representation in advertising influences their buying habits. What's new is that 61% say they now expect brands to show visible evidence of their identities in real time: not in the next campaign, but in the current one. Generative AI is making this possible through its ability to personalize at scale.
Consider Levi's. Early in 2023, controversy erupted when the brand announced it would use AI-generated models to "diversify representation" on its website. The backlash was immediate; critics accused the brand of simulating diversity instead of actually hiring diverse people. But underneath the outrage was a deeper truth: representation is no longer a still photo on a billboard. It is a dynamic, responsive algorithm.
Meanwhile, Netflix has been quietly testing AI-rendered thumbnails that adjust based on user preferences and demographics. A Black user might see a Black actor prominently featured in the image; a white user might see a different actor from the same show. This is hyper-targeted personalization, but what emerges in the mix is a new type of identity-responsive media that raises important questions. What does it mean to be "seen" if what we see is dynamically adjusted by an invisible hand?
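Netflix has publicly described contextual-bandit approaches to artwork personalization; one way such an "invisible hand" might work is sketched below in a deliberately simplified, hypothetical epsilon-greedy form. Every name here is invented for illustration, and this is not Netflix's production system:

```python
import random
from collections import defaultdict

class ThumbnailBandit:
    """Hypothetical epsilon-greedy selector: for each viewer segment,
    learn which thumbnail variant of a title earns the most clicks."""

    def __init__(self, variants, epsilon=0.1):
        self.variants = variants  # candidate artwork for one title
        self.epsilon = epsilon    # how often to explore at random
        self.clicks = defaultdict(lambda: defaultdict(int))
        self.shows = defaultdict(lambda: defaultdict(int))

    def choose(self, segment):
        # Explore occasionally; otherwise exploit the variant with the
        # best observed click-through rate for this segment.
        if random.random() < self.epsilon:
            pick = random.choice(self.variants)
        else:
            pick = max(
                self.variants,
                key=lambda v: self.clicks[segment][v]
                / max(1, self.shows[segment][v]),
            )
        self.shows[segment][pick] += 1
        return pick

    def record_click(self, segment, variant):
        self.clicks[segment][variant] += 1

# Usage: show a thumbnail, then feed the click back into the loop.
bandit = ThumbnailBandit(["cast_shot_a", "cast_shot_b", "landscape"])
shown = bandit.choose(segment="viewer_segment_42")
bandit.record_click("viewer_segment_42", shown)  # if the viewer clicked
```

The design point is the feedback loop: each click teaches the system which face of a show to put in front of which audience segment, which is exactly what makes the resulting "visibility" dynamic rather than fixed.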
The psychological cost of not being seen
A 2024 study out of Columbia Business School found that consumers who saw themselves visually represented in an AI-generated ad scored 28% higher on measures of brand trust, 22% higher on willingness to recommend, and nearly 19% higher on purchase intent compared to consumers who viewed non-representative ads. The effect was even stronger for individuals from historically marginalized groups.
But the inverse is also true. When representation feels forced, inauthentic, or—perhaps most dangerously—algorithmically stereotyped, the backlash is swift. In one field experiment run in partnership with a national beauty brand, AI-generated ads tailored to racial identity led to increased engagement among Black and South Asian consumers—but only when cultural markers were accurate and respectful.
When the system over-indexed on skin tone without adjusting clothing or language, perceived authenticity fell by 37%. Inclusion, it turns out, is exquisitely fragile.
Are we all just prompt engineering ourselves?
There’s something thrilling and deeply strange about watching identity become promptable. A Filipino-American teacher in Queens recently told me she uses AI to generate posters for her students “so they can see themselves in science.”
Her prompt: “A young Filipino girl building a robot, in a public school classroom in New York City.” A year ago, she said, she wouldn’t have even looked for such an image—it didn’t exist. Now, she creates a dozen variations a week.
This DIY representation points to a deeper cultural transformation. Consumers are no longer just audiences. They’re curators of identity. In the TikTok age, where 15-second videos drive billion-dollar trends, generative AI gives individuals the tools to manifest their aspirational selves—visually, narratively, and even commercially. Etsy shops are now selling personalized, AI-generated children’s books featuring “your daughter as the hero.”
A new app, QueerStory, uses AI to generate short films starring users as LGBTQ+ protagonists in love stories set in historical periods or sci-fi worlds. Even wedding invitations are being designed with AI to “reflect our multicultural love story.” Each one of these micro-moments is rewriting what it means to belong in the consumer imagination.
The ethics of synthetic inclusion
But with great representation comes great responsibility—and risk. As generative AI becomes a mainstream marketing tool, the risk of coded bias scales with it. AI systems trained predominantly on Western data sets can easily reproduce and even amplify representational gaps. “It’s one thing for AI to ‘forget’ your culture,” said a Navajo artist who tested Midjourney to create indigenous art. “It’s another thing for it to replace it with a stereotype.”
Indeed, MIT researchers in 2023 found that many popular AI image generators rendered “African doctor” prompts as less competent-looking compared to “European doctor” prompts—mirroring long-standing societal biases. Inclusion through AI is not just about who is added, but how they’re constructed.
This is where the future of marketing hangs in the balance. Because generative AI doesn’t just enable more representation—it forces us to answer a harder question: what kind of representation is good enough?
The inclusion arms race
Already, brands are competing not just on product or price, but on perceived inclusion performance. In a survey of 3,000 consumers conducted by Adobe in late 2024, 68% said they would stop buying from a brand that “erases or misrepresents” their identity. Meanwhile, 44% said they’ve interacted with AI-generated content without realizing it—and of those who later found out, 59% felt manipulated.
That’s the paradox: AI can make you feel seen—but only if you don’t realize it’s trying. Authenticity remains the currency of inclusion, even when the imagery is synthetic. And so we enter a new marketing paradox: authentic artificiality. Can an image created by a machine still feel like it knows you? Can inclusion be real if it was designed to feel real?
The human after the machine
In the end, inclusion is not a visual trick. It’s an emotional truth. It’s the difference between being targeted and being understood. Between being used and being seen. Generative AI has opened the door to a world where everyone, potentially, gets to belong.
The question is: who gets to write the prompt?
That is the battleground of the next decade—not just for technology, but for culture, commerce, and the human spirit. And for once, the choice won’t be made solely in the boardrooms of Madison Avenue. It will be made in a million prompts, typed by people who have waited far too long to see themselves. Not just sold to. But centered. Now, finally, they are.
By Dr. Harish Kumar, Assistant Professor, Chairperson – Research, Great Lakes Institute of Management, Gurgaon
and
Dr. Richali Jain, PhD, Researcher in Artificial Intelligence & Consumer Psychology, GLIM