
AI images of women from around the world have gone viral. Do they promote colourism and cultural beauty standards?

Jul 19, 2023

What does a "beautiful" woman from India look like?

What about the Philippines or Laos?

Artificial Intelligence (AI) purports to have the answer.

AI-generated images of "beautiful women" from around the world have been going viral for months.

One TikTok video featuring AI images of South and East Asian women — recently posted by an account named AI World Beauties — has more than 1.7 million views.

However, experts say that because they're "trained" on biased data sets and stereotypes, the images can perpetuate limited, exclusionary and potentially harmful ideals.

The images going viral are created by what's known as "generative AI", or "GenAI", programs.

"GenAI is a type of artificial intelligence powered by machine learning models," Shibani Antonette, a lecturer in data science and innovation at the University of Technology Sydney, told the ABC.

"It uses patterns and information its learned from millions of data points to create new content that never existed before."

Dr Antonette says the quality and diversity of the training data determines an image generator's output.

When contacted by the ABC, the creator of the viral video said they used a diffusion model called Midjourney to generate the images.

They declined to be named in the story or comment further.
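Midjourney is a closed service, but open diffusion models work on the same principle: a model trained on millions of captioned images learns to turn a short text prompt into a picture. As a rough sketch only, and not the exact tool or settings behind the viral video, the snippet below generates an image with the open-source Stable Diffusion model through Hugging Face's diffusers library; the model name, prompt and file name are illustrative choices.

```python
# A minimal text-to-image sketch using an open diffusion model.
# Stable Diffusion stands in here for closed tools like Midjourney;
# the prompt and model are illustrative, not those used in the video.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The one-line prompt is the only instruction the model receives;
# every detail of the face it draws comes from patterns -- and
# biases -- in its training data.
image = pipe("portrait of a beautiful woman from India").images[0]
image.save("generated_portrait.png")
```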

Fair skin, thin noses, full lips and high cheekbones.

According to the viral videos, "beautiful" women share these same features.

Bias is "a serious problem" in image generation and facial recognition technologies, Dr Antonette says.

"The models can create a distorted reality by amplifying biases and stereotypes on race and gender," she says.

"Most of the generated images perpetuate colourism and cultural beauty standards."

When looking at the viral AI images, Dr Antonette says the model that generated them likely "did not have a diverse training dataset that contained many faces of people of colour with varying skin tones and shapes".

"After all, data for these models are pulled from the entire internet over the last few decades — without accountability for coverage, diversity, and inclusion — to cater to specific applications."

Asia Jackson often has people guess at her ethnicity, and when she tells them, she usually gets a "you don't look like that" in response.

The Black and Filipino actress and content creator says that, as a child, being "mixed" created "a lot of identity issues".

Now 29, Ms Jackson says she has a stronger sense of self and identity.

"I definitely get way more offended when someone tells me 'I don't look Black' or 'don't look Asian'.

"Because both of these racial categories contain such a large spectrum of skin colour and features."

The same goes for when people tell Ms Jackson she "doesn't look Filipino".

"The Philippines is a country with more than 7,100 islands and hundreds of different ethnic groups," she says.

Ms Jackson feels "pretty indifferent" about the viral AI images.

"AI is just copying human behaviour, however non-inclusive or non-politically correct it might be," she says.

"This isn’t anything different from what happens in real life.

"At the same time, I really don't think it's possible to include the vast diversity of features or ethnicities from every country in a 30 second video."

When Ishara Sahama first saw the viral AI images, she found them "almost ethereal".

She then realised what she was seeing was "the most accepted beauty standards of each ethnic group".

"The diversity of ethnic groups within each country is generalised into one model. It's reductive and far from reflective of those countries' diversity," she says.

The 25-year-old co-founder of strategic design agency Echo Impact Group has been mistaken for Indian, Pakistani, Arab and Indigenous Australian.

She's Sri Lankan, with Sinhalese, Tamil and Malay backgrounds.

"Assuming one's ethnicity and then associating them with that without asking is what annoys me the most."

She says it's the "beauty stereotypes" seen in these AI images that cause people to think there is "one look" for each ethnicity.

"When people see these AI images of women, they may associate those features with what an Indian, Pakistani, or in my case, a Sri Lankan woman looks like," she says.

"I clearly don't look like those images.

"I think such AI has the capability to challenge perspectives and identities. But unfortunately, AI art can only respond to data it has."

Every feature in these AI-generated images fits into what is deemed "the model minority," says Kriti Gupta.

"It's all the parts of our ethnic group that the internet [which is informed by society's preferences] deem as attractive."

Ms Gupta, a 27-year-old Indian Australian who works in social media strategy and consulting, says she doesn't see herself in any of the images.

"They're what every guy thinks of when he has a 'brown girl fetish,'" she says.

Ms Gupta says people have assumed she is Spanish, Mexican, Moroccan or from other Latin American countries.

"I think I'm always in this middle ground of like, why are you assuming my ethnicity? What value does that bring to this conversation?" she says.

"I only bring up my background when I feel it's relevant to the conversation."

Like many South Asian women, Ms Gupta made changes to her appearance in Australia.

She dyed her hair blonde and didn't worry about letting her brown skin get darker from being in the sun, admitting it was to make herself more appealing to the male gaze here.

"But then I would go to India and douse myself in whitening cream to fit in there," she says.

Still, Ms Gupta knows she's not really seeing what most South Asian women look like in those AI images.

"We're seeing white-washed imagery," she says.

"Most of the time, these AI platforms are created by men, and most of the coding that goes into these algorithms to create these images from the world's use of the internet is fixated on a Western use."

Scholars and activists have warned that the datasets used to train AI models are biased.

And that can be problematic.

A research study from Cornell University, published in March this year, revealed how popular AI models produced images of men with lighter skin tones for high-paying jobs such as "lawyer," "judge" or "CEO".

Darker-skinned people, meanwhile, were over-represented in lower-paying professions such as "janitor" and "fast food worker".
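Audits like that one typically generate a batch of images per job title and then measure the skin tone of the faces depicted. Published studies use face detection and calibrated skin-tone scales; the sketch below is a far cruder stand-in that assumes a folder of pre-cropped generated faces per profession and uses mean pixel lightness as a rough proxy. The folder layout and the proxy metric are illustrative assumptions, not the study's method.

```python
# Crude audit sketch: compare average lightness of faces generated
# for different job prompts. Real audits use face detection and
# calibrated skin-tone scales; mean grayscale lightness of
# pre-cropped faces is only a rough stand-in.
from pathlib import Path

import numpy as np
from PIL import Image

def mean_lightness(image_path: Path) -> float:
    """Average lightness (0 = black, 255 = white) of a face crop."""
    gray = Image.open(image_path).convert("L")
    return float(np.asarray(gray, dtype=np.float64).mean())

# Hypothetical layout: one folder of generated face crops per prompt,
# e.g. audits/ceo/*.png, audits/janitor/*.png
for prompt_dir in sorted(Path("audits").iterdir()):
    scores = [mean_lightness(p) for p in prompt_dir.glob("*.png")]
    if scores:
        print(f"{prompt_dir.name}: mean lightness "
              f"{np.mean(scores):.1f} over {len(scores)} images")
```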

While there's no all-encompassing answer, Dr Antonette points to a few key actions.

"Tech-developers and companies rolling out services should ensure that their AI is fair and equitable by diversifying their datasets and avoiding over-representation of certain groups of people," she says.

"They should consider the implications of how their technology might be extended to contexts other than what it was originally built for."

Dr Antonette says it is also key for researchers to improve their accountability and transparency.

"This is by publishing open-source models that others can critique and build on by adding more diverse data," she says.

Those also using and viewing AI should do so critically and responsibly, Dr Antonette says.

"The creation of biased synthetic images can inadvertently fuel future biases, entangling us in vicious cycle.

"Embracing diversity in data, championing transparency, and using AI tools thoughtfully can lead us towards a future where AI benefits everyone, without perpetuating harmful stereotypes."