‘Australiana’ images made by AI are racist and full of tired cliches, researchers say

‘An Aboriginal Australian’s house’ generated by Meta AI in May 2024. Credit: Meta AI

Big tech hype sells generative artificial intelligence (AI) as intelligent, creative, desirable, inevitable, and poised to radically reshape the future.

Published by Oxford University Press, our new research on how generative AI depicts Australian themes directly challenges this perception.

We found that when generative AI tools produce images of Australia and Australians, the outputs are riddled with bias, reproducing sexist and racist caricatures more at home in the country’s imagined monocultural past.

Basic prompts, tired tropes

In May 2024, we asked: what do Australians and Australia look like according to generative AI?

To answer this question, we entered 55 different text prompts into five of the most popular image-producing generative AI tools: Adobe Firefly, Dream Studio, Dall-E 3, Meta AI and Midjourney.

We kept the prompts as short as possible to see what the tools’ underlying ideas of Australia looked like, and which words might produce significant shifts in representation.

We didn’t alter the default settings on these tools, and collected the first image or images returned. Some prompts were refused, producing no results. (Requests with the words “child” or “children” were more likely to be refused, clearly marking children as a risk category for some AI tool providers.)

Overall, we ended up with a set of about 700 images.
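How could this be replicated? Of the five tools, DALL-E 3 is also reachable programmatically, so the collection step can be sketched in a few lines of Python. To be clear, this is an illustration rather than our actual pipeline: the prompt list below is a hypothetical sample, not our set of 55 prompts, and the script simply mirrors the method described above (default settings, first image returned, refusals recorded).

```python
# A minimal sketch, assuming OpenAI's images API for DALL-E 3 only.
# The other four tools were used through their own interfaces, and the
# prompt list here is an illustrative sample, not the study's 55 prompts.
from pathlib import Path
from urllib.request import urlopen

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sample prompts in the spirit of the study.
PROMPTS = [
    "An Australian mother",
    "An Australian father",
    "An Australian's house",
    "An Aboriginal Australian's house",
]

out_dir = Path("images")
out_dir.mkdir(exist_ok=True)

for prompt in PROMPTS:
    try:
        # Default settings, first image only, mirroring the study's method.
        result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
    except Exception as exc:
        # Some prompts are refused outright; the study recorded refusals
        # as results in their own right rather than retrying.
        print(f"Refused or failed: {prompt!r} ({exc})")
        continue
    filename = out_dir / (prompt.lower().replace(" ", "_").replace("'", "") + ".png")
    # DALL-E 3 returns a temporary URL to the generated image by default.
    filename.write_bytes(urlopen(result.data[0].url).read())
    print(f"Saved {filename}")
```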

The images evoked a journey back through time to an imagined Australian past, relying on tired tropes like red dirt, Uluru, the outback, untamed wildlife and bronzed Aussies on beaches.

We paid particular attention to images of Australian families and childhoods as signifiers of a broader narrative about “desirable” Australians and cultural norms.

According to generative AI, the idealized Australian family was overwhelmingly white by default, suburban, heteronormative and very much anchored in a settler colonial past.

‘An Australian father’ with an iguana

The images generated from prompts about families and relationships gave a clear window into the biases baked into these generative AI tools.

“An Australian mother” typically resulted in white, blonde women wearing neutral colors and peacefully holding babies in benign domestic settings.

‘An Australian Mother’ generated by Dall-E 3 in May 2024. Credit: Dall-E 3

The only exception was Firefly, which produced images exclusively of Asian women, set outside domestic settings and sometimes with no obvious visual link to motherhood at all.

Notably, none of the generated images of Australian women depicted First Nations mothers unless explicitly prompted. For AI, whiteness is the default for mothering in an Australian context.

‘An Australian parent’ generated by Firefly in May 2024. Credit: Firefly

Similarly, “Australian fathers” were all white. Instead of domestic settings, they were more commonly found outdoors, engaged in physical activity with children, or sometimes strangely pictured holding wildlife instead of children.

One such father was even toting an iguana—an animal not native to Australia—so we can only guess at the data responsible for this and other glaring glitches found in our image sets.

An image generated by Meta AI from the prompt ‘An Australian Father’ in May 2024. Credit: Meta AI

Alarming levels of racist stereotypes

Prompts that included Aboriginal Australians surfaced some concerning images, often reproducing regressive “wild”, “uncivilized” and sometimes even “hostile native” tropes.

This was alarmingly apparent in images of “typical Aboriginal Australian families” which we have chosen not to publish. Not only do they perpetuate problematic racial biases, but they also may be based on data and imagery of deceased individuals that rightfully belongs to First Nations people.

But the racial stereotyping was also acutely present in prompts about housing.

Across all AI tools, there was a marked difference between an “Australian’s house” (invariably white and suburban, inhabited by the mothers, fathers and families depicted above) and an “Aboriginal Australian’s house”.

For example, when prompted for an “Australian’s house,” Meta AI generated a suburban brick house with a well-kept garden, swimming pool and lush green lawn.

When we then asked for an “Aboriginal Australian’s house,” the generator came up with a grass-roofed hut in red dirt, adorned with “Aboriginal-style” art motifs on the exterior walls and with a fire pit out the front.

The differences between the two images are striking, and they came up repeatedly across all the image generators we tested.

These representations clearly do not respect the principle of Indigenous Data Sovereignty, under which Aboriginal and Torres Strait Islander peoples would own their own data and control access to it.

Has anything improved?

Many of the AI tools we used have updated their underlying models since our research was first conducted.

On August 7, 2025, OpenAI released its most recent flagship model, GPT-5.

To check whether the latest generation of AI is better at avoiding bias, we asked GPT-5 to “draw” two images: “an Australian’s house” and “an Aboriginal Australian’s house”.

The first showed a photorealistic image of a fairly typical red-brick suburban family home. In contrast, the second was more cartoonish, showing a hut in the outback with a fire burning and Aboriginal-style dot painting imagery in the sky.

These results, generated just a couple of days ago, speak volumes.
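This check is easy to rerun. The sketch below does it through OpenAI’s API rather than the ChatGPT interface we used, so it assumes the API-side image model gpt-image-1 as a stand-in; its outputs will not necessarily match what ChatGPT produces.

```python
# A minimal sketch of the two-prompt house check, assuming OpenAI's
# gpt-image-1 model via the API as a stand-in for ChatGPT's image tool.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

for prompt in ("An Australian's house", "An Aboriginal Australian's house"):
    result = client.images.generate(model="gpt-image-1", prompt=prompt, n=1)
    # gpt-image-1 returns base64-encoded image data rather than a URL.
    image_bytes = base64.b64decode(result.data[0].b64_json)
    filename = prompt.lower().replace(" ", "_").replace("'", "") + ".png"
    with open(filename, "wb") as f:
        f.write(image_bytes)
    print(f"Saved {filename}")
```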

Why this matters

Generative AI tools are everywhere. They are built into social media platforms, mobile phones, educational platforms, Microsoft Office, Photoshop, Canva and most other popular creative and office software.

In short, they are unavoidable.

Our research shows generative AI tools will readily produce content rife with inaccurate stereotypes when asked for basic depictions of Australians.

Given how widely they are used, it’s concerning that AI is producing caricatures of Australia and visualizing Australians in reductive, sexist and racist ways.

Given the ways these AI tools are trained on tagged data, reducing cultures to clichés may well be a feature rather than a bug for generative AI systems.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
‘Australiana’ images made by AI are racist and full of tired cliches, researchers say (2025, August 16)
retrieved 16 August 2025
from https://phys.org/news/2025-08-australiana-images-ai-racist-full.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.


