DALL-E 2 Treats 'Black' or 'Female' as Special! That's How OpenAI Makes It Inclusive


DALL-E 2 is at the centre of controversy for reportedly inserting bias-related terms into user prompts

DALL-E 2, the successor to DALL-E and OpenAI's latest AI image generator, has once again ruffled feathers, as usual for being biased, by stealthily including 'black' and 'women' in users' prompts. DALL-E 2 takes text prompts and generates images that closely match the description. But the point of contention remains: how fair can it be in avoiding stereotypes in its output? Ask it for a soup with monster eggs and it will generate exactly that image, and the result carries no harm such as subtle gender stereotyping or racial discrimination. In reality, though, things are a little edgier. Ever since the model was made publicly available, users and researchers have been voicing concerns over the bias its images carry.

Why is bias inevitable?

As the saying goes, AI models are only as good as the data they are fed. DALL-E 2 learned to create art by watching images and videos, which means that for every accurate image it generates, it can just as easily generate a socially biased one. Ask it for an image of an air hostess, for example, and it will most likely render a white woman. How many images of a Black woman as an air hostess, or of a female builder, would one find on the internet? Relatively few. This points to one thing: AI lacks the real, physical-world knowledge that humans have.

Prompt filtering also plays a role here. Although a filter can catch some obviously problematic terms, it has been found to let through descriptive or coded wording and visually similar suggestions (a toy illustration of how easily such filters are bypassed follows below). This can have a severe effect on how openly stereotypes are manifested. For example, the way the model depicts criminals when generating pictures for a social media company can produce unwanted results, and the same goes for queer couples: it might end up showing only men when asked for queer personalities.

Besides ingesting proper data, the AI needs something to optimize for, i.e., an objective. When it looks at a picture, it should be trained to watch out for societal factors such as bias, prejudice, and discrimination. Most importantly, the developers themselves must be conscious of their own social backgrounds and the personal prejudices they bring to a dataset if the AI is to exclude those biases. Most AI developer teams are white and male-dominated, and the DALL-E 2 team is no exception.
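As a toy illustration of why simple prompt filtering is easy to circumvent, consider a naive blocklist filter. This is a hypothetical sketch, not OpenAI's actual moderation system; the blocked terms and example prompts are assumptions made purely for demonstration.

```python
# Hypothetical illustration: a naive keyword blocklist like this is easy to
# bypass with descriptive or coded phrasing. It is NOT OpenAI's actual filter;
# the blocked terms are assumed examples.

BLOCKED_TERMS = {"criminal", "terrorist"}

def naive_prompt_filter(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) blocklist filter."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

# A direct prompt is caught...
print(naive_prompt_filter("a photo of a criminal"))            # False
# ...but a coded paraphrase with the same visual intent slips through.
print(naive_prompt_filter("a mugshot of a man in handcuffs"))  # True
```

The point of the sketch is simply that keyword matching operates on surface wording, while stereotypes live in the imagery a description evokes, so coded phrasing sails past the filter.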

Wishy-washy efforts of OpenAI

OpenAI, in its blog post 'Reducing Bias and Improving Safety in DALL·E 2', talks about including images of people from diverse backgrounds and improving its filters; in reality, these efforts are a far cry from what is needed. OpenAI's "red team", a group of external experts who look for loopholes in a product before it goes out, also had concerns about the prejudice and bias DALL-E 2 could carry. They demonstrated how it defaults to white men, overly sexualizes women, and reproduces racial stereotypes, problems so grave that some experts recommended excluding human faces from generation altogether. "There were a lot of non-white people whenever there was a negative adjective associated with the person," says Maarten Sap, an external red team member who researches stereotypes and reasoning in AI models. On the one hand, OpenAI acknowledges the difficulty of removing bias in its 'Risks and Limitations' document; on the other, it sidesteps accountability to users by silently adding race- and gender-related words to their prompts without their knowledge, roughly as sketched below.
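To make the alleged mechanism concrete, here is a minimal sketch of what silent prompt augmentation could look like: a demographic descriptor appended server-side to prompts that mention people. The trigger words, descriptor list, and function name are hypothetical assumptions for illustration only; this is not OpenAI's actual implementation.

```python
import random

# Hypothetical sketch of silent prompt augmentation (NOT OpenAI's real code):
# a demographic descriptor is appended to prompts that mention a person,
# without the user being told. The term lists below are assumptions.
DEMOGRAPHIC_TERMS = ["black", "female", "asian", "hispanic"]
PERSON_WORDS = {"person", "woman", "man", "doctor", "builder", "air hostess", "ceo"}

def augment_prompt(prompt: str) -> str:
    """Append a randomly chosen demographic descriptor if the prompt mentions a person."""
    lowered = prompt.lower()
    if any(word in lowered for word in PERSON_WORDS):
        return f"{prompt}, {random.choice(DEMOGRAPHIC_TERMS)}"
    return prompt

print(augment_prompt("a portrait of a doctor"))
# e.g. "a portrait of a doctor, female" -- descriptor chosen at random
print(augment_prompt("a bowl of soup with monster eggs"))
# unchanged: "a bowl of soup with monster eggs"
```

If something along these lines is happening behind the scenes, the diversity of the output comes from rewriting what the user asked for rather than from the model itself, which is exactly the accountability gap the article describes.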
