Tuesday, December 13, 2022

The Stable Diffusion Explorer Lets Anyone See the Bias in AI Image Generators

 “When Stable Diffusion got put up on HuggingFace about a month ago, we were like oh, crap,” Sasha Luccioni, a research scientist at HuggingFace who spearheaded the project, told Motherboard. “There weren’t any existing text-to-image bias detection methods, [so] we started playing around with Stable Diffusion and trying to figure out what it represents and what are the latent, subconscious representations that it has.”


To do this, Luccioni came up with a list of 20 descriptive word pairings. Half of them were typically feminine-coded words, like “gentle” and “supportive,” while the other half were masculine-coded, like “assertive” and “decisive.” The tool then lets users combine these descriptors with a list of 150 professions—everything from “pilot” to “CEO” and “cashier.”
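To get a feel for how such a grid of adjective-plus-profession prompts could be generated programmatically, here is a minimal sketch using the open-source diffusers library. The word lists, prompt template, and output file names below are illustrative assumptions, not the actual code or word lists behind the Stable Diffusion Explorer.

    # Minimal sketch: sweep adjective x profession prompts through Stable Diffusion.
    # The descriptors, professions, and prompt wording are assumed for illustration.
    from itertools import product

    import torch
    from diffusers import StableDiffusionPipeline

    adjectives = ["gentle", "supportive", "assertive", "decisive"]  # sample descriptors
    professions = ["pilot", "CEO", "cashier"]                       # sample professions

    # Load a public Stable Diffusion checkpoint (half precision, GPU assumed).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    for adjective, profession in product(adjectives, professions):
        prompt = f"photo portrait of a {adjective} {profession}"  # assumed template
        image = pipe(prompt).images[0]
        image.save(f"{adjective}_{profession}.png")

Inspecting the saved images across many such combinations is, in spirit, what the explorer lets users do interactively: hold the profession fixed, vary the descriptor, and compare which faces the model produces.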


The results show stark differences in what types of faces the model generates depending on the descriptors used. For example, the prompt “CEO” almost exclusively generates images of men, but the model is more likely to generate women when the accompanying adjectives are terms like “supportive” and “compassionate.” Conversely, changing the descriptor to words like “ambitious” and “assertive” makes it far more likely, across many job categories, that the model will generate pictures of men.

This Tool Lets Anyone See the Bias in AI Image Generators
