I don't know whether I would call it bias or not (and this is Apple's model falling for the same "poor means black" fallacy we've seen before), but I've found their image generation models to be incredibly poor at matching the photo you provide. I've tried it with many different photos of myself, and the results all vary wildly and look nothing like me.
It's comical how bad Apple's image generation models are.
“I could not replicate the above results with different photos, although I imagine that this will be possible with more effort.”
I have a feeling that the red background lighting in that image is what is causing confusion for the model.
That being said, I’m not surprised, and I’m not sure there’s an obvious solution given current tech. I think Apple is making the right choices here to “safely” or benignly provide a tiptoe into image generation for the public.
I had one extensive exchange with ChatGPT to generate an image of a man and a woman working together on a leather crafting project. No matter the prompt, the man was systematically the one "working" while the woman was there to assist.
Bias correction in images feels a lot more primitive than in text.
Seems like more of a bug than bias. The problem is that it ignores the appearance of the person in the first place. It's a statistical model, and of course there are more black rappers and white investment bankers. If it noticed that the person was white to begin with, and applied that trait, it wouldn't have to guess about race at all.
All this marketability tuning may someday result in models which are extremely finely attuned to our current societal norms and taboos.
At which point the models will stop reinforcing (racial/gender) biases and start reinforcing said taboos instead. I don't think anyone wants that either.
> This input
Honestly, the input doesn't seem very well chosen. It's a very low-resolution picture of someone with red eyes, cropped to a circle, with a grey icon partially on top of it, and with somebody else half outside the frame.
How do you solve this without getting Gemini-style racially diverse Nazis?
AI safety people are worrying that basketball players don't have a perfectly balanced ethnic representation while mega corporations are trying to establish a monopoly on intelligence.