No Eye in AI
Are the eyes of AI generated faces the key to understanding why they look so… off?
Oscelle Boye | 4 min read | Discussion
Although it has long been a topic of discourse and an area rife with innovation, AI seems to have truly taken the world by storm recently. From increasingly accurate and accessible deepfake technology to advanced chatbots being used by students to fraudulently ‘complete’ essays and assignments, AI – as well as the ethical questions it poses – has been positioned at the forefront of society like never before. Even in the medical, and specifically the ophthalmic, sphere, there has been a significant increase in the development of AI-related technologies and applications, in research assessing such tools, and in enthusiasm about the promise they hold for healthcare – something discussed in our penultimate feature of 2022. Today, however, I want to shift focus (only briefly) to AI-generated art – not to wade into the recent controversy surrounding the murky ethics of copyright and ownership of the data on which machine learning algorithms are trained, but to answer a seemingly simpler question: why do AI artists struggle to generate eyes?
If you’ve ever used an AI art generation tool or visited websites such as This Person Does Not Exist, you’ve likely seen that, while the ability of AI to render a realistic human face is impressive, there’s just something… not quite right about the eyes that stare back (1). According to a preprint study conducted at the State University of New York, the source of this discomfort lies within the pupils of the AI-generated faces, which are irregularly shaped – unlike the circular or elliptical pupils of real humans in photographs (2). It is this difference that we perceive, albeit subconsciously, when our brains flag AI-generated ‘photographs’ as being off – and one that, according to the research team, could be used as a method of identifying and countering the malicious use of realistic-looking fakes. But if identifying fakes is so simple, why aren’t AI models just taught what a human pupil actually looks like?
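To get a feel for how pupil irregularity could serve as a detection signal, consider a minimal sketch in Python. Note that this is a hypothetical illustration, not the study’s actual method (the researchers compare segmented pupil boundaries against fitted ellipses): here, a pupil outline is scored with a simple circularity measure, 4πA/P², which equals 1 for a perfect circle and drops as the boundary grows jagged.

```python
import numpy as np

def circularity(boundary):
    """Score a closed outline (N x 2 array of points, in order):
    4*pi*area / perimeter^2 is 1.0 for a perfect circle and
    lower for irregular, jagged shapes."""
    x, y = boundary[:, 0], boundary[:, 1]
    # Shoelace formula for the enclosed area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # Perimeter: distances between consecutive points, wrapping around
    perimeter = np.sum(
        np.linalg.norm(boundary - np.roll(boundary, -1, axis=0), axis=1)
    )
    return 4 * np.pi * area / perimeter**2

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)

# A real pupil, idealized: points on a circle of radius 1
round_pupil = np.column_stack([np.cos(theta), np.sin(theta)])

# A mock "AI-generated" pupil: the radius wobbles irregularly around 1
rng = np.random.default_rng(0)
r = 1 + 0.25 * rng.standard_normal(theta.size)
irregular_pupil = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

print(circularity(round_pupil))      # close to 1.0
print(circularity(irregular_pupil))  # much lower
```

A detector along these lines would segment the pupil from a face image first; the scoring step would then only need a threshold on the circularity score to flag a suspicious outline.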
Generative adversarial networks (GANs), the machine learning framework commonly used to generate such fake images, pit two models against each other: a discriminator model learns to distinguish real images from images produced by a generator model, which gradually improves its output as part of a zero-sum game until the images it generates fool the discriminator about 50 percent of the time. Although, through this method, the images become increasingly similar to the dataset on which the GAN is trained, neither the discriminator nor the generator has any real knowledge of the anatomy of the human eye, which is why artefacts are not caught and removed when they appear. Artefacts arise in the rendering of almost anything, particularly objects with finer details; the problem for AI is that humans are drawn to eyes. Research has shown that the eyes are the first feature we instinctively fixate on when we look at a face, even when we are prompted to look at other facial features (3). It’s no wonder, then, that we all have an extensive, albeit usually unconscious, knowledge of what a human eye looks like – and that we’re so attuned to when something is just not quite right.
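The zero-sum game described above can be sketched in a few lines of Python. This toy example – my own illustration, not the architecture of any real face generator – shrinks everything down to one dimension: the "real data" are numbers drawn from a Gaussian, the generator is an affine map of noise, and the discriminator is a logistic classifier. Each round, the discriminator is nudged to tell real from fake, and the generator is nudged to fool it; over many rounds, the generator’s samples drift toward the real distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1 / (1 + np.exp(-t))

def real_batch(n):
    # "Real data": samples from a Gaussian with mean 4
    return 4 + 1.25 * rng.standard_normal(n)

a, b = 1.0, 0.0   # generator g(z) = a*z + b, a learnable map of noise
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.01, 64

for step in range(3000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    xr = real_batch(n)
    xf = a * rng.standard_normal(n) + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))  # ascend log-likelihood
    c += lr * (np.mean(1 - dr) - np.mean(df))
    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.standard_normal(n)
    df = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake = a * rng.standard_normal(10000) + b
print(np.mean(fake))  # drifts toward the real mean of 4
```

Notice that the generator’s update only ever consults the discriminator’s opinion – nowhere is there a rule saying what the data "should" look like. That is the article’s point in miniature: nothing in the loop encodes anatomy, so any statistical quirk the discriminator fails to penalize survives into the output.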
Machine learning algorithms don’t think the way humans do, so they don’t work to increase accuracy in the areas of a photo that humans are likely to study most closely; instead, they aim to make the whole photo as accurate as possible. At the end of the day, it seems the problem of artificial eyes comes down to the different places that machines and humans look.
1. This Person Does Not Exist (2023). Available at: http://bit.ly/3DC84Rg
2. H Guo et al., ICASSP 2022, 1, 2904 (2022).
3. SJ Thompson et al., Acta Psychologica, 193, 229 (2019).
I have always been fascinated by stories. During my biomedical sciences degree, though I enjoyed wet lab sessions, I was truly in my element when sitting down to write up my results and find the stories within the data. Working at Texere gives me the opportunity to delve into a plethora of interesting stories, sharing them with a wide audience as I go.