Does facial recognition software have a racial bias problem?

Over the past few weeks, maybe you’ve seen your feed fill up with art selfies: pictures of your friends matched with portraits of their fine art doppelgängers. Google’s Art & Culture app shows how technology can help foster engagement with art. But it also reveals art’s historical bias. Many people of color discovered that their results were limited, inaccurate or painfully stereotypical. Instead of being able to see themselves as part of art history, they could only see themselves outside it.

In this week’s episode of IRL, we explore the rise of facial recognition technology. From Snapchat filters to Apple’s Face ID, biometric technology plays a growing role in our everyday lives. Are these algorithms helping us see more clearly? Or are they reinforcing existing bias?

Emily Kennedy and Glynnis MacNicol talk about the power and risks of facial recognition for marginalized communities. Artist Adam Harvey investigates how racism seeps into big data sets. Steven Talley shares his experience as a victim of mistaken identity. Joseph Atick, a forefather of facial recognition technology, reckons with its future. And we head to China, where biometric data is part of buying toilet paper. Listen to these stories and more on this edition of IRL.

On this week’s episode, we heard from artist Adam Harvey about the impact of biometrics on art. View more of his work here.
