Can Computer Programs Be Racist And Sexist?

Last summer, Jacky Alciné learned just how biased computers can be. Alciné, who is African-American, took a bunch of pictures with friends at a concert. Later he loaded them into Google Photos, which stores and automatically organizes images.

Google’s software is able to group together pictures of a particular friend, or pictures of dogs, cats, etc. But when it labeled a picture of one of Alciné’s friends, who is also African-American, it left him speechless.

“It labeled it as something else. It labeled her as a different species or creature,” says a horrified Alciné. Because it’s so cliché he doesn’t even want to say what creature it was. “I kind of refuse to. By saying that, I kind of reinforce the idea of it.”

I’m not going to reveal which animal label it gave his friend. But the same mislabeling also happened to others with dark skin. Alciné isn’t buying that it’s just some weird technical glitch.

“One could say, ‘Oh, it’s a computer,’ I’m like, OK … a computer built by whom? A computer designed by whom? A computer trained by whom?”

Alciné’s conclusion is that there probably weren’t any black people on the team that designed Google Photos. Google says it did test the product on employees of different races and ethnicities, and it has apologized for what happened. The company says it’s still early days for image-labeling technology, and it’s working to improve it.

Alciné’s experience is one of many strange biases that turn up in computer algorithms, which sift through data for patterns.

Most of us are familiar with suggestion algorithms used by Amazon and Netflix — if you liked this movie, you’ll probably like this one. For example, the computer may learn over time that viewers who liked The Terminator also enjoyed Ex Machina.
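To make that concrete, here is a minimal sketch of how a “people who liked X also liked Y” recommender can work, assuming a toy set of viewing histories. The data and function names are hypothetical, not Amazon’s or Netflix’s actual systems.

```python
from collections import defaultdict

# Toy viewing history: which users liked which titles (hypothetical data).
likes = {
    "ana":   {"The Terminator", "Ex Machina"},
    "ben":   {"The Terminator", "Ex Machina", "Her"},
    "carol": {"The Terminator", "Alien"},
    "dev":   {"Her", "Ex Machina"},
}

def also_liked(title):
    """Count how often other titles co-occur with `title` across users."""
    counts = defaultdict(int)
    for watched in likes.values():
        if title in watched:
            for other in watched - {title}:
                counts[other] += 1
    # Most frequently co-liked titles first.
    return sorted(counts, key=counts.get, reverse=True)

print(also_liked("The Terminator"))  # ['Ex Machina', 'Her', 'Alien']
```

Whatever users click on most is what the system learns to surface, which is exactly why the same machinery can pick up human prejudice when the clicks themselves are biased.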

But in another context, user feedback can harden societal biases. A couple of years ago, a Harvard study found that when someone searched Google for a name more commonly associated with African-Americans, an ad for a company that finds criminal records was more likely to turn up.

The algorithm may initially have done this for both black and white people, but over time the biases of the people who did the search probably got factored in, says Christian Sandvig, a professor at the University of Michigan’s School of Information.

“Because people tended to click on the ad topic that suggested that that person had been arrested when the name was African-American, the algorithm learned the racism of the search users and then reinforced it by showing that more often,” Sandvig says.
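A toy simulation makes Sandvig’s point vivid. In this sketch (an illustration of the feedback loop, not Google’s actual ad system), two ad variants start out equally weighted, but a small assumed difference in how often users click one of them compounds so that variant is shown more and more often.

```python
import random

random.seed(0)

# Two hypothetical ad variants that can appear for the same name search.
weights = {"arrest_ad": 1.0, "neutral_ad": 1.0}

# Assumed user behavior: the 'arrest' copy gets clicked slightly more
# often for some names -- the bias pattern the Harvard study suggests.
click_prob = {"arrest_ad": 0.12, "neutral_ad": 0.10}

for _ in range(10_000):
    # Serve an ad in proportion to its learned weight.
    shown = random.choices(list(weights), weights=list(weights.values()))[0]
    # A click reinforces the variant that was shown.
    if random.random() < click_prob[shown]:
        weights[shown] += 1.0

total = sum(weights.values())
for ad, weight in weights.items():
    print(f"{ad}: {weight / total:.0%} of the display weight")
```

Even a two-point gap in click rates is enough: once the more-clicked variant is shown more, it collects more clicks, which earns it still more impressions.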

He says other studies show that women are more likely than men to be shown lower-paying jobs in online ads. Sorelle Friedler, a computer science professor at Haverford College in Pennsylvania, says women may reinforce this bias without realizing it.

“It might be that women are truly less likely to click on those ads and probably that’s because of the long history of women making less than men,” she says. “And so perhaps [women are] thinking, ‘Oh, that ad isn’t really for me. I’m not as likely to get that job.’ ”

And so the algorithm determines it should no longer show those ads to women, because they don’t click.
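To see why the ads can vanish entirely rather than just become rarer, consider a greedy targeting rule, assuming, purely for illustration, that the system serves whichever ad has the higher measured click-through rate for each audience segment. Real ad systems are far more complicated, and these figures are invented.

```python
# Hypothetical measured click-through rates, broken out by audience segment.
ctr = {
    ("exec_job_ad", "men"):     0.030,
    ("exec_job_ad", "women"):   0.018,  # fewer clicks, the pattern Friedler describes
    ("retail_job_ad", "men"):   0.022,
    ("retail_job_ad", "women"): 0.025,
}

def pick_ad(segment, ads=("exec_job_ad", "retail_job_ad")):
    """Greedy rule: always serve whichever ad has the higher measured CTR."""
    return max(ads, key=lambda ad: ctr[(ad, segment)])

print(pick_ad("men"))    # exec_job_ad
print(pick_ad("women"))  # retail_job_ad -- the higher-paying ad is never shown
```

Because the greedy rule never experiments, the segment that once clicked less never sees the higher-paying ad again, so its measured interest can never recover. That is the lock-in the article describes.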

Read more: http://www.npr.org/sections/alltechconsidered/2016/03/15/470422089/can-computer-programs-be-racist-and-sexist

Camilla Wood

UK-based Legal Aid Lawyer
