Machine learning sounds futuristic, but it’s already at work when Google Maps gives directions or Netflix recommends a movie. Yet real-world cultural bias is already creeping into artificial intelligence, and without careful solutions the problem is likely to grow, Camille Eddy told participants at the Nonprofit Technology Conference.
Ms. Eddy, a mechanical-engineering student at Boise State University and a robotics intern at HP Labs, said algorithms can be distorted when the data sets used to train them aren’t diverse.
Take, for example, an exercise in word associations using Word2Vec, a widely used set of word relationships learned from huge amounts of online text and used to train search engines. After being given the word pairing “man” and “computer programmer,” Word2Vec matched “woman” with “homemaker,” Ms. Eddy said. The pairing of “father” and “doctor” led to “mother” and “nurse.”
“This data set is brazenly sexist,” Ms. Eddy said. It’s not that someone set out to create a biased data set, she explained. Word2Vec is based on how people use words online. “That’s an important thing to know. Not all models are going to be designed inappropriately. It also comes from how we use language in our everyday lives.”
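Analogy exercises like the ones Ms. Eddy described can be reproduced with off-the-shelf word embeddings. The sketch below is a minimal illustration, assuming the gensim library and its downloadable Google News Word2Vec vectors; it asks the model to complete “man is to computer programmer as woman is to ___.” The exact completions depend on which pretrained vectors are loaded, but published analyses of these vectors report “homemaker” among the top answers.

```python
# A minimal sketch of the analogy exercise described above, assuming the
# gensim library and its downloadable Google News Word2Vec vectors.
import gensim.downloader as api

# Load pretrained Word2Vec embeddings trained on Google News text (a large download).
vectors = api.load("word2vec-google-news-300")

# "man" is to "computer_programmer" as "woman" is to ___ ?
# The query adds the "woman" and "computer_programmer" vectors, subtracts
# the "man" vector, and returns the nearest words in the embedding space.
print(vectors.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=5,
))

# "father" is to "doctor" as "mother" is to ___ ?
print(vectors.most_similar(
    positive=["mother", "doctor"],
    negative=["father"],
    topn=5,
))
```

The point of the exercise is that nothing in the code singles out gender; the associations emerge entirely from patterns in the text the vectors were trained on.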