Let’s say you’ve submitted your résumé to Company X, hoping that all the work you’ve done on your cover letter will catch the eye of a discriminating member of the recruiting group. Perhaps your little joke about playing softball on the company team might even make you stand out from the pack.
But what if there is no recruiting group? No human one, that is. What if your CV is scanned by a computer algorithm, and that algorithm tosses out your résumé because it doesn’t like the fact that you played on a women’s softball team in university?
This, unfortunately, is not tomorrow’s dystopia; it’s today’s reality. A Reuters investigation last year revealed that Amazon’s planned résumé-vetting service was discounting applications that mentioned the word “women” or “women’s.” Why? Because the algorithm had been fed biased data. “Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period,” Reuters reported. “Most came from men, a reflection of male dominance across the tech industry. In effect, Amazon’s system taught itself that male candidates were preferable.”
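To make the mechanism concrete, here is a minimal, hypothetical sketch in Python; it is not Amazon’s system, and the toy data is invented. A simple text classifier trained on historical hiring outcomes that skew male ends up assigning a negative weight to the word “women,” so the word itself becomes a penalty.

```python
# Minimal sketch (hypothetical data, not Amazon's system): a résumé classifier
# trained on historically skewed hiring outcomes learns to penalize "women's".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past résumés and whether the candidate was hired.
# The outcomes reflect past bias, not merit.
resumes = [
    "captain of men's chess club, python developer",
    "men's rugby team, java engineer",
    "software engineer, hackathon winner",
    "women's softball team captain, python developer",
    "women's coding society lead, java engineer",
    "open source contributor, systems programmer",
]
hired = [1, 1, 1, 0, 0, 1]

# Turn résumés into word counts and fit a simple classifier on the outcomes.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the weight learned for the token "women": on this data it comes out
# negative, so any new résumé containing it is scored lower.
vocab = vectorizer.vocabulary_
print("weight for 'women':", model.coef_[0][vocab["women"]])
```

On this toy data the learned weight for “women” is negative, which is the pattern Reuters described: no one programmed the rule, the model inferred it from who was hired in the past.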
Amazon scrapped that software, but bias in machine learning persists in ways that affect people every day and remain largely invisible. Some of those baked-in biases are outlined in a new report, Discriminating Systems: Gender, Race and Power in AI, from the AI Now Institute at New York University. They include facial-recognition software used by Uber that fails to recognize trans drivers; sentencing algorithms biased against black defendants; health-management systems that allocate resources to wealthier patients; and chatbots that begin to spout racist and sexist language once they’re launched.
These biases matter because, as systems become more automated, they’ll affect all aspects of our lives, from education to transportation to health care. Right now, there’s little oversight of how and when these systems are deployed – and, crucially, of who develops them.
The current state of women’s employment in AI fields is “dire,” the report warns – and it’s even worse for black and Latino people, whose representation at major tech companies is in the low single digits. The report cites the experience of one researcher, Timnit Gebru, who attended a machine-learning conference in 2016 and discovered she was one of only six black delegates – out of 8,500 participants.
Diversity among researchers and academics in the AI field is only one problem. As the AI Now report explains, much of the research that will ultimately transform our world is being conducted by a small, powerful group of companies operating under a cloak of proprietary secrecy, with little outside oversight and little consideration of anything beyond getting products to market quickly.
There’s reason to question whether some technologies – around surveillance, for example – should ever be rolled out at all. Yet there’s very little discussion, at least so far, about what role the public good should play in the development of these technologies. Expediency and profit are the only goalposts.
We should be having more of the types of public discussions that Yuval Noah Harari, a bestselling historian and philosopher, recently conducted with Fei-Fei Li, the co-director of Stanford University’s newly launched Human-Centred AI Institute. We need to start thinking differently, Dr. Harari said, when we build the tools that will shape the future: “What could be the cultural or political implications of what we’re building? It shouldn’t be a kind of afterthought that you create this neat technical gadget, it goes into the world, something bad happens and then you start thinking, ‘Oh, we didn’t see this one coming. What do we do now?’”
In other words, it’s nearly impossible to catch a horse that has already bolted from the barn. Dr. Harari proposed a couple of solutions: one, people should engage in deep self-study, so that algorithms don’t end up making better decisions for us than we can make ourselves; and two, ethics should be a fundamental part of creating AI tools, which means actually putting ethics on the curriculum for developers and engineers. (I’m not sure which of these will be more difficult.)
Dr. Li, who said she “wake[s] up every day worried about the diversity, inclusion issue in AI,” echoed those thoughts. To perform better, she said, algorithms need to be developed by people with different backgrounds, with input from historians and philosophers, legal scholars and psychologists. Otherwise, they’ll end up replicating the narrow world view of the tiny group that created them.
The time to have these conversations is now, because the future is here, and it’s everywhere.