The people you manage may not be ethical and you therefore need guardrails to protect yourself and your organization. But the same is true of the machines you manage – the apps and algorithms of artificial intelligence that act on your behalf. That’s harder to get your head around and probably trickier to control.
“When it comes to AI, there are loads of ethical risks that need mitigating,” consultant Reid Blackman writes in Ethical Machines.
He points to an Uber self-driving car that killed a woman, the investigation of Goldman Sachs for creating AI that set credit card limits lower for women than for men, and Amazon abandoning its resume-reading AI after two years because the company couldn’t figure out how to stop it from discriminating against women.
The first generation of algorithms pervading our workplace, he notes, involved figuring out how certain inputs would lead to optimal outcomes. An insurance company, for example, might decide age would count twice as much as gender in determining premiums. But now, thanks to increased processing power and data, we are moving into the age of machine learning, in which the machine studies the information at its disposal and learns on its own.
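To make that shift concrete, here is a minimal sketch, not drawn from the book, contrasting a hand-built scoring rule with a model that fits its own weights to historical outcomes. All of the inputs, weights and numbers are hypothetical.

```python
# Illustrative sketch only (not from the book): a hand-built scoring rule
# versus a model that learns its weights from historical data.
# All inputs, weights and numbers here are hypothetical.

import numpy as np

# First-generation approach: humans pick the inputs and their weights.
def premium_rule(age, risk_score, base=500.0):
    # e.g., the insurer decides age counts twice as much as the other factor
    return base + 2.0 * age + 1.0 * risk_score

# Machine-learning approach: the weights are fitted to historical outcomes,
# so whatever patterns sit in that history -- fair or not -- get learned.
history_X = np.array([[25, 3.0], [40, 1.5], [60, 2.0], [33, 4.5]])  # past applicants
history_y = np.array([620.0, 585.0, 660.0, 655.0])                  # premiums charged
learned_weights, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(len(history_X)), history_X]), history_y, rcond=None
)

print(premium_rule(age=40, risk_score=2.0))
print(learned_weights)  # intercept and per-input weights inferred from the data
```

The contrast is the point: in the first function a human decided the weights, while in the second whatever relationships exist in the historical record determine them, which is exactly how past unfairness gets baked in.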
In Amazon’s case, it learned to shortchange women because historically they did not fare as well as men in hiring, and the machine didn’t know the patterns it was unearthing were unjust.
Bias is one of three factors to be alert to as you scrutinize your machines’ ethics. As well, the need for as much data as you can obtain to train your AI may lead you into treacherous territory: invading privacy. Indeed, Mr. Blackman notes that by gathering data from multiple sources, an AI can make inferences about people that are true but that those individuals don’t want companies to know. The third challenge is explainability: Can you and others in the company tell somebody why your machine made the decision it did?
“Using AI, a company might not know why it declined that request for a mortgage, why it issued that credit limit, or why it gave this person and not that person a job ad or an interview,” he says.
At a deeper level, you need to be clear on your organization’s ethics. He says there are actually about two dozen metrics for fairness, and they are not compatible with each other. “That means that an ethical judgment needs to be made for which data scientists and engineers are ill-equipped,” he writes.
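To see why those fairness metrics can pull in different directions, here is a small sketch, again not from the book, comparing two widely used definitions, demographic parity and equal opportunity, on the same made-up lending decisions. The groups, decisions and qualification labels are all hypothetical.

```python
# Illustrative sketch only (not from the book): two common fairness metrics
# scored on the same hypothetical decisions. Satisfying one does not
# guarantee satisfying the other, so someone must choose between them.

def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    hits = sum(d for d, q in zip(decisions, qualified) if q)
    return hits / sum(qualified)

# Hypothetical loan decisions (1 = approved) and ground-truth qualification
# for two groups of applicants.
group_a = {"decisions": [1, 1, 0, 0], "qualified": [1, 1, 0, 0]}
group_b = {"decisions": [1, 1, 0, 0], "qualified": [1, 1, 1, 0]}

# Demographic parity: both groups are approved at the same 50% rate.
print(selection_rate(group_a["decisions"]), selection_rate(group_b["decisions"]))

# Equal opportunity: qualified applicants in group B are approved less often
# (2 of 3) than in group A (2 of 2) -- the same decisions fail this metric.
print(true_positive_rate(group_a["decisions"], group_a["qualified"]),
      true_positive_rate(group_b["decisions"], group_b["qualified"]))
```

In this made-up example both groups are approved at the same rate, so demographic parity holds, yet qualified applicants in one group are approved less often than in the other, so equal opportunity fails. Deciding which definition matters more is the ethical judgment Mr. Blackman argues data scientists and engineers are ill-equipped to make alone.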
He lists six ways bias generally creeps in:
- Real-world discrimination: With so much discrimination around us, the historical data you use to grant mortgages or sift through resumes is going to reflect exactly what you don’t want to repeat.
- Undersampling: The data you use to train your machines may not capture the full complexity of the world you are grappling with. An example he offers: To schedule transportation, you might study the travel patterns of people commuting to and from work using the geolocations of smartphones during commuting hours. But that excludes people who can’t afford a smartphone.
- Proxy bias: Sometimes you can’t get exact data about the subject of your interest, so you use a proxy. To create a risk rating for criminal defendants, for example, you might want to know the likelihood they will commit a crime within two years of release. The available data, however, is about who was arrested for a crime, and certain populations are arrested at higher rates, so the proxy imports bias.
- Coarse-grained data: People are different, and if you treat them all the same you run into problems. Diabetes presents differently across ethnicities and genders, so a diabetes-detecting AI can’t rely on general, broad-based data; it needs data that reflects specific groups.
- Benchmark or testing bias: It seems smart to test your AI against a benchmark, such as checking how your mortgage-lending AI compares with most mortgage lenders. But if those lenders are discriminating against Black people, you could wind up doing the same.
- Objective function bias: Your goals themselves may lead to bias. He offers the example of an AI helping to decide who gets a lung transplant. Because you have asked it to make the lungs last as long as possible, it ends up favouring white people, who on average live longer than Black people. At one level your metric makes sense; at another, it’s discriminatory.
If you are thinking a good way to counter this is ensuring greater diversity and a broader range of stakeholders in developing your AI, Mr. Blackman warns there is no evidence that actually helps. It’s nice to do, but for bias identification and mitigation you need expertise in this complex field, along with structures that can guard against the unethical machines you may be unleashing and overseeing. That starts with becoming crystal clear on your ethical standards; making your data scientists, engineers and product managers aware of the issues; and providing product development teams with tools to help them ponder the ethical risks of the products they are working on.
Cannonballs
- Turning our attention back to humans, incentives can unintentionally spur unethical behaviour in four ways, a new study finds. Goal-based incentives increase the risk of unethical behaviour, especially when goals seem out of reach. Incentive systems that are not monitored are vulnerable to abuse, as with a call centre where employees were told auditing was being cut back. Large disparities in pay and promotions between managers and subordinates, even when based on performance, often foster unethical behaviours, such as padding expense accounts, stealing and sabotage. Team-based incentives can lead team members to ignore, conceal or lie about peers’ ethical lapses to avoid disrupting their teams.
- Research shows shoppers appreciate seeing diverse models on e-commerce sites, and seeing many different people wearing an item helps them feel more confident in their purchase decisions.
- Consultant Denise Lee Yohn says the most important leadership quality is love. Leading with love means taking time to get to know your employees personally, demonstrating love by trusting them, and being concerned about them as whole persons.
Harvey Schachter is a Kingston-based writer specializing in management issues. He and Sheelagh Whittaker, former CEO of both EDS Canada and Cancom, are the authors of When Harvey Didn’t Meet Sheelagh: Emails on Leadership.