Modern life runs on intelligent algorithms. The data-devouring, self-improving computer programs that underlie the artificial intelligence revolution already determine Google search results, Facebook news feeds and online shopping recommendations. Increasingly, they also decide how easily we get a mortgage or a job interview, the chances we will be stopped and searched by the police on our way home, and what penalties we face if we commit a crime.

So they must be unimpeachable in their decision-making, right? Wrong. Skewed input data, faulty logic or simply the prejudices of their programmers mean AIs all too easily reproduce and even amplify human biases - as the following five examples show.

1. Lock them up and throw away the key

COMPAS is an algorithm widely used in the US to guide sentencing by predicting the likelihood of a criminal reoffending. In perhaps the most notorious case of AI prejudice, in May 2016 the US news organisation ProPublica reported that COMPAS is racially biased. According to the analysis, the system overestimates the risk of reoffending for black defendants and underestimates it for white defendants. Equivant, the company that developed the software, disputes the finding.

It is hard to discern the truth, or where any bias might come from, because the algorithm is proprietary and so not open to scrutiny. But in any case, if a study published in January this year is anything to go by, the algorithm is no more accurate at predicting who is likely to reoffend than untrained people recruited online.

2. The criminal minority

Already in use in several US states, PredPol is an algorithm designed to predict when and where crimes will take place, with the aim of helping to reduce human bias in policing. But in 2016, the Human Rights Data Analysis Group found that the software could lead police to unfairly target certain neighbourhoods. When researchers applied a simulation of PredPol's algorithm to drug offences in Oakland, California, it repeatedly sent officers to neighbourhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas.

In response, PredPol's CEO pointed out that drug-crime data does not meet the company's objectivity threshold, and so, to avoid bias, the software is not used to predict drug crime in the real world. Even so, last year Suresh Venkatasubramanian of the University of Utah and his colleagues demonstrated that because the software learns from arrest rates rather than crime rates, PredPol can create a "feedback loop" that exacerbates racial biases.
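To see how such a loop can arise, consider the toy simulation below. It is only a sketch of the mechanism the researchers describe, not PredPol's proprietary model, and every number in it is invented: two areas have identical true crime rates, but the predictor only ever sees arrest records, and arrests can only be made where patrols are sent.

    import random

    # A toy simulation, not PredPol's actual model: two neighbourhoods with
    # identical true crime rates, but the predictor is trained on arrest
    # records, and arrests only happen where patrols go.
    random.seed(0)
    TRUE_CRIME_RATE = {"A": 0.3, "B": 0.3}   # the areas are actually identical
    arrests = {"A": 5, "B": 1}               # a small historical skew in the data

    for _ in range(1000):
        # Send the patrol to whichever area the arrest data makes look worse.
        patrolled = max(arrests, key=arrests.get)
        # Crime happens in both areas, but only the patrolled one generates a
        # new arrest record - which feeds back into tomorrow's prediction.
        if random.random() < TRUE_CRIME_RATE[patrolled]:
            arrests[patrolled] += 1

    print(arrests)  # area A racks up hundreds of arrests, area B stays at 1

Because area A starts with a few more recorded arrests, it receives every patrol, generates every new arrest record and so looks ever more crime-ridden to the model, while area B's identical crime rate simply goes unrecorded.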

3. Here's looking at you, white man

Facial recognition software is increasingly being used in law enforcement - and is another potential source of both race and gender bias. In February this year, Joy Buolamwini at the Massachusetts Institute of Technology found that three of the latest gender-recognition AIs, from IBM, Microsoft and Chinese company Megvii, could correctly identify a person's gender from a photograph 99 per cent of the time - but only for white men. For dark-skinned women, the error rate rose to as much as 35 per cent.

That increases the risk of false identification of women and minorities. Again, the problem probably lies in the data on which the algorithms are trained: if the training set contains far more images of white men than of black women, the software will become better at identifying white men. IBM quickly announced that it had retrained its system on a new data set, and Microsoft said it had taken steps to improve accuracy.
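The following sketch illustrates the effect. It is not any vendor's system: each "photo" is reduced to a single invented feature, and the classifier's sense of how common each group is comes straight from the skewed training frequencies.

    import numpy as np

    # Illustrative sketch only: a toy classifier trained on a heavily skewed
    # dataset. Group sizes and feature values are invented for the example.
    rng = np.random.default_rng(42)

    # 1000 training examples for group A, only 40 for group B.
    train_a = rng.normal(0.0, 1.0, 1000)
    train_b = rng.normal(1.5, 1.0, 40)

    # A naive Bayes-style classifier whose class prior is learned from the
    # training frequencies - so it starts out assuming almost everyone is in
    # group A.
    prior_a = len(train_a) / (len(train_a) + len(train_b))
    prior_b = 1 - prior_a
    mu_a, mu_b = train_a.mean(), train_b.mean()

    def predict_is_a(x):
        # Gaussian likelihood (unit variance) weighted by the learned prior.
        score_a = np.log(prior_a) - 0.5 * (x - mu_a) ** 2
        score_b = np.log(prior_b) - 0.5 * (x - mu_b) ** 2
        return score_a > score_b

    test_a = rng.normal(0.0, 1.0, 10000)
    test_b = rng.normal(1.5, 1.0, 10000)
    print("accuracy on group A:", predict_is_a(test_a).mean())
    print("accuracy on group B:", (~predict_is_a(test_b)).mean())

With roughly 96 per cent of the training examples coming from group A, the classifier learns to call almost everything group A: its accuracy on group A is near-perfect, while most members of group B are misclassified - the same pattern of a high headline accuracy masking poor performance on the under-represented group.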

4. Chief executive sought. Only men need apply

A 2015 study showed that in a Google Images search for "CEO", just 11 per cent of the people displayed were women, even though 27 per cent of chief executives in the US at the time were women. A few months later, a separate study led by Anupam Datta at Carnegie Mellon University in Pittsburgh found that Google's online advertising system showed adverts for high-income jobs to men much more often than to women.

Google pointed out that advertisers can specify that their ads be shown only to certain users or on certain websites, and the company does allow clients to target adverts on the basis of gender. But Datta and his colleagues also floated the idea that Google's algorithm could have determined on its own that men are more suited to executive positions, having learned from the behaviour of its users: if the only people seeing and clicking on adverts for high-paying jobs are men, the algorithm will learn to show those adverts only to men.
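A minimal sketch of that self-reinforcing loop - not Google's actual advertising system, and with all numbers invented - might look like this: men and women are equally interested in the job, but the seed data happens to contain clicks only from men, and the advert is served to whichever group has the higher estimated click-through rate.

    import random

    # A sketch of the loop Datta's team suggest, not Google's real system.
    random.seed(7)
    TRUE_INTEREST = 0.05                    # genuine interest, same for both groups
    impressions = {"men": 20, "women": 20}
    clicks = {"men": 2, "women": 0}         # early clicks happened to come from men

    for _ in range(10000):
        # Greedy rule: serve the executive-job advert to whichever group has
        # the higher estimated click-through rate so far.
        ctr = {g: clicks[g] / impressions[g] for g in impressions}
        group = max(ctr, key=ctr.get)
        impressions[group] += 1
        if random.random() < TRUE_INTEREST:
            clicks[group] += 1

    # Women are never shown the advert again, so their estimated interest is
    # stuck at zero and the algorithm never discovers its mistake.
    print(impressions)

Because women never see the advert, they can never click on it, so their estimated interest stays at zero and the imbalance never corrects itself - the kind of behaviour-driven bias Datta's team suggest could arise.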

5. Facebook falsely feeds the intifada

Sometimes artificial intelligence errors feed into and heighten human bias. In October 2017, police in Israel arrested a Palestinian worker who had posted a picture of himself on Facebook, posing by a bulldozer beneath a caption that appeared in Hebrew as "attack them". Only he hadn't written that: the Arabic for "good morning" and "attack them" are very similar, and Facebook's automatic translation software chose the wrong one. The man was questioned for several hours before someone spotted the mistake. Facebook was quick to apologise.