Bias runs deep in the psyche but has now been found deep in the algorithm. What hope is there for a reduction in inequalities – whether by age, gender, ethnicity, ability or socio-economics?
Discrimination is hitting the headlines. Over the last year, #MeToo and #TimesUp have exposed sexual harassment and the reality of sex discrimination. The extent of the gender pay gap has been revealed, and the government has announced that, following on from its Race Disparity Audit, it will be looking into ethnicity and pay. But this week I heard that Amazon had discovered its recruitment algorithm creating new inequalities by weeding out candidates on the basis of being female.
If the ghost is deep in the machine, how, and where else, does the finger of discrimination and ‘bias’ fall – whether explicit, implicit or unintentional? And how can progressive public policies aimed at reducing such discrimination have real effect against the power of AI? Welcome to the world of bias in the algorithm!
Ms or Mr?
I bumped into the question of bias in the algorithm when I found last week that WhatsApp was auto-correcting Ms to Mr in my messaging. Was this the result of me typing Mr more times than Ms, or a reflection of something deeper?
Pondering the implications of this potentially innocent quirk of my smartphone, I wondered if the problem could be more widespread. The answer to my question came up in the news only a few days later.
The gender-biased algorithm
Recruitment Grapevine reported this week (see here) that Amazon has abandoned a high-tech recruiting programme which had started to ignore the CVs of female candidates. The problem? The programme was using CVs submitted over the past 10 years as a baseline for success.
The tech industry is heavily male-dominated, so the successful CVs of the last 10 years were predominantly those submitted by men. The algorithm took in this information and learnt to penalise CVs containing words like ‘women’s’ – for example, references to women’s colleges or membership of women’s sporting teams. Such phrases counted against a candidate, and their applications were downgraded.
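The mechanism can be sketched in a few lines of Python. This is a hypothetical toy, not Amazon’s actual system: a naive scoring model learns a per-word weight from past hiring outcomes, and because CVs containing ‘women’s’ were rarely among past hires, that single word drags an otherwise identical candidate’s score down.

```python
import math
from collections import Counter

# Invented historical data: (CV keywords, hired?) pairs. In a
# male-dominated industry, CVs mentioning "women's" were rarely
# hired, so the pattern sits in the data despite identical skills.
history = [
    (["python", "leadership", "women's", "chess"], False),
    (["python", "leadership", "chess"], True),
    (["java", "teamwork", "women's", "debate"], False),
    (["java", "teamwork", "debate"], True),
    (["python", "teamwork"], True),
    (["java", "leadership", "women's"], False),
]

def token_weights(history, smoothing=1.0):
    """Learn a log-odds weight per token from past hiring outcomes."""
    hired, rejected = Counter(), Counter()
    for tokens, outcome in history:
        (hired if outcome else rejected).update(set(tokens))
    vocab = set(hired) | set(rejected)
    return {
        t: math.log((hired[t] + smoothing) / (rejected[t] + smoothing))
        for t in vocab
    }

def score(cv_tokens, weights):
    """Sum the learnt weights for the tokens in a CV."""
    return sum(weights.get(t, 0.0) for t in cv_tokens)

weights = token_weights(history)

# Two candidates with identical skills; one mentions a women's chess club.
a = ["python", "leadership", "chess"]
b = ["python", "leadership", "chess", "women's"]
print(score(a, weights) > score(b, weights))  # True – the "women's" token alone lowers the score
```

No one wrote a rule against women here; the negative weight on ‘women’s’ falls straight out of the historical outcomes the model was given.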
The real news here is in what the algorithm is doing – and well done to Amazon’s IT folks for working it out. The effect has been observed before. Last year, a study by Anglia Ruskin University (here) found significant age-related inequalities in recruitment. Among CVs with identical qualifications and experience, candidates in their late 20s were over four times more likely than those over 50 to be offered an interview, and 50-year-old women applying for a factory job were 25 times less likely to be called to interview than younger women. Given the widespread use of computer-based applicant tracking systems, the assumption must be that the algorithm was at work here too. (More here.)
The potential for bias
Algorithms are used extensively today, and recruitment is only one example. On social media platforms, in marketing, product development and forecasting, algorithms ingest data and act on the patterns they find.
Amazon’s IT people did not write a programme to discriminate against women, and the company stresses that programme recommendations are not the sole basis for hiring decisions. But they had created an algorithm which learnt from previous examples of success and used those to project effective outcomes.
Such an apparently sensible strategy, fraught with unintended consequences, has wide-reaching implications. Where historical patterns have been subject to inequality, bias or discrimination, the approach recycles and embeds that bias – perhaps unknowingly.
Man versus machine
The recent history of British society has seen people experience disadvantage on the basis of gender, sexual orientation, ethnicity, age, socio-economics, ability and health. Disadvantage continues today and progressive policy-makers are keen to address inequalities. But if our algorithms use historical patterns as the basis for projecting the future, policy and computer will be at odds with each other. Which will win?
If we are not aware of this risk, and do not compensate for it, initiatives to combat discrimination and bias may well fail. The computer is here to stay, and the emergence of AI pushes computerisation further into social and economic interaction. It is vital that we acquaint ourselves with the risks and learn how to control for unintended outcomes.
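One way to control for such outcomes is a counterfactual audit: score the same CV with and without a suspect token and measure the gap. The sketch below is illustrative only – the `score` function is a stand-in toy model with a baked-in penalty, not any real vendor’s system, which would have to be probed through its own interface.

```python
def score(tokens):
    # Toy stand-in for an opaque CV-ranking model, with an
    # invented penalty on the token "women's" for illustration.
    weights = {"python": 1.0, "leadership": 0.8, "women's": -1.5}
    return sum(weights.get(t, 0.0) for t in tokens)

def counterfactual_gap(base_cv, proxy_token, scorer):
    """Score a CV with and without a proxy token; a large gap flags bias."""
    return scorer(base_cv) - scorer(base_cv + [proxy_token])

gap = counterfactual_gap(["python", "leadership"], "women's", score)
print(f"penalty attributable to 'women's': {gap:.2f}")  # 1.50
```

An audit like this treats the model as a black box, which is exactly the position most employers and regulators are in.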
Algorithms base decisions on past actions. In a society where their use is widespread, how will we deliver change?