Amazon offers cautionary tale of AI-assisted hiring
Companies that go from “good to great”, management thinker Jim Collins wrote in his book of the same name, share some basic characteristics. One is that when they assemble teams, they ask “first who . . . then what?” and ensure they get “the right people on the bus”.
This step should even come before deciding the destination, he wrote: “If we get the right people on the bus, the right people in the right seats and the wrong people off the bus, then we’ll figure out how to take it someplace great.”
In the 17 years since Good to Great was published, chief executives, team leaders and project managers have wondered how to manage this deceptively simple-sounding task.
Every time I consider the question, I feel sympathy for the recruiters.
The real-world obstacles to boarding the right passengers are many. It is never as easy as Collins suggests to eject “the wrong people”.
Some insist on clinging to a seat they may have been assigned by the previous driver. Others have an inflated assessment of their own right to a place.
Worse, a few self-deprecating experts may opt to step off, even though you desperately need their contribution.
Also, teams are almost never static: buses are constantly stopping and starting to take on or let off passengers. More important, en route from A to B, people change.
In recent years, the spread of artificial intelligence — or, more precisely, the spread of AI hype — has led many to assume that a golden age of computer-assisted hiring is at hand, in which machines will solve the “who?” question.
Amazon, one of the most innovative and data-rich companies in the world, leapt on that possibility as early as 2014.
It built a recruiting engine that analysed applications submitted to the group over the preceding decade and identified patterns. The idea was that it would then spot candidates in the job market who would be worth recruiting.
“Everyone wanted this holy grail,” one person familiar with the initiative told Reuters, which broke the story in October.
Unfortunately, the data were dominated by applications from men, and the AI taught itself to prefer male candidates, discriminating against CVs that referred to “women’s” clubs, and setting aside graduates from certain all-women’s colleges.
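The mechanism is easy to reproduce in miniature. The toy sketch below (an invented illustration, not Amazon's actual data or model) trains a naive word-scoring model on hiring outcomes from a male-dominated applicant pool; because the word "women's" appears only in rejected CVs, the model learns to penalise it, exactly the kind of self-taught bias described above.

```python
from collections import Counter
from math import log

# Hypothetical CVs reduced to keyword sets, each paired with a past
# hiring outcome (1 = hired, 0 = rejected). The pool is skewed: CVs
# mentioning "women's" clubs happen never to have led to a hire.
cvs = [
    ({"chess", "captain"}, 1),
    ({"football", "captain"}, 1),
    ({"robotics", "club"}, 1),
    ({"chess", "club"}, 1),
    ({"women's", "chess", "captain"}, 0),
    ({"women's", "robotics", "club"}, 0),
]

def word_scores(data, smoothing=1.0):
    """Naive log-odds score per word: positive values push a CV
    towards 'hire', negative values towards 'reject'."""
    hired, rejected = Counter(), Counter()
    for words, label in data:
        (hired if label else rejected).update(words)
    vocab = set(hired) | set(rejected)
    return {
        w: log((hired[w] + smoothing) / (rejected[w] + smoothing))
        for w in vocab
    }

scores = word_scores(cvs)
print(sorted(scores.items(), key=lambda kv: kv[1]))
```

Run on this data, "women's" receives a negative score and "captain" a positive one: the model has encoded the historical skew of the applicant pool, not the merit of the candidates.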
The initiative was downgraded and the research team disbanded. Amazon has claimed it never used the programme to evaluate applicants.
For Vivienne Ming, a neuroscientist and entrepreneur, whom Amazon once tried to hire as its chief scientist for people, the group’s unsuccessful quest for the grail is an important cautionary tale.
Companies should be “incredibly careful” about introducing over-ambitious AI recruitment tools, she told me at a recent FT Live innovation conference.
“Because it is very, very possible that no one, including the people that built it, actually knows what it’s doing. And if it does have bias in its hiring — and remember one of the most sophisticated and innovative companies in the world wasn’t able to fix this problem — then you are now liable for something even though you didn’t intend for it.”
Her contention is that innovation often fails because people such as Amazon’s researchers do not understand the problems they work on.
“If you’ve got this one thing that you think will solve your problem, the one thing I can guarantee you is that you haven’t got anything,” she says.
In fact, the task of getting the right people on the bus has become harder since Jim Collins first framed it in 2001, as mounting research has underlined that diverse teams are better at innovation.
For good reasons of equity and fairness, the quest for greater balance in business has focused on gender, race and background. But these are merely proxies for a more useful measure of difference that is much harder to assess, let alone hire for: cognitive diversity.
Might this knotty problem be solved with the help of AI and machine learning? Ms Ming is sceptical.
As she points out, most problems with technology are not technology problems, but human problems. Since humans inevitably inherit cultural biases, it is impossible to build an “unbiased AI” for hiring.
“You simply have to recognise that the biases exist and put in the effort to do more than those default systems point you towards,” she says.
What Amazon’s experience suggests is that instead of sending bots to crawl over candidates’ past achievements, companies should be exploring ways in which computers can help them to assess and develop the long-term potential of the people they invite to board the bus.
Recruiters should ask, in Ms Ming’s words, “Who will [these prospective candidates] be three years from now when they’re at their peak productivity inside the company? And that might be a very different story than who will deliver peak productivity the moment they walk in the door.”
THE FINANCIAL TIMES
ABOUT THE AUTHOR:
Andrew Hill is the Financial Times’ management editor.