Zoë Corbyn 

The AI tools that might stop you getting hired

One-way video interviews, CV screeners and digital monitoring are among the ways employers are using tech to save time and money on recruitment. But do they work?

Illustration: a giant robot throwing a man into a trash can.

Investigating the use of artificial intelligence (AI) in the world of work, Hilke Schellmann thought she had better try some of the tools. Among them was myInterview, a one-way video interview system intended to aid recruitment. She got a login from the company and began to experiment: first picking the questions she, as the hiring manager, would ask, then video-recording her answers as a candidate. The proprietary software analysed the words she used and the intonation of her voice to score how well she fitted the job.

She was pleased to score an 83% match for the role. But when she redid the interview in her native German rather than English – this time not even attempting to answer the questions, but reading out a Wikipedia entry instead – she was surprised to find that, rather than returning an error message, the tool still scored her decently (73%). The transcript it had concocted from her German was gibberish. When the company told her the tool had recognised she wasn’t speaking English and so had scored her primarily on her intonation, she had a robot voice generator read out her English answers. Again she scored well (79%), leaving Schellmann scratching her head.

“If simple tests can show these tools may not work, we really need to be thinking long and hard about whether we should be using them for hiring,” says Schellmann, an assistant professor of journalism at New York University and investigative reporter.

The experiment, conducted in 2021, is detailed in Schellmann’s new book, The Algorithm. It explores how AI and complex algorithms are increasingly being used to help hire employees and then subsequently monitor and evaluate them, including for firing and promotion. Schellmann, who has previously reported for the Guardian on the topic, not only experiments with the tools, but speaks to experts who have investigated them – and those on the receiving end.

The tools – which aim to cut the time and cost of filtering mountains of job applications and drive workplace efficiency – are enticing to employers. But Schellmann concludes they are doing more harm than good. Not only are many of the hiring tools based on troubling pseudoscience (for example, the idea that the intonation of our voice can predict how successful we will be in a job doesn’t stand up, says Schellmann), but they can also discriminate.

In the case of digital monitoring, Schellmann takes aim at the way productivity is scored on faulty metrics such as keystrokes and mouse movements, and at the toll such tracking can take on workers. More sophisticated AI-based surveillance techniques can also have low predictive value – for example, flight risk analysis, which considers signals such as the frequency of LinkedIn updates to estimate the chance of an employee quitting; sentiment analysis, which mines an employee’s communications to try to gauge their feelings (disgruntlement might point to someone needing a break); and CV analysis, which aims to ascertain a worker’s potential to acquire new skills.
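To see why keystroke-and-mouse metrics make such a crude proxy for productivity, consider a minimal illustrative sketch of how such an activity score might be computed. Everything here – the schema, the threshold, the field names – is hypothetical, not drawn from any real monitoring product:

```python
from dataclasses import dataclass

@dataclass
class ActivitySample:
    """One minute of tracked input (hypothetical schema)."""
    keystrokes: int
    mouse_moves: int

def productivity_score(samples: list[ActivitySample],
                       active_threshold: int = 10) -> float:
    """Fraction of minutes with 'enough' input events.

    A minute counts as productive if combined keystrokes and
    mouse movements exceed an arbitrary threshold - which is
    exactly the problem: thinking, reading or a phone call all
    register as idle, while jiggling the mouse scores full marks.
    """
    if not samples:
        return 0.0
    active = sum(1 for s in samples
                 if s.keystrokes + s.mouse_moves >= active_threshold)
    return active / len(samples)

# An hour spent reading a design document scores 0.0;
# an automated "mouse jiggler" running all hour scores 1.0.
reading = [ActivitySample(0, 2) for _ in range(60)]
jiggler = [ActivitySample(0, 50) for _ in range(60)]
print(productivity_score(reading), productivity_score(jiggler))
```

Any score built this way rewards visible busyness rather than useful work – the faulty incentive Schellmann describes.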

It is not, says Schellmann, that she is against the use of new approaches – the way humans do it can be riddled with bias, too – but we should not accept technology that doesn’t work and isn’t fair. “These are high-stakes environments,” she says.

It can be hard to get a handle on how employers are using the tools, admits Schellmann. Though existing survey data indicate widespread use, companies generally keep quiet about them and candidates and employees are often in the dark. Candidates commonly assume a human will watch their one-way video but, in fact, it may only be seen by AI.

And the use of the tools isn’t confined to hourly-wage jobs. It is also creeping into more knowledge-centric work, such as finance and nursing, she says.

Schellmann focuses on four classes of AI-based tools being deployed in hiring. In addition to one-way interviews, which can use not just tone of voice but equally unscientific facial expression analysis, she looks at online CV screeners, which might make recommendations based on keywords found in the CVs of current employees; game-based assessments, which look for trait and skill matches between a candidate and the company’s current employees based on how the candidate plays a video game; and tools that scour candidates’ social media posts to make personality predictions.

None are ready for prime time, says Schellmann. How game-based assessments check for skills relevant to the job is unclear, while in the case of social media scanning she shows that the software can discern very different sets of traits depending on which of a candidate’s feeds it analyses. CV screeners can embody bias: Schellmann cites one that was found to award more points to candidates who listed baseball as a hobby on their CV than to those who listed softball (the former is more likely to be played by men).
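To make the baseball/softball example concrete, here is a minimal sketch of how a keyword-weighted CV screener can encode that kind of bias. The keywords and weights are invented for illustration – Schellmann’s reporting does not say how the real tool computed its scores:

```python
# Hypothetical keyword weights, as might be learned from the CVs of
# current (mostly male) employees. "baseball" earns points simply
# because existing staff mention it; "softball" does not.
KEYWORD_WEIGHTS = {
    "python": 3.0,
    "sql": 2.0,
    "baseball": 1.5,   # a proxy for gender, not for job skill
    "softball": 0.0,
}

def score_cv(cv_text: str) -> float:
    """Sum the weights of known keywords appearing in the CV."""
    words = set(cv_text.lower().split())
    return sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in words)

# Two equally qualified candidates, differing only in hobby:
cv_a = "python sql experience, hobbies: baseball"
cv_b = "python sql experience, hobbies: softball"
print(score_cv(cv_a))  # 6.5 - ranked higher
print(score_cv(cv_b))  # 5.0 - ranked lower, for no job-relevant reason
```

The discrimination here is not written in by anyone; it falls out of training on whoever happens to already work at the company – which is why, as Schellmann notes below, even vendors may not know what their tools have learned.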

Many of the tools are essentially black boxes, says Schellmann. AI let loose on training data looks for patterns, which it then uses to make its predictions. But it isn’t necessarily clear what those patterns are and they can inadvertently bake in discrimination. Even the vendors may not know precisely how their tools are working, let alone the companies that are buying them or the candidates or employees who are subjected to them.

Schellmann tells of a black female software developer and military veteran who applied for 146 jobs in the tech industry before landing one. The developer doesn’t know why she struggled for so long, but she undertook one-way interviews and played AI video games, and she is sure she was subjected to CV screening. She wonders if the technology took exception to her because she wasn’t a typical applicant. The job she eventually found came through reaching out to a human recruiter.

Schellmann calls on HR departments to be more sceptical of the hiring and workplace monitoring software they are deploying – asking questions and testing products. She also wants regulation: ideally a government body to check the tools to ensure they work and don’t discriminate before they are allowed to hit the market. But even mandating that vendors release technical reports showing how they built and validated their tools, so that others can check them, would be a good first step. “These tools aren’t going away so we have to push back,” she says.

In the meantime, jobseekers do have ChatGPT at their disposal to help them write cover letters, polish CVs and formulate answers to potential interview questions. “It is AI against AI,” says Schellmann. “And it is shifting power away from employers a little bit.”

  • The Algorithm: How AI Can Hijack Your Career and Steal Your Future by Hilke Schellmann is published by C Hurst & Co (£22). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply
