Using AI to Screen Job Applicants is a Terrible Idea
By Kolby Harvey, Chief of Content
One of our core values as a company is to develop and deploy AI technologies responsibly, always questioning who will benefit from our products and how. We recognize that while these technologies have enormous potential to improve human lives, we also concede that they can and will be used irresponsibly or unethically, in ways that further alienate already-marginalized communities. At InfinitAI, we strive to deploy the tech we make ethically, which sometimes entails speaking up when we see others deploying technology in ways that are or could be damaging.
One such instance came across my Twitter feed last week: an American company, Hirevue, has begun implementing facial recognition technology intended to help employers screen job applicants using artificial intelligence.
According to an article published by The Telegraph, Hirevue’s algorithms evaluate applicants’ performance by comparing video interviews “against about 25,000 pieces of facial and linguistic information compiled from previous interviews of those who have gone on to prove to be good at the job.” Theoretically, these algorithms should allow companies using them to screen a greater number of candidates while also providing “a more reliable and objective indicator of future performance free of human bias” and avoiding “relying on CVs.”
If this sounds like a monumentally awful idea, that’s because it is. Regardless of what Hirevue claims, it’s common knowledge that facial recognition technology’s track record in identifying the faces of people of color, specifically dark-skinned women, is horrendous. The last few years have been replete with examples—HP’s motion-tracking webcam, Nintendo’s Mii Maker, Google’s image-labeling algorithms. It might be tempting to think that technology is improving in supposed leaps and bounds every day, that these missteps are being corrected, but consider this: Google’s “solution” to the aforementioned problem of its algorithms equating Black Americans to apes was to simply remove “gorilla” as a label from its systems. The bias that caused the issue in the first place is still there, festering somewhere in a labyrinth of code. It’s difficult to believe that Hirevue alone has discovered some secret method of eliminating racial bias in its facial recognition efforts when literally every other American tech company has struggled on this front. It’s equally difficult to imagine Hirevue’s product not flagging any face that isn’t a white man’s as less than desirable.
My concerns here go beyond the visual, too. The Telegraph interviewed Hirevue CTO Loren Larsen, who clarified that the vast majority (80–90%) of analyses conducted by his company’s algorithms focus on applicants’ “verbal skills,” not their faces, by looking at a set of “350-ish” features. These include whether a candidate uses active or passive voice, whether one tends to use “we” more than “I,” as well as word choice and sentence length. These habits, in and of themselves, mean nothing; they are merely arbitrary signifiers of a performance of professionalism and intelligence designed to appeal to employers. With a single Google search, anyone with internet access can pull up a list of buzzwords to punch up their resume or an article outlining the habits of “successful people.” Parroting these words and behaviors to potential employers says nothing about a candidate’s abilities, professionalism, or intelligence, save that they looked up a couple of listicles before their interview. More importantly, these linguistic “features” are essentially class markers, indicators that a candidate has enjoyed access to certain privileges in their formative years, specifically the vocabulary that comes with secondary education and having money. I wonder about the software’s ability to handle accents as well. Recall how much trouble Siri still has understanding people with accents, be they foreign or regional.
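To make concrete just how shallow these “features” are, here is a minimal sketch of the kind of surface-level counting the article describes: a “we” vs. “I” ratio, average sentence length, and a crude passive-voice heuristic. To be clear, Hirevue’s actual feature set is not public; everything below is an illustrative assumption, and the passive-voice regex in particular is deliberately naive (real passive detection requires a parser), which is rather the point.

```python
import re

def shallow_features(transcript: str) -> dict:
    """Illustrative only: crude surface features of the sort the article
    describes. This is NOT Hirevue's method, just a sketch of how little
    such counts can capture about a person."""
    lowered = transcript.lower()
    words = re.findall(r"[a-z']+", lowered)
    sentences = [s for s in re.split(r"[.!?]+", transcript) if s.strip()]

    first_person_singular = sum(w == "i" for w in words)
    first_person_plural = sum(w == "we" for w in words)

    # Naive passive-voice heuristic: a "be" verb followed by a word ending
    # in -ed/-en. It misses irregular participles and flags false positives.
    passive_hits = len(re.findall(
        r"\b(?:is|are|was|were|been|being|be)\s+\w+(?:ed|en)\b", lowered))

    return {
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "we_to_i_ratio": first_person_plural / max(first_person_singular, 1),
        "passive_constructions": passive_hits,
    }

print(shallow_features(
    "We delivered the project early. The launch was delayed by vendors."))
```

Note how easily such metrics are gamed: swapping “I” for “we” or recasting a sentence in active voice changes the scores without changing anything about the candidate.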
I’d also argue that the fixation on “active words”1 over “passive” ones reeks of sexism. While I freely admit that passive verb constructions can be cumbersome when overused, we must challenge dichotomies of strong vs. weak communication skills (and candidates), because the characteristics that end up being valorized invariably turn out to be stereotypically masculine (and white). When I talk about biases being passed on to our technology, this is a perfect example. Sure, in and of themselves, words like “active,” “strong,” or “direct” are harmless, but in the context of job applicant screening, an area where marginalized candidates are often passed over in favor of more privileged ones, they become something else: a manifestation of how we denigrate anything associated with femininity. I maintain that clarity in communication (a concept that is harder to pin to any binary gender) is more important than “strength” or “directness” or any other coded way of saying “dick-swinging.”
Really, the above objections are just the handful that flooded into my head while reading The Telegraph’s article. A deeper dive would surely uncover more. What’s perhaps most troubling is Hirevue’s cavalier attitude about objectivity: the presence of biases in their system doesn’t seem to have crossed their minds, at least not in any meaningful way. 25,000, the number of data points offered up by Hirevue as the basis for their software’s analysis, may seem like a large number, but is it truly? When we look at the complexity of human personality, communication, and bodily features, not to mention an individual’s personal history, can 25,000 bits of information adequately convey the richness of that individual’s experience and abilities? I’d argue that it cannot, and that it’s irresponsible at best to assert otherwise.
Kolby Harvey, PhD
Chief of Conversation
Kolby is a writer, designer, and artist living in Washington state. In 2018, he earned the University of Colorado’s first creative doctorate in Intermedia Art, Writing and Performance.
1. Note: Technically speaking, words are not passive or active. Certain verb constructions, however, are. This may be pedantic of me, but I’d expect someone overseeing the development of software that labels usage of this type of language as problematic or undesirable to employ the correct terminology, at the very least.