How Myers-Briggs and AI Are Abused

Suppose you are a job seeker with a good idea of what employers want to hear. Like many businesses today, your potential new workplace gives you a personality test as part of the hiring process. You plan to give answers showing that you are enthusiastic, a hard worker and a people person.

Then they put you on camera while you take the test orally. You frown slightly during one of your answers, and their facial-analysis software decides that you are "difficult."

Sorry, next please!

This is just one of the many problems with the increasing use of artificial intelligence in hiring, claims the new documentary “Persona: The Dark Truth Behind Personality Tests,” which premieres Thursday on HBO Max.

The film, directed by Tim Travers Hawkins, begins with the origins of the Myers-Briggs Type Indicator personality test. Created in the mid-20th century by a mother-and-daughter team, it sorts people along four axes: introversion/extraversion, sensing/intuition, thinking/feeling and judging/perceiving. The quiz, which has an astrology-like cult following for its 16 four-letter "types," has evolved into a hiring tool used throughout the American business world, along with successors such as the "Big Five," which measures five key personality traits: openness, conscientiousness, extraversion, agreeableness and neuroticism.

"Persona" argues that the written test contains certain built-in biases — for example, the potential to discriminate against those unfamiliar with the kind of language or scenarios the test uses.

And according to the film, the addition of artificial intelligence makes things even more problematic.

The technology scans written applications for red-flag words and, in on-camera interviews, watches applicants' facial expressions for signs that may contradict their answers.

Four generations of Briggs Myers women.
HBO Max

"[It] works according to 19th-century pseudoscientific reasoning that emotions and character can be standardized from facial expressions," Ifeoma Ajunwa, associate professor of law and director of the AI decision-making research program at the University of North Carolina School of Law, told The Post by email.

Ajunwa, who appears in the film, says the potential for bias is great. "Since the automated systems are usually trained on white male faces and voices, the facial expressions or vocal tones of women and racial minorities can be misjudged. In addition, there is the concern about privacy that arises from the collection of biometric data."

One major recruitment company, HireVue, would analyze applicants' "facial movements, word choice and voice" before ranking them against other applicants based on an automatically generated "employability" score, The Washington Post reported. The company has since stopped the practice, which it announced just last month.

Although the company claimed that "visual analysis no longer has significant added value to the assessment," the move followed an outcry over possible harmful effects.

Cathy O'Neil is a data science consultant, author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy," and one of the experts interviewed in "Persona." Her company, O'Neil Risk Consulting & Algorithmic Auditing (ORCAA), conducted an audit of HireVue's practices after the announcement.

"No technology is inherently harmful; it's just a tool," she told The Post in an email. "But just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities. . . This is especially true because people often assume that technology is objective and even perfect. Blind trust in something that is complicated and deeply opaque is always a mistake."

A typical question from the Myers-Briggs personality test.
HBO Max

There has been a spate of legislative action over the past few years around the use of facial-analysis algorithms. But New York City is the first to introduce a bill that would specifically regulate their use in the hiring process. It would require companies to disclose to applicants that they use the technology, and to conduct an annual audit for bias.

Just as a sharp knife can be used to cut bread or kill a man, facial recognition can be used to harm individuals or communities.

Data science consultant Cathy O’Neil

But Ajunwa thinks it does not go far enough. "It is an essential first step in preserving the civil liberties of workers," she said. But "what we need are federal regulations tied to federal anti-discrimination laws that apply to all states, not just New York City."

For those who knew Isabel Briggs Myers, using the test hand in hand with AI to ruthlessly determine whether people are "hireable" is far from her original intention, which was to help users find their true callings.

As one of Briggs Myers' granddaughters says in the film, "I think there are things she would want to correct."
