This little browser game shows how flawed AI emotion recognition is

Tech companies don't just want to identify you using face recognition; they also want to read your emotions with AI. For many scientists, however, claims about computers' ability to understand emotion are fundamentally flawed, and a little in-browser game built by researchers from the University of Cambridge aims to show why.

Head to emojify.info, and you can see how your emotions are "read" through your webcam. The game challenges you to produce six different emotions (happiness, sadness, fear, surprise, disgust, and anger), which the AI will attempt to identify. You'll probably notice, though, that the software's readings are far from accurate, often interpreting even exaggerated expressions as "neutral." And even if you produce a smile that convinces your computer you're happy, you'll know you were faking it.

That is the point of the website, says creator Alexa Hagerty, a researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk: to show that the basic premise underlying much emotion recognition technology, that facial movements are intrinsically linked to changes in feeling, is flawed.

"The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way," Hagerty tells The Verge. "When I smile, I'm happy. When I frown, I'm angry. But the APA did this big review of the evidence in 2019, and they found that people's emotional state cannot be readily inferred from their facial movements." In the game, says Hagerty, "you have a chance to move your face rapidly to impersonate six different emotions, but the point is you didn't inwardly feel six different things, one after the other."

A second mini-game on the site drives this point home by asking users to spot the difference between a wink and a blink, something machines cannot do. "You can close your eyes, and it could be an involuntary action or a meaningful gesture," Hagerty says.

Despite these problems, emotion recognition technology is rapidly catching on, with companies promising that such systems can be used to vet job candidates (giving them an "employability score"), spot would-be terrorists, or assess whether commercial drivers are sleepy or drowsy. (Amazon is even deploying similar technology in its own vans.)

Of course, human beings also make mistakes when we read emotions on people's faces, but handing this job over to machines comes with specific disadvantages. For one, machines can't read other social cues the way humans can (as with the wink/blink dichotomy). Machines also often make automated decisions that humans can't question, and they can conduct surveillance at mass scale without our awareness. Plus, as with facial recognition systems, emotion detection AI is often racially biased, more frequently judging the faces of Black people as showing negative emotions, for example. All these factors make AI emotion detection far more troubling than people's ability to read one another's feelings.

"The dangers are multiple," Hagerty says. "With human miscommunication, we have many options for correcting it. But once something is automated, or a reading is made without your knowledge or consent, those options are gone."
