Scientists create online games to show risks of AI emotion recognition | Artificial intelligence (AI) | The Guardian

It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate.

Technology designed to identify human emotions using machine learning algorithms is a huge industry, with claims it could prove valuable in myriad situations, from road safety to market research. But critics say the technology not only raises privacy concerns, but is inaccurate and racially biased.

A team of researchers have created a website – – where the public can try out emotion recognition systems through their own computer cameras. One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context.

Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.

“It is a form of facial recognition, but it goes farther because rather than just identifying people, it claims to read our emotions, our inner feelings from our faces,” said Dr Alexa Hagerty, project lead and researcher at the University of Cambridge Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk.

Facial recognition technology, often used to identify people, has come under intense scrutiny in recent years. Last year the Equality and Human Rights Commission said its use for mass screening should be halted, saying it could increase police discrimination and harm freedom of expression.

But Hagerty said many people were not aware how common emotion recognition systems were, noting they were employed in situations ranging from job hiring and customer insight work to airport security, and even in education, to see if students are engaged or doing their homework.

Such technology, she said, was in use all over the world, from Europe to the US and China. Taigusys, a company that specialises in emotion recognition systems and whose main office is in Shenzhen, says it has used them in settings ranging from care homes to prisons, while according to reports earlier this year, the Indian city of Lucknow is planning to use the technology to spot distress in women as a result of harassment – a move that has met with criticism, including from digital rights organisations.

While Hagerty said emotion recognition technology might have some potential benefits, these must be weighed against concerns about accuracy and racial bias, as well as whether the technology was even the right tool for a particular job.

“We need to be having a much wider public conversation and deliberation about these technologies,” she said.

The new project allows users to try out emotion recognition technology. The site notes that “no personal data is collected and all images are stored on your device”. In one game, users are invited to pull a series of faces to fake emotions and see if the system is fooled.

“The claim of the people who are developing this technology is that it is reading emotion,” said Hagerty. But, she added, in reality the system was reading facial movement and then combining that with the assumption that those movements are linked to emotions – for example a smile means someone is happy.

“There is lots of really solid science that says that is too simple; it doesn’t work quite like that,” said Hagerty, adding that even just human experience showed it was possible to fake a smile. “That is what that game was: to show you didn’t change your inner state of feeling rapidly six times, you just changed the way you looked [on your] face,” she said.

Some emotion recognition researchers say they are aware of such limitations. But Hagerty said the hope was that the new project, which is funded by Nesta (National Endowment for Science, Technology and the Arts), will raise awareness of the technology and promote discussion around its use.

“I think we are beginning to realise we are not really ‘users’ of technology, we are citizens in a world being deeply shaped by technology, so we need to have the same kind of democratic, citizen-based input on these technologies as we have on other important things in societies,” she said.

Vidushi Marda, senior programme officer at the human rights organisation Article 19, said it was crucial to press “pause” on the growing market for emotion recognition systems.

“The use of emotion recognition technologies is deeply concerning as not only are these systems based on discriminatory and discredited science, their use is also fundamentally inconsistent with human rights,” she said. “An important learning from the trajectory of facial recognition systems across the world has been to question the validity and need for technologies early and often – and projects that emphasise the limitations and dangers of emotion recognition are an important step in that direction.”