Timnit Gebru’s exit from Google exposes a crisis in AI

This year has included many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators have speculated that the language-generation model GPT-3 may have achieved ‘artificial general intelligence’, while others have lauded the Alphabet subsidiary DeepMind’s protein-folding algorithm – AlphaFold – and its capacity to ‘transform biology’. Although the basis for such claims is thinner than the effusive headlines suggest, this has done little to dampen enthusiasm across the industry, whose profits and prestige depend on AI’s proliferation.

Against this backdrop, Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to do, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for datasets and AI models. Ironically, this work and her outspoken advocacy for those under-represented in AI research are also, she says, the reasons the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she had not resigned. (Google declined to comment on this story.)

Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is largely controlled and funded by major industry players – Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the politics of civility that held together the young effort to construct the necessary guardrails around AI has been ripped apart, bringing questions about the racial homogeneity of the AI workforce and the inefficacy of corporate diversity programs to the center of the discourse. But this situation has also made clear that – however sincere a company like Google’s promises may seem – corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.

This should concern us all. With AI spreading into domains such as healthcare, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, while at the same time being embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and deploy them, while obscuring responsibility (and accountability) behind the veneer of complex computation. The risks are profound and the incentives are decidedly perverse.

The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor – those who have borne the brunt of structural discrimination. There is a clear racial divide between those who benefit – the corporations and the predominantly white male researchers and developers – and those most likely to be harmed.

Take, for example, facial-recognition technologies, which have been shown to ‘recognize’ darker-skinned people less often than lighter-skinned people. That alone is alarming. But these ‘racist errors’ are not the only problems with facial recognition. Tawana Petty, organizing director at Data for Black Lives, points out that these systems are deployed disproportionately in predominantly Black neighborhoods and cities, while the cities that have succeeded in banning and pushing back against facial recognition’s use are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those harmed by these technologies, our ability to understand and contest the overhyped claims made by industry is significantly hampered. Google’s treatment of Gebru makes increasingly clear where the company’s priorities lie when critical work pushes back against its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.

Effective oversight is further hampered by the close ties between technology companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, and some researchers even hold concurrent positions at technology companies and universities. This blurs the line between academic and corporate research and obscures the incentives underpinning such work. It also means that the two groups look remarkably similar – AI research in academia suffers from the same problematic racial and gender homogeneity as its corporate counterpart. Moreover, top computer-science departments accept large amounts of research funding from Big Tech. One need only look to Big Tobacco and Big Oil for disturbing templates of how much influence over the public understanding of complex scientific issues large companies can exert when knowledge creation is left in their hands.