How technology companies are ignoring the pandemic’s mental health crisis

Scientists do not yet fully understand the long-term effects of COVID-19 on society. But a year in, at least one thing seems clear: the pandemic has been terrible for our collective mental health – and a surprising number of technology platforms seem to have given the issue very little thought.

First, the numbers. The Guardian reported that the number of adults in the UK showing symptoms of depression nearly doubled from March to June of last year, to 19 percent. In the United States, 11 percent of adults reported symptoms of depression between January and June 2019; by December 2020, that number had nearly quadrupled, to 42 percent.

Prolonged isolation created by lockdowns has been linked to disrupted sleep, increased drug and alcohol use, and weight gain. Preliminary data on suicides in 2020 are mixed, but drug overdoses soared, and experts say many were likely intentional. Even before the pandemic, Glenn Kessler reported at The Washington Post, “suicide rates have risen annually in the United States since 1999, with a 35 percent increase over two decades.”

Issues related to suicide and self-harm touch nearly every digital platform in some way. The internet is increasingly where people search for, discuss, and seek support on mental health issues. But according to new research from the Stanford Internet Observatory, in many cases platforms have no policies at all related to discussion of self-harm or suicide.

In “Self-Harm Policies and Internet Platforms,” the authors surveyed 39 online platforms to understand their approach to these issues. They analyzed search engines, social networks, performance-oriented platforms like TikTok, gaming platforms, dating apps, and messaging apps. Some platforms have developed robust policies to cover the nuances of these issues. Many others, however, have ignored them altogether.

“There are massive differences in the comprehensiveness of public-facing policies,” write Shelby Perkins, Elena Cryst, and Shelby Grossman. “Facebook’s policies, for example, address not only suicide but also euthanasia, suicide notes, and livestreamed suicide attempts. Instagram and Reddit, in contrast, have no suicide-related policies in their primary policy documents.”

Among the platforms surveyed, Facebook was found to have the most comprehensive policies. But the researchers faulted the company for the unclear policies at its subsidiary, Instagram; technically, the parent company’s policies all apply to both platforms, but Instagram maintains a separate set of policies that do not explicitly mention posting about suicide, creating confusion.

Still, Facebook is miles ahead of some of its peers. Reddit, Parler, and Gab were found to have no public policies related to posts about self-harm, eating disorders, or suicide. That does not necessarily mean the companies have no policies at all. But if the policies are not posted publicly, we may never know.

By contrast, the researchers said, what they call “creator platforms” – YouTube, TikTok, and Twitch – have developed smart policies that go beyond simple promises to remove disturbing content. These platforms offer meaningful support in their policies, both for people who are recovering from mental health issues and for those who may be contemplating self-harm, the authors said.

“Both YouTube and TikTok are explicit that creators can share their stories about self-harm to raise awareness and find community support,” they wrote. “We were impressed that YouTube’s community guidelines on suicide and self-injury provide resources, including hotlines and websites, for those having thoughts of suicide or self-harm, for 27 countries.”

Outside of the biggest platforms, though, it’s a toss-up. The researchers could not find public policies on suicide or self-harm for NextDoor or Clubhouse. Dating apps? Grindr and Tinder have policies on self-harm; Scruff and Hinge do not. Most messaging apps lack such public policies as well – iMessage, Signal, and WhatsApp among them. (The fact that they all use some form of encryption likely has a lot to do with that.)

Why does all of this matter? In an interview, the researchers told me there are at least three big reasons. One is essentially a question of fairness: if people are going to be punished for the ways they discuss self-harm online, they ought to know that in advance. Two is that policies give platforms a chance to intervene when their users are considering hurting themselves. (Many already provide users with links to resources that can help them in a moment of crisis.) And three is that we cannot develop more effective policies for addressing mental health issues online if we do not know what the current policies are.

And moderating these kinds of posts can be quite difficult, the researchers said. There is often a fine line between posts that discuss self-harm and posts that appear to encourage it.

“The same content that could show that someone is recovering from an eating disorder is something that can also trigger other people,” Grossman told me. “The same content can affect users in two different ways.”

But you cannot moderate at all if you do not even have a policy, and reading this research, I was surprised at just how many companies do not.

It has been something of a policy week here at Platformer. We talked about how Clarence Thomas wants to blow up platform policies as they exist today; how YouTube is changing (and disclosing) the way it measures harm on the platform; and how Twitch developed a policy for creators’ behavior on other platforms.

What strikes me about all of this is how new it still feels. We have been in the platform era for more than a decade, but there are still so many big questions to figure out. And even on the most serious of subjects – how to address content related to self-harm – some platforms have not yet even entered the discussion.

The Stanford researchers told me they believe they are the first people to even attempt to catalog self-harm policies among the major platforms and make them public. There are doubtless many other areas where a similar inventory would serve the public interest. Private companies still hide too much, even and especially when they are directly implicated in questions of public interest.

Going forward, I hope these companies talk to one another more – to learn from each other and adopt policies that make sense for their own platforms. And thanks to the Stanford researchers, at least on one subject, they can now find all the existing policies in one place.

This column was co-published with Platformer, a daily newsletter about Big Tech and democracy.
