(Reuters) – In addition to labeling misleading claims made by U.S. President Donald Trump this week, Twitter has applied fact-check tags to thousands of other tweets since introducing the warnings earlier this month, mostly on coronavirus content.
The company is not adding dedicated staff for the effort, Twitter spokeswoman Liz Kelley said Saturday. Nor is it partnering with independent fact-checking organizations, as Facebook and Google do, to outsource the debunking of viral posts flagged by users.
Social media platforms have come under scrutiny over how rapidly false information and other forms of abusive content spread on their networks since Russia used them to interfere in the 2016 U.S. presidential election.
Fact-checking groups said they welcomed Twitter's new approach, which adds a "get the facts" label linking to more information, but said they hoped the company would set out its methodology and reasoning more clearly.
CEO Jack Dorsey acknowledged the criticism on Friday, saying he agreed that fact-checking should be "open source and thus verifiable by everyone." In a separate tweet, Dorsey said greater transparency from the company was "critical."
The company's move to label Trump's claims about mail-in ballots sets it apart from larger rivals such as Facebook, which stakes out a position of neutrality by leaving factual determinations to third-party partners and exempting politicians' posts from review.
"Fact-checking is to some extent subjective. It's subjective in what you choose to check, and it's subjective in how you rate something," said Aaron Sharockman, executive director of the U.S. fact-checking website PolitiFact, who said Twitter's process was opaque.
Twitter signaled in May that its new policy of adding fact-check labels to disputed or misleading coronavirus information would be extended to other topics. It said this week – after labeling Trump's tweets – that it was now flagging misleading content about election integrity.
Kelley said the Twitter team would continue to expand the effort to other topics.
A Twitter spokesman said the company's Trust and Safety division handles the "legwork" on such labels, but he declined to give the size of the team. This week, Twitter defended one of those employees after he was accused of political bias by Trump and his supporters over tweets from 2017.
Twitter also drew Trump's anger this week when it placed a warning over his tweet about protests in Minnesota over the police killing of a black man, saying it "glorified violence" – an application of a 2019 policy that critics of the site had long anticipated.
In the tweet, Trump warned the mostly African-American protesters that "when the looting starts, the shooting starts," a phrase used during the civil rights era to justify police violence against protesters.
Facebook did not act on the same post.
The Twitter spokesman said decisions on the labels are made by a team of executives, including Sean Edgett, Twitter's general counsel, and Del Harvey, vice president of Trust and Safety. CEO Jack Dorsey is briefed before action is taken.
The company's curation team gathers tweets about disputed claims and writes a summary for a landing page. The team, which includes former journalists, usually assembles content into categories including Trending, News, Entertainment, Sports and Fun.
Twitter, whose executives once called it "the free speech wing of the free speech party," has been tightening its content policies for several years after acknowledging that abuse was rampant.
Shortly after the 2016 U.S. election, Dorsey met privately with academics and senior journalists in what former New York Times editor Bill Keller, who attended one meeting, described as an attempt to get to grips with fake news and abuse.
Critics say the company has moved slowly since then, but it has accelerated its efforts over the past year.
In March, it debuted its "manipulated media" label on a video of Joe Biden, the presumptive Democratic candidate to face Trump in the November 3 election, posted by the White House director of social media.
Twitter's content moderation operation is small compared to its peers, with about 1,500 people. Facebook has about 35,000 people working on "safety and security," including 15,000 moderators, most of whom are contractors, though it also dwarfs Twitter in size: 2.4 billion daily users compared with Twitter's 166 million.
Facebook, which distanced itself from Twitter's actions this week, is also assembling an independent oversight board to rule on a small number of controversial content decisions.
Twitter said it took action against 1,254,226 accounts for violating its content rules from January to June last year. Twitter does work with outside organizations on content issues, but fact-checking groups, some of which are paid by Facebook, told Reuters they want more dialogue with Twitter about its new steps.
Baybars Örsek, executive director of the International Fact-Checking Network at the Poynter Institute, said the organization had reached out to Twitter to recommend transparency features for its fact-checking, such as the use of timestamps.
Vinny Green, vice president of operations at the fact-checking organization Snopes, said he had been approaching Twitter about partnerships since 2017 but received lukewarm responses.
Since 2016, Facebook has operated a fact-checking program with dozens of external partners, including a Reuters unit.
YouTube, the video service of Alphabet Inc, began showing American viewers information from fact-checkers such as FactCheck.org and PolitiFact in April, though it declined to share a full list of its partners.
Reporting by Elizabeth Culliford in Birmingham, England, and Katie Paul in San Francisco; Editing by Greg Mitchell and Sandra Maler