
Twitter’s content moderation on trial in Paris

PARIS — A French court on Thursday will hear a case aimed at shedding light on Twitter’s best-kept secret: Just how much the social network invests in the fight against illegal content.

The social media platform, pitted against a group of four NGOs including SOS Racism, SOS Homophobia and the Union of French Jewish students, will argue before the Paris court of appeal that it should not have to disclose detailed information about its internal processes.

The case touches upon a core issue that has long haunted policymakers and researchers when it comes to platform regulation: The actual means — human and financial — allocated to the moderation of illegal and harmful content. So far, companies such as Twitter, Facebook and Google’s YouTube have been reluctant to make detailed and specific information public about the numbers of content moderators by country and/or language.

According to the French NGOs, Twitter is not doing enough against hate speech online. In July, a court ordered the company to share very specific information about how it polices content — a first in Europe.

The social media platform was required to provide “any administrative, contractual, technical or commercial document relating to material and human resources” to fight hate speech, homophobia, incitement to violence and apology for crimes against humanity among other forms of content, according to the court ruling, but decided to appeal the decision. 

The Digital Services Act — the EU’s content moderation rulebook currently under discussion in Brussels — also seeks to increase transparency around moderation practices.

The European Parliament would like platforms to report on “the complete number of content moderators allocated for each official language per member state,” according to a recent text obtained by POLITICO. EU countries want so-called very large platforms, with more than 45 million users in the bloc, to “detail the human resources dedicated … to content moderation.”

It is not yet clear whether the final text negotiated between the two institutions would actually force Twitter, which might not qualify as a “very large platform,” to provide the exact numbers of moderators.

Poster child

In Paris and Brussels, lawmakers have long complained about the lack of transparency surrounding the means deployed by online platforms to moderate content.

“Moderation: the opacity on the number of moderators and their training cannot continue,” tweeted Laetitia Avia, an MP from Emmanuel Macron’s La République en Marche party, when lawmakers were assessing national rules on platforms.

The Twitter case is not the only one targeting tech companies’ processes for fighting illegal and harmful material: In March this year, Reporters Without Borders filed a complaint against Facebook, arguing that the perceived lack of content moderation amounts to “deceptive business practices.”

But, in France, Twitter has become somewhat of a poster child for hate speech online.

In November, seven people were convicted after sending anti-Semitic tweets about a Miss France contestant, with the civil party’s lawyer slamming “Twitter’s carelessness.”

According to Samuel Lejoyeux, president of the Union of French Jewish students, an experiment carried out by the four NGOs in 2020 — which led to the launch of the court case — shows that Twitter is the “black sheep” among online platforms.

“I’m not saying that the situation is perfect at Facebook and YouTube, but there is an effort made, there is a will to moderate,” he said. “At Twitter, there is a will to let the culture of clash, the culture of hate and insults [proliferate]; it’s the foundation of the business model.”

Twitter declined to comment for this story. 

Hate speech testing

The case heard Thursday began during the first coronavirus lockdown, in the spring of 2020.

The four French NGOs decided in May last year to take legal action against Twitter, arguing the U.S. company was not doing enough to remove hate speech online. 

They said they found that the microblogging platform removed only 11.4 percent of the hateful, “obviously unlawful” tweets they flagged in an experiment conducted from March 17 to May 5, 2020. In comparison, the organizations found that Facebook removed 67.9 percent of flagged content.

Under the e-commerce directive, Twitter is required to remove flagged illegal content “expeditiously.”

In July this year, after failed attempts to mediate the case out of court, the court ordered Twitter to share documentation with the non-governmental groups on how it moderates content.

Twitter, which usually does not provide information about content moderators, was required to disclose the number, location, nationality and language of the people in charge of processing French content flagged on the platform, as well as the number of posts reported for apology for crimes against humanity and incitement to racial hatred; how many of them were removed; and how much information was transmitted to the authorities.

The U.S. company has so far not complied, deciding to appeal instead.

Clothilde Goujard contributed reporting. 


Laura Kayali
December 9, 2021
