Russia’s parliament backs new fines for insulting the state online

A view of the Russian Federation Council headquarters, the upper chamber of Russian parliament in Moscow, Russia March 13, 2019. REUTERS/Maxim Shemetov

By Maria Vasilyeva and Tom Balmforth

MOSCOW (Reuters) – Russia’s parliament on Wednesday approved new fines for people who insult the authorities online or spread fake news, defying warnings from critics that the move could open the way to direct state censorship of dissent.

The bills – which now require only President Vladimir Putin’s signature before becoming law – received broad support in the upper house, days after thousands rallied to protest at tightening Internet restrictions.

Putin’s approval ratings have slipped in recent months to about 64 percent, but he faces little threat from an opposition held back by tough protest and election laws and with virtually no access to state television.

One bill proposes fining people up to 100,000 rubles ($1,525) for showing “blatant disrespect” online for the state, authorities, public, Russian flag or constitution. Repeat offenders could be jailed for up to 15 days.

The second draft law would give authorities the power to block websites if they fail to comply with requests to remove information that the state deems to be factually inaccurate.

Individuals would be fined up to 400,000 rubles ($6,100) for circulating false information online that leads to a “mass violation of public order”.

Lawmaker Andrei Klishas, from Putin’s United Russia party and one of the authors of the bills, said false reports that inflated the death toll at a fatal shopping mall fire in Siberia last year illustrated the need to tackle fake news.

“This kind of thing must be screened by the law,” he said.

Russia’s human rights council and a group of over a hundred writers, poets, journalists and rights activists called on the upper house of parliament on Tuesday to reject the law.

Council member Ekaterina Schulmann said the legislation, which the lower house of parliament approved in January, duplicated existing law and added that it could be applied arbitrarily because its wording was so vague.

Prominent cultural figures published an open letter describing the bills as an unconstitutional “open declaration of the establishment of direct censorship in the country”.

The Kremlin denied that the legislation amounted to censorship.

“What’s more, this sphere of fake news, insulting and so on, is regulated fairly harshly in many countries of the world including Europe. It is, therefore, of course, necessary to do it in our country too,” Kremlin spokesman Dmitry Peskov said.

Tougher Internet laws introduced over the past five years require search engines to delete some search results, messaging services to share encryption keys with security services and social networks to store users’ personal data on servers within the country.

(Additional reporting by Polina Nikolskaya and Anton Derbenev; Editing by Mark Heinrich)

Social media companies accelerate removals of online hate speech

A man reads tweets on his phone in front of a displayed Twitter logo in Bordeaux, southwestern France, March 10, 2016. REUTERS/Regis

By Julia Fioretti

BRUSSELS (Reuters) – Social media companies Facebook, Twitter and Google’s YouTube have accelerated removals of online hate speech in the face of a potential European Union crackdown.

The EU has gone as far as to threaten social media companies with new legislation unless they increase efforts to fight the proliferation of extremist content and hate speech on their platforms.

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016 to review most complaints within a 24-hour timeframe. Instagram and Google+ will also sign up to the code, the European Commission said.

The companies managed to review complaints within a day in 81 percent of cases during monitoring of a six-week period towards the end of last year, EU figures released on Friday show, compared with 51 percent in May 2017 when the Commission last examined compliance with the code of conduct.

On average, the companies removed 70 percent of the content flagged to them, up from 59.2 percent in May last year.

EU Justice Commissioner Vera Jourova has said that she does not want to see a 100 percent removal rate because that could impinge on free speech.

She has also said she is not in favor of legislating as Germany has done. A law providing for fines of up to 50 million euros ($61.4 million) for social media companies that do not remove hate speech quickly enough went into force in Germany this year.

Jourova said the results unveiled on Friday made it less likely that she would push for legislation on the removal of illegal hate speech.

‘NO FREE PASS’

“The fact that our collaborative approach on illegal hate speech brings good results does not mean I want to give a free pass to the tech giants,” she told a news conference.

Facebook reviewed complaints in less than 24 hours in 89.3 percent of cases, YouTube in 62.7 percent of cases and Twitter in 80.2 percent of cases.

“These latest results and the success of the code of conduct are further evidence that the Commission’s current self-regulatory approach is effective and the correct path forward,” said Stephen Turner, Twitter’s head of public policy.

Of the hate speech flagged to the companies, almost half was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.

The most common ground for hatred identified by the Commission was ethnic origin, followed by anti-Muslim hatred and xenophobia, including expressions of hatred against migrants and refugees.

Pressure from several European governments has prompted social media companies to step up efforts to tackle extremist online content, including through the use of artificial intelligence.

YouTube said it was training machine learning models to flag hateful content at scale.

“Over the last two years we’ve consistently improved our review and action times for this type of content on YouTube, showing that our policies and processes are effective, and getting better over time,” said Nicklas Lundblad, Google’s vice president of public policy in EMEA.

“We’ve learned valuable lessons from the process, but there is still more we can do.”

The Commission is likely to issue a recommendation at the end of February on how companies should take down extremist content related to militant groups, an EU official said.

(Reporting by Julia Fioretti; Additional reporting by Foo Yun Chee; Editing by Grant McCool and David Goodman)