Eliminate hate speech or pay, says Germany to social media companies
Every six months, companies will be required to publicly report on the number of complaints they have received and how they have handled them.
In Germany, which has some of the strictest anti-hate speech laws in the western world, a study released this year found that Facebook and Twitter failed to meet the national target of removing 70% of online hate speech within 24 hours of being alerted to its presence.
The report notes that while the two companies ultimately erased nearly all of the illegal hate speech, Facebook managed to remove only 39% within 24 hours, as requested by German authorities, and Twitter met that deadline just 1% of the time. YouTube fared significantly better, removing 90% of flagged content within a day of notification.
Facebook said on Friday the company shared the German government’s goal of tackling hate speech and had “worked hard” to address the problem of illegal content. The company announced in May that it would nearly double the number of employees worldwide dedicated to removing reported postings from its site to 7,500. It was also trying to improve the processes by which users could report issues, a spokesperson said.
Twitter declined to comment, while Google did not immediately respond to a request for comment.
The standoff between tech companies and politicians is most acute in Europe, where free speech rights are less extensive than in the United States, and where policymakers have often bristled at Silicon Valley's dominance over people's digital lives.
But advocacy groups in Europe have voiced concerns over the new German law.
Mirko Hohmann and Alexander Pirang of the Global Public Policy Institute in Berlin criticized the legislation as “misguided” for placing too much responsibility in deciding what constitutes illegal content in the hands of social media providers.
“Defining the rules of the digital public square, including identifying what is lawful and what is not, should not be left to private companies,” they wrote.
Even in the United States, Facebook and Google have taken steps to limit the spread of extremist messages online and to curb the circulation of "fake news". These include using artificial intelligence to automatically remove potentially extremist material, and barring news sites suspected of publishing false or misleading reports from making money through the companies' digital advertising platforms.