Facebook: The new censor’s office


Social media is usurping the role of the state

With great power comes great responsibility, but for Silicon Valley’s mega-corporations, that lesson hasn’t quite sunk in. The companies that host the vast majority of our online expression – most only a little over a decade old – have amassed a tremendous amount of power, and the ability to influence everything from how we consume news to what we wear.

For many users of corporate platforms, social media feels a bit like a public square, where debate and trade occur and movements begin. We’ve come to treat platforms like utilities, but they are not – nor can they be – neutral. The laws and proprietary rules that govern these companies have a uniquely American flavor, as does our increasing reliance on them to do the job of the state for us.

The recent US Senate hearing, during which CEO Mark Zuckerberg was questioned by unwitting lawmakers about his company’s actions, addressed this issue. Senator Ted Cruz asked Zuckerberg directly whether his company deems itself a neutral platform, noting that Section 230 of the country’s Communications Decency Act (CDA 230) provides a freedom from liability to neutral platforms hosting speech (note: platforms needn’t be neutral to benefit from Section 230). The young CEO hedged, citing unfamiliarity with the law.

In recent months, the calls for corporations to impose or increase regulation of certain types of speech have reached a fever pitch. In the halls of governance, the opinion pages of major newspapers, and the policy recommendations of NGOs, the consensus is that Facebook, Google, and the like should take on the mantle of government and censor hate speech, regulate ‘fake news’, and fight extremism…all without significant (or in some cases, any) oversight from civil society.

In Europe, this is already happening. In 2016, the European Commission signed a ‘code of conduct’ with four major American tech companies – Microsoft, Google, Facebook, and Twitter – aimed at reducing illegal content online. According to the code, companies should review reported content within a certain time frame and delete hateful speech that goes against their own terms of service. The code does not refer to illegal content per se, but rather pushes companies to adhere to their own proprietary governance structures. Civil society groups were initially part of consultations, but resigned from them in 2016, citing a lack of transparency and public input.

Similarly, the German Netzwerkdurchsetzungsgesetz (NetzDG) law, which went into effect in late 2017, requires companies to delete certain content (such as threats of violence and slander) within 24 hours of a complaint being received (or, in cases of legal complexity, within seven days). The law was roundly criticized by internet activists in Germany and abroad, and has already produced a great number of false positives.

While Europe is understandably concerned with the rising tide of hate speech, deputizing American companies to determine what is or isn’t hateful is an odd way of dealing with it. After all, these are the same companies that elevate ‘civil’ hate speech above profanity, routinely censor counterspeech, and often fail to take white supremacist terrorism seriously.
Furthermore, these regulations give the illusion of safety and security, while in fact they are further eroding democracy by placing ever more power in the hands of unaccountable actors.

New Internationalist for more
