Social Media Content Moderation and International Law
By Erin Howard
In a world of instant communication, the ability to use social media platforms can be a force for change on a revolutionary scale. Over the last decade the world has witnessed a massive shift in the use of social media: no longer simply a place to share favorite music, pictures, or relationship status, these platforms have become tools that “revolutionaries” use to send their message. From the #UmbrellaRevolution in Hong Kong to the #SudanUprising that finally toppled a decades-long dictator, social media platforms are being used to organize protests and revolutions in ways academics and innovators had not foreseen. These simple hashtags and platforms not only provided a stage for planning; they also provided real-time access to what was happening on the ground. In countries where the media is heavily restricted or where news arrives slowly, access to real-time pictures and videos allows those watching to know what is actually occurring.
The impact of social media, like that of any communication tool, cuts both ways. In the 2016 U.S. presidential campaign, Russian agents were found to have purchased ads through Google and used profiles on Facebook and Twitter to “spread misinformation and sow division.” Facebook posts were also used by Myanmar’s military to incite genocide: posing as national heroes, military operatives shared false stories and anti-Rohingya propaganda. Lack of moderation on social media platforms has also created spaces where violent extremists, such as incels and white supremacists, share information and justify their actions to one another. In rural areas with little access to information, grisly rumors spread on social media can even incite violence in the local community.
How can this content, with its strange mixture of political protests, pop culture, porn, and propaganda, be moderated? This area of regulation crosses international lines and involves potential human rights issues ultimately overseen by multinational institutions. Two extremes of moderation exist. At one extreme, freedom of expression is tolerated without limit, allowing for political and ethnic hate propaganda, potential sex trafficking, the formation of terrorist networks, and the spread of “fake news.” At the other, complete control over what may be posted takes away the universal right to “hold opinions without interference” and to “receive and impart information . . . through any media.”
In this “Wild West” of technology and freedom of expression, some governments have taken it upon themselves to regulate the content. In 2018, as part of an effort to curb sex trafficking online, the U.S. Congress passed the controversial HR 1865, known as FOSTA-SESTA, an exception to Section 230 of the 1996 Communications Decency Act that holds website publishers responsible for third parties who post ads for prostitution on their platforms. At another extreme, some countries, on the heels of massive calls to protest and revolution, respond with a heavy silencing force. After the Sudanese military violently broke up a massive sit-in in the late spring of 2019, access to the social media platforms used to organize protests was cut off in a nationwide internet blackout. A court ruling brought the blackout to an end, forcing telecommunications companies to restore internet access to Sudanese users.
How do governments and social media companies manage this charged atmosphere and balance local laws, the right to freedom of expression, and the need to keep hate speech and propaganda off their platforms? Many countries use features like Facebook’s take-down request to enforce their local laws. Facebook says that it removes such content “only in the country or region where it is alleged to be illegal.” Earlier this year, Facebook announced a ban on “praise, support, and representation of white nationalism and white separatism,” offering alternative organizations to those in search of these posts. Will these individual measures cumulatively be enough? One suggestion is for social media platforms like Facebook to “requir[e] governments to justify their take-down requests in keeping with the International Covenant on Civil and Political Rights (ICCPR).” These rights, which endorse freedom of expression, allow restrictions that are necessary to protect the rights and reputations of others, national security, and public safety, or that restrict propaganda for war and incitement to discrimination, hostility, or violence. If Facebook’s Oversight Board chooses to exercise its power by enforcing these standards, it has the potential to effect real change in internet speech, with the parallel effect of becoming the de facto global speech moderator.