What’s the Best Way to Keep Incendiary, Violent Content Offline?

In early April, in the anxious days of mourning after the
massacres at two New Zealand mosques, the Australian government passed what it
called the Sharing of Abhorrent Violent Material bill. The hastily drafted law called for fines of up to 10 percent of annual revenue for companies, and up to three years of jail time for technology executives, for failure to “expeditiously” take down abhorrent content. According to The Guardian, objectionable content would include “videos depicting terrorist acts, murders, attempted murders, torture, rape or kidnap.” It’s unclear exactly how quickly companies are expected to take down content; Australian politicians have suggested that a “reasonable” timeframe might be an hour after a major event. A similar German law allows a 24-hour grace period before fines of up to 50 million euros are issued; the European Parliament is currently considering a law that would allow only an hour.
Industry groups fretted that the law was cobbled together too quickly, and
could implicate almost any employee of a tech platform. An opposition party
promised revisions to the law should it come to power.

Regardless of how Australia’s law is implemented, it could
lead to complex legal disputes. What constitutes “abhorrent” speech? And to
what extent can national governments exert sovereignty over our communications
platforms? Zoom out and you see a liberal democracy struggling with how to
respect speech rights while also protecting its citizens from harmful or
inciting material promulgated on tech platforms over which it has only partial jurisdiction. How do we apply government protections to environments without
borders, entirely controlled by private corporations? The answer isn’t clear,
particularly since these companies style themselves as public squares, pseudo-political
entities unto themselves with a responsibility to offer opportunities for (some
kind of) free expression. What is clear is that we are a long way from being
able even to contemplate such a law in the United States.

In the U.S., calls for regulating inciting or violent content can run headlong into conservatives’ fears of being “deplatformed” or “shadowbanned,” terms for when a company removes users or quietly suppresses their content. One person’s effort to eliminate hate speech is another’s censorship. It doesn’t help that we know very little about
the standards and processes that Facebook, Google, and other big tech platforms
use to filter content. In the EU, at least, platforms have to publish reports
twice a year on their efforts to combat hate speech. Elsewhere, the
decision-making process is rather opaque, with clues coming only from the occasional leak or a carefully curated tour through a content moderation facility.

The United States has few laws governing social media
content. The most prominent and influential is undoubtedly Section 230 of the
Communications Decency Act (CDA). Passed in 1996, Section 230 has developed its
own mythology and has often
been cited as the key legal underpinning for the massive unfettered growth of
social media sites and other tech platforms. The law is open to some interpretation, but it essentially does two things: it prevents companies like Facebook and Twitter from being treated as publishers (publishers are legally responsible for the content they publish, even when it is written by individual contributors), and it offers a liability shield to companies that make voluntary, good-faith efforts to moderate “objectionable” content. In effect, it immunizes internet companies twice over, as a recent report by the Congressional Research Service noted. If
the United States were to adopt an Aussie-style law promising large fines or
even jail time for failure to address violent content, legislators would first have
to address Section 230.

There is in fact a growing bipartisan push to reform or
overturn Section 230, but not always for reasons related to objectionable
content. “It’s allowed a lot of these mega-companies to get really big, really
rich, and really powerful and to avoid competition. And it has allowed these
companies to exert editorial influence without being subject to the usual controls
on editorial activity,” Missouri Senator Josh Hawley told The Verge in March. Anti-monopoly conservatives increasingly skeptical of large social media companies are joined by conservatives, Senator Ted Cruz among them, who are concerned about “political bias and censorship” at Facebook, Google, and Twitter. Democrats like Nancy Pelosi have also spoken favorably about
addressing Section 230. The bipartisan concern, while perhaps founded in
differing motivations, speaks to the developing sense of urgency about curbing
the power of tech companies.

Regulating so-called Big Tech, however, could be approached in many ways, and not necessarily with the large penalties Australia has opted for. Getting rid of the “lawless zone” created by Section 230, David Golumbia, the author of The Politics of Bitcoin, wrote to me in an email, should be the first step toward a more comprehensive regulatory structure. The United States also needs a true information commissioner, whether as an independent agency or as a division of an existing one like the Federal Trade Commission. “It needs to have real regulatory power,”
Golumbia wrote, “including fines, just as the FDA and FTC have. Possibly even
the ability to refer for criminal prosecution, and certainly for civil law
violations.”

Carrie Goldberg, a lawyer known for representing victims of revenge porn, places more emphasis on finding
justice in the courts. “The CDA is not a law that has anything directly to do
with speech,” Goldberg wrote in an email. “Rather, it is a law that eliminates
individuals’ rights to hold tech companies liable in criminal or civil court
for harms that are caused through the use of their product.”

One of her clients, she told me, was a man whose
ex-boyfriend posted a fake profile of him on Grindr and encouraged men to come fulfill
his supposed rape fantasy. More than 1,000 men showed up over the course of a
month, while Grindr said it had no ability to deal with the fake profile. “So
we sued using product liability theories,” Goldberg said. “The court threw the
case out, saying Grindr was protected by the CDA.”

The incentives for tech companies would be different
without the liability shield the CDA offers, she argued. “If Facebook were
afraid of being sued, it would do things to stop Russians from spreading fake
news and taking over our elections and being sociopaths with our personal data.
We would gain a great deal socially.”

Responsible media regulation in the U.S., then, might be
less about passing new laws with harsh penalties, as Australia has done, and
more about repealing existing legislation that gives tech companies
unprecedented freedom from responsibility for the content they host.

“The justice system is where we individuals get justice,”
said Goldberg. “We should all be terrified as fuck if the tech industry is
outside the reach of our courts.”
