Last summer, following the release of a civil rights audit critical of Facebook’s handling of hate speech on the platform, I wrote an essay exploring a different way to think about regulating hate speech and misinformation. Given the decision this week by Facebook’s Oversight Board upholding the suspension of President Trump’s account, it seems like an appropriate time to share this previously unpublished piece. -BZ
Standing Up For Free Speech?
Earlier this month, Facebook was in the news again, this time following the release of a civil rights audit critical of the company’s laissez-faire approach to managing hate speech on the platform, and its uneven application of community standards to “newsworthy” individuals like Donald Trump. Not surprisingly, Facebook’s response was to pledge to do better. “We have a long way to go,” said Sheryl Sandberg, Facebook’s chief operating officer.
That same week, another assessment of free speech in today’s culture was published in Harper’s by over 150 mostly liberal, prominent intellectuals. “The free exchange of information and ideas, the lifeblood of a liberal society, is daily becoming more constricted,” the authors argued.
Although they criticized the censorious right, their real beef was with the growing strength of “cancel culture” on the left — that is, the phenomenon of online mobs calling for (and often achieving) the ouster of prominent individuals from professional positions as a consequence of those individuals espousing views that others believe are grounded in bias or discriminatory intent. “The restriction of debate, whether by a repressive government or an intolerant society, invariably hurts those who lack power and makes everyone less capable of democratic participation,” they continued. “The way to defeat bad ideas is by exposure, argument, and persuasion, not by trying to silence or wish them away” (emphasis added).
The release of both in the same week wasn’t just a coincidence. It reflects an active tension in our culture right now between those who think the marketplace of ideas requires more safeguards and those who think all “safeguards” endanger the sanctity of free expression itself.
In this space, I don’t want to get into the irony of Sandberg’s statement that they’re still only at the “beginning.” Likewise, I don’t want to get into the debate about whether the Harper’s letter’s authors were truly motivated by a concern for free expression or their own status. What I want to ask instead is whether the position shared by Facebook and the intellectuals is naïve or incomplete: whether it adequately accounts for the way information spreads in today’s digital society and for the dangers misinformation and hate speech pose to democracy, and whether we need a different framework for thinking about how best to safeguard and promote free speech.
The Problem with ‘Yelling Fire in a Crowded Theater’
About the only limit on speech we seem able to agree on in the U.S. is that “you can’t yell fire in a crowded theater.” The expression is paraphrased from Justice Oliver Wendell Holmes’ opinion in Schenck v. United States (1919), in which he upheld the conviction of leftist agitators who distributed pamphlets against U.S. involvement in World War I and articulated the “clear and present danger” test. Later that year, dissenting in Abrams v. United States (1919), he would argue that limits on speech should apply only when speech posed such a danger. And in 1969, in Brandenburg v. Ohio, the Court would adopt this logic and go further, holding that the government can limit speech only if it poses a risk of “imminent lawless action.” Last October, Mark Zuckerberg invoked this legal history to justify his company’s hands-off approach to political speech.
The related belief that more speech is always better, meanwhile, can be traced to the Enlightenment and, more recently, the nineteenth century writings of John Stuart Mill. In his classic On Liberty, Mill laid out what today we take for granted: that free expression is necessary for progress, because it’s only by placing ideas in competition that we can be sure the “winning” idea is true. (In Abrams, Holmes would refer to this poetically as the “marketplace of ideas.”) By Mill’s logic, even the most reprehensible ideas have a right to be expressed, because to suppress them runs the risk of suppressing other ideas that may have some element of truth. Applied to the present, and combined with the “imminent lawless action” test, this is what leads Facebook to justify leaving posts from Holocaust deniers and blatant falsehoods from Donald Trump up on the site, and the Harper’s intellectuals to criticize the ouster of New York Times editorial page editor James Bennet for publishing an op-ed by Senator Tom Cotton calling for the military to suppress protests in response to the killing of George Floyd.
The problem is that this philosophy doesn’t give us a way to think about what responsibility, if any, arbiters of speech (e.g., social media companies, broadcasters, newspapers, universities) have when making decisions about what speech to elevate and what to suppress. Stated simply: if someone is in a position to amplify another’s voice, what voices or what kinds of speech should they choose to elevate? Is all speech created equal? And what consequences should they face for failing to abide by this standard? These are the questions that Facebook and the Harper’s intellectuals overlook or seem not to understand.
The Debate Over Radio
This is unfortunate, since this isn’t the first time these questions have arisen in American public discourse.
A century ago, similar questions arose in the wake of a different then-disruptive communications technology: radio. Because broadcast spectrum is finite, Congress recognized that to grant a broadcast license was tantamount to granting someone a monopoly and a disproportionate ability to elevate their voice over others. But was that fair? And was it healthy for democracy? In Europe, Hitler and Mussolini seized on radio to build support for their campaigns of annihilation. And in the U.S., figures like Father Charles Coughlin — “the radio priest,” broadcasting from suburban Detroit — were using the medium to spread explicitly racist, anti-immigrant and antisemitic views.
In 1927, Congress attempted to answer these questions by requiring radio license holders to show that they were using their licenses “in the public interest.” In 1934, Congress expanded on this and created the modern Federal Communications Commission to interpret the requirement. Over the following decade, this gave rise to a First Amendment philosophy that held that, while the government couldn’t interfere with the content of speech, it had a responsibility to ensure a flourishing of free expression and public debate.
This so-called “positive” interpretation of the First Amendment led the FCC to require radio (and later television) broadcasters to dedicate air time to the discussion of issues of public importance and, when doing so, to “fairly represent opposing points of view” on each issue. (This latter requirement was known as the “Fairness Doctrine.”) In 1969, the Supreme Court upheld the constitutionality of these requirements, ruling: “It is the purpose of the First Amendment to preserve an uninhibited marketplace of ideas in which truth will ultimately prevail, rather than to countenance monopolization of that market, whether it be by the Government itself or a private licensee.”
Congress and the FCC would depart from the Fairness Doctrine and the philosophy behind it in the 1980s and 1990s, reasoning that with the rise of cable television and the internet, broadcast monopolies were no longer a concern. But as we enter the 2020s — faced with internet monopolies, the death of local news outlets, rampant misinformation spewed by political leaders, and deep social polarization — we should ask whether such positive First Amendment regulations are required again today. Is spreading misinformation the new “yelling fire in a crowded theater”? And even if not, should arbiters of speech be elevating it?
In considering these questions, two facts are worth bearing in mind:
- First is that just as a newspaper editor chooses which articles, ads, op-eds and letters to the editor to run, so too does a social media company decide what posts and ads you should see in your feed. Choosing these based on which posts you’re most likely to “engage” with and spend time reading reflects a decision to prioritize time-spent-on-site above other potential objectives. Alternative priorities might include elevating posts deemed educational, favoring posts from those geographically close to you, or simply showing content in chronological order. It’s no coincidence that time-spent-on-site is highly correlated with profits: more time on site means more ads and more revenue. So it’s no surprise that sites prioritize this. But again, the point is that it’s a choice.
- The second important fact is that, if left unchecked, digital properties have a natural tendency toward monopoly. As George Washington University’s Matthew Hindman observes in The Internet Trap (2018), when it comes to internet platforms, attention begets more attention, and more attention begets more market share. The reason is that as users spend time on a site, they produce data that the site can use to personalize ads and services. As the site becomes more personalized and profitable, it can invest in additional services and strategies that capture more attention and data, creating a flywheel that gives popular sites a long-term structural advantage. This is how Google grew from a humble search engine into a global powerhouse, how Amazon grew from bookseller into an e-commerce and cloud computing conglomerate, and how Facebook grew from a place to post photos of yourself and friends into a global town square. It’s also why national news outlets like the New York Times, Washington Post and Wall Street Journal are thriving online while local news sites shrivel.
The point is: similar to TV and radio, major digital platforms (and the biggest voices on them) have considerable power over what information gets elevated — and that with this power comes responsibility.
Given this, to frame today’s debate simply as a question of what voices to allow — as Facebook and the Harper’s intellectuals do — overlooks the question of what voices to elevate or amplify, when one has the power to do so. It overlooks the structural advantages that institutions like Facebook and the New York Times have, and by extension, the power of those at those organizations’ helms to raise and amplify certain voices. To put it simply, the question isn’t so much whether misinformation is akin to yelling fire in a crowded theater — that is, should it be allowed or not? It is: should those with power in the public square be elevating misinformation and hate speech? And: are they exercising their power in a way that benefits the public interest?
When considered this way, the debates over radio from a century ago take on fresh relevance and offer a different way of thinking about the Facebook debate and cancel culture. In choosing which voices to elevate, or in using an elevated platform to broadcast one’s own, is a person or institution positively advancing democratic discourse, or undermining it? Are they contributing to an inclusive marketplace of ideas, or making it exclusionary? And if a viewpoint can be construed by others as potentially exclusionary — for example, when J.K. Rowling questioned the nomenclature around transgender identity — is that person also endeavoring to show that they’re proffering it in the earnest pursuit of truth and understanding, and that they’re fundamentally committed to ensuring others can offer other points of view? Or not?
This standard, rooted in a positive First Amendment understanding, doesn’t offer a black-and-white way to resolve every dispute over alleged hate speech, misinformation, or fake news. But it does provide us with a framework that’s more constructive than ‘is this akin to yelling fire in a crowded theater?’ And critically, it allows us to account for the power of the speaker and institution when deciding what responsibility they bear for the quality and sanctity of democratic discourse.
Thinking Differently About Regulating Speech
Judged this way, it’s easy to see that Facebook’s responsibility to users goes beyond ensuring everyone can express themselves, political leaders included. As a communications platform serving more than 2 billion people, Facebook owes it to its community to ensure it’s fostering constructive democratic discourse. In practice, this means holding those with large followings to a similarly high standard. Posts by anti-democratic leaders that undermine democratic processes — such as Trump’s numerous posts questioning, without evidence, the sanctity of mail-in ballots — should clearly be flagged or taken down. Likewise, posts by public figures disparaging groups of people based on their sexual orientation, gender identity, color, language, disability status or creed — such as Trump’s posts disparaging Reps. Alexandria Ocasio-Cortez, Ilhan Omar, Ayanna Pressley and Rashida Tlaib by telling them “to go back where they came from” — should also receive strict scrutiny, since such racially-tinged comments can dissuade others from participating fully in the marketplace of ideas. I might add that this isn’t just about Trump. This standard could apply to the junta in Myanmar, who used social media to foment genocide against the Rohingya, or to politicians in India who used Facebook to incite hate against Muslims.
This standard gives us a way to think about cancel culture as well. Should the New York Times’ James Bennet have been compelled to resign for publishing the views of a sitting U.S. senator, as abhorrent as those views might be? This one is more complicated. However, if that editor’s job is to showcase opposing points of view on issues of public importance, then we might think twice before asking him to relinquish his position of influence, because he acted with the intention of supporting democratic discourse.
The case of Harald Uhlig, a professor at the University of Chicago and editor of the Journal of Political Economy, is more complicated and interesting still. On Monday, June 8, 2020, in response to the protests over the killing of George Floyd under the knee of a Minneapolis police officer, Uhlig tweeted that the Black Lives Matter movement had “torpedoed itself” by embracing calls to “defund the police.” After saying that BLM leaders were akin to “flat earthers and creationists” and alleging that they were taking advantage of George Floyd’s family, Uhlig went on to sneer: “Time for sensible adults to enter back into the room and have serious, earnest, respectful conversations about it all.”
Beyond sparking an intense backlash, the episode raised questions over whether someone with those views could be a fair arbiter of submissions to the Journal of Political Economy, particularly submissions pertaining to matters of race. And it played into bigger conversations about the economics profession as a whole, which remains disproportionately white.
A day later, Uhlig tweeted an apology: “My tweets in recent days … have apparently irritated a lot of people. That was far from my intention: let me apologize for that.” In addition, he emphasized that the views were his own, and not those of his department or the Journal of Political Economy, which remained a “bastion of free speech.” He went on to quote someone who had criticized him directly, and to suggest that he supported police reform but took issue with the idea of “defunding” the police.
Was Uhlig’s apology sufficient? I don’t know. But my gut tells me that someone who expresses himself in a way that makes others feel unwelcome in the same profession probably isn’t the best person to be an arbiter of submissions to an academic journal, particularly when that journal has a long history of not publishing much on the political economy of race. My argument isn’t that a positive First Amendment standard gives us a clear answer. It’s that asking what speech to elevate, and whether those with power over that decision are using it to advance inclusive democratic discourse, gives us a more constructive way of having these debates.
Elevating Public Discourse
Such a standard can also give us a higher quality public discourse.
It’s no accident that the period from the 1950s to 1970s — bookended by Edward R. Murrow and Walter Cronkite, and interspersed with investigative journalism breakthroughs such as the Pentagon Papers and Woodward & Bernstein’s Watergate reporting — is today remembered as a “golden age of journalism.” According to the University of Pennsylvania’s Victor Pickard, attempts by the FCC to put teeth into the public interest requirements for broadcasters in the 1930s and 1940s, coupled with major discussions about newspaper industry self-regulation in the 1940s — notably the Hutchins Commission — gave rise to a generation of journalists, pundits and industry leaders who took their power over public discourse seriously. Broadcast news divisions were considered necessary cost centers and a public service, while newspapers conceived of themselves not just as profit-making enterprises, but as “common carriers of information and discussion.”
Arguably, this contributed to a politics that was, if no less passionate, certainly less polarized than today. One wonders how much further George Wallace could have gotten in his presidential ambitions in 1968 had he had the backing of a one-sided broadcaster like Fox News or a platform like Facebook where, for public figures, anything goes, and if people could choose to consume only media they agree with, as is the case today. Instead, as Wallace advocated using federal police powers to crack down on anti-war and civil rights protesters in Washington, and promoted the sanctity of “states’ rights,” he faced serious scrutiny from members and institutions of the press. During a spirited episode of Face the Nation in 1968, for example, CBS News correspondent Martin Agronsky pushed Wallace to admit that he remained steadfast in his call for “segregation today, segregation tomorrow, segregation forever.” Remarkably, at least when it came to Alabama, he said he did.
Contrast that level of scrutiny with Facebook’s hands-off approach to misinformation, race baiting, and fear mongering on the part of public officials. In March, Zuckerberg told Fox News: “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online. … [T]hese platform companies, shouldn’t be in the position of doing that.” This echoed his remarks last fall, in which he claimed social media is an entirely new institution in society, a “Fifth Estate,” that provides ordinary citizens with a way to organize and have their voices heard.
And maybe he has a point. But why can’t we hold onto that positive aspect of digital media while demanding that internet companies adopt policies that elevate public discourse? Why shouldn’t the Fifth Estate be bound by the same ideals of social responsibility that gave us journalism’s golden age? And why shouldn’t those with powerful voices be taken to task if they use their position to undermine inclusive democratic discourse instead of promoting it?
Thinking about today’s free speech debates in terms of what voices to elevate, and how to hold those with power accountable for using it to advance inclusive democratic discourse, might not give us all the answers. But at least it can give us a framework for a more constructive democratic debate.
What do you think?