Oct. 4, 2021

Who gets to speak freely?

We know a few things for certain about the freedom of speech here in the United States: It’s written into the Constitution, it’s fiercely defended by lawyers and laypeople alike, and we’ve got it better than a lot of other countries.


From my vantage point (this laptop), free speech is a complicated, deeply nuanced ideal.




Hello there and welcome everyone to Thinking Is Cool, the show designed to make your next conversation better than your last, even if that conversation happens in the comments section.


My name is Kinsey Grant and I’m the host of this show and, famously, the sometimes proud owner of one of those useless journalism degrees. My freshman year of college, which was eight years ago, I was forced to memorize the First Amendment for a grade in my intro to communications class. Let’s see if I’ve still got it.


[Try to do first amendment here → Congress shall make no law etc etc etc]


Okay, so I might not be able to recite the earliest words of this country’s Bill of Rights, but I can do this: Question the very integrity of those early words in the Bill of Rights. More specifically, the idea of freedom of speech. 


Even if you weren’t a journalism major who struggled through journalism law junior year spring, you know how much this country loves to talk about the First Amendment, which entitles us Americans to the freedom of religion, the freedom of the press, the freedom to peacefully protest, and the freedom to petition our government if and when it goes astray. But arguably the most important freedom handed to us by the First Amendment is the freedom of speech.


It is that freedom, the freedom of speech, that allows us to disagree, to learn and share, to voice once unspoken thoughts, to engineer a more verbally and intellectually diverse world. But it’s also in no small part to blame for our unbelievable penchant for “well, actually…”


And for that reason, it has limits. Freedom of speech is not an absolute freedom, nor should it be. But most of the lines drawn in the sand were drawn centuries ago, long before Mark Zuckerberg’s girlfriend dumped him and he decided it would be cool and smart to give everyone a giant, rapidly scalable megaphone.


And now...Pandora’s Box has been opened, and we have the receipts to show how wildly impactful free online speech can be. So how do we, citizens of a country famous for how much it loves freedom, interpret freedom of speech in a changing and increasingly online world? Who gets to speak freely on the internet? Well buckle up, because we’re about to start thinking it through.


Before we do, I have to say thank you from the bottom of my heart to our friends at Fundrise for being this season’s presenting sponsor and making episodes like this one possible. And thanks to you for all the feedback on my last episode about homelessness. It was a powerful episode made all the more special by how closely all of you listened and learned and thought. Thank you.


Now...you know the drill. Nothing is off limits. Everything is on the table. Take it anywhere. And remember, thinking is cool, and so are you.


*Fade out intro music*


If I told a blatant lie right now, what would you do? To which authority would you report me? If I said, “Stanley Tucci is not attractive,” a sentence we all know to be a flagrant mistruth...what would happen to me?


Honestly, in my position, not a whole lot. But luckily for you, I consider two things to be core to who I am and what I believe: 1) telling the truth is an enormous and important responsibility that I will never give up and 2) Stanley Tucci is really hot.


But I could get away with saying otherwise online, and that’s mostly because of that First Amendment I butchered in the first minute of this episode. Barring a few outlier circumstances, we are all given the freedom of speech the moment we’re born in the United States or become citizens of it.


But freedom? Of anything, but especially of speech? Not always absolute. Sometimes, freedom is conditional, and sometimes those conditions are as necessary as they are complicated. And rarely has that conditional nature of free expression been more obvious than over the last year.


Our shared metamorphosis into beings as online as we are offline has been rapidly accelerated by a pandemic. When we talk about expressing ourselves, we’re talking not about shouting in the proverbial town square but about broadcasting our thoughts to an audience of some 3 billion people online.


For me, and for countless others, this metamorphosis begets a question: We have a decent idea of how to appropriately govern speech without muzzling speakers in real life...but what about online? What happens when our means of speech is a private, capitalist company? How do we make rules that fit the modern Information Age?


The time has come to ask ourselves an important question: Has the ability to speak freely made the internet the dumpster fire that it is, or was it always going to be a dumpster fire regardless of what the Constitution says?


And perhaps more importantly, at the core of this all—who gets to speak freely online?




Of all the episodes I’ve published, this might be the most complicated. I’ve scoured the internet for days working on this one, reaching far back to my time in mock trial and my journalism courses and, believe it or not, my college admissions essay. I’ve become more proficient in constitutional law and terms of use, and I’m confident enough to have an idea of an answer to that question...who gets to speak freely online...but you should prepare yourself now to think yourself into a pretzel on this one.


Join me on this journey as we work to unravel the impact that some 230 years of free speech ideals have had on this little thing called the internet. Let’s jump in.


*Roll transition music*


My first thought when Donald Trump was deplatformed was “whew took ‘em long enough.” My second thought? “Wait...isn’t this kind of a slippery slope?” And then, my brain went haywire. It’s hard to conceptualize removing the biggest platform of the leader of the free world because, well, it doesn’t usually happen.


This—deplatforming an outgoing United States president, removing accounts for staging large-scale harassment campaigns against actresses in a Ghostbusters remake—this wasn’t always how the free speech conversation went.


So that’s where we start today—understanding what deplatforming means in context. To get who can speak freely online, we have to get what free speech really means.


It’s never been simple to understand free speech, but it has been simpler. The First Amendment entitles us to freely express ourselves, and it keeps the government from censoring that expression. That’s a key detail in understanding how we apply free speech principles to the internet. Those principles have extended to the online realm essentially since the Information Age dawned. Posting is speech. But...there’s more to unpack.


When you have a question about free speech, you Google it. When you Google it, it takes you to the ACLU. So I’ll let Vera Eidelman, Staff Attorney for the ACLU Speech, Privacy, and Technology Project, explain further:


KINSEY: I'm curious as to how our interpretation and our understanding of the freedom of speech has changed since we've applied these ideals to the internet. You know, when these ideals were first put forth, first published, first popularized, Facebook and Twitter were not a thing, obviously. So how do you think that our understanding of freedom of speech and of expression has changed because of the advent of the internet?


VERA: So I think one thing that's important to keep in mind when we're talking about free speech or free expression as protected by the First Amendment is that what that really is looking at is government conduct, government regulation. So the First Amendment is really about limiting what the government can do and enabling private speakers to speak freely. And I think that concept is still very much alive and well and critical when we're thinking about online speech. I know that a lot of what you're thinking about today, and what people listening may be thinking about, is how do private companies moderate content? How do private companies decide who gets to post on Facebook, who gets to tweet on Twitter, et cetera? But it's also really important to keep in mind that the government still is a very active censor online.


Vera continued to tell me this:


VERA: Can we apply the First Amendment legally to Facebook or to Google or to Twitter, or to YouTube, which I know is part of Google? I think that the answer there is generally no. So generally, legally, the private platforms are protected by the First Amendment rather than constrained by it. Their choices about who to associate with, what speech to publicize or distribute, as a general matter will be protected by the First Amendment. But that's not to say that normatively they should be shying away from or moving away from free speech principles. And that's one of the things that I think is interesting to work through here, too. There's a difference between First Amendment law and requirements, and then free expression or free speech principles and values.


There are two important takeaways to keep in mind as we keep going today—the first is that there’s a difference between a private company moderating content and a government infringing upon constitutional rights first set forth in 1791.


Admittedly, 1791 was a good year for free speech. Back then, we had slightly more obvious ideas of what it meant to freely express oneself. And then 1792 happened.


My point is that, for as long as there has been an idea of free speech, there has been ample reason to limit it for our safety and the safety of this country. And that framework for tweaking free expression ideals to meet modern needs is the second important learning to keep in mind as we keep going today.


Free speech, online or otherwise, isn’t absolute. This is from The New Republic: “What is free speech? The First Amendment to the U.S. Constitution—perhaps the most explicit legal protection of the right to free speech in the entire world—does not say. It simply states that ‘Congress shall make no law … abridging the freedom of speech, or of the press.’ It does not, as the saying goes, define its terms.”


Just seven years after the Bill of Rights was ratified, Congress tested it by passing the Sedition Act of 1798, criminalizing almost any criticism of the federal government. That Sedition Act eventually expired, but it illustrates an important point: free speech is not static.


In 1919, for example, the Supreme Court declared in Schenck v. United States that “the most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.” When speech creates what that case called a clear and present danger...it is not protected.


That’s good for you and me. It means that people can’t go around saying anything they want, anything that could put others in danger, and claim free speech.


Those norms were set forth 100 years ago, but the values behind them ring as true today as they did in 1919. When speech breeds danger, it cedes any claim to protection. It’s hard to argue with that logic...but it’s easy to argue with our modern interpretation of that logic.


Which brings us to deplatforming, or removing someone’s ability to post in an online forum. Think about the word “deplatform” and then think about how silly it would have sounded to us a decade ago. We didn’t even know the word then, and now it’s a singular verb around which countless hours of debate have revolved. Why is that? As in many cases of arguing the morality of the internet, this one has quite a lot to do with former President Donald J. Trump.


Of course, Alex Jones and Milo Yiannopoulos were deplatformed before him, but Donald Trump really brought the concept to the mainstream. What happened with Trump? Here’s the ACLU’s treatment of that very weird time in our shared history:


“Prior to the January 6 attack, the platforms experimented with various responses to posts by Trump that violated their community standards, from simply leaving them up, to labeling them, to restricting their distribution. On and after January 6, the platforms took action at the account level. Twitter permanently suspended Trump’s account “due to the risk of further incitement of violence.” YouTube suspended his account indefinitely, applying a sanction that didn’t appear to exist in its policies, also pursuant to the platform’s incitement-to-violence policy, and has since said it would end the suspension when it determines the risk of violence has sufficiently fallen.


“Facebook initially also suspended Trump’s accounts indefinitely, also without tethering the decision to an existing sanctions policy. In response, its Oversight Board ordered Facebook to impose a clear and proportionate penalty, and to explain where it came from. Recently, Facebook announced that it would suspend Trump for two years — until Jan. 6, 2023, which, it should be noted, is shortly after the next midterm elections.”


We can all agree that shouting “fire” in a crowded theater when there is no such fire is bad. But what we can’t seem to agree on is whether people who’ve been deplatformed like Donald Trump actually shouted fire.


And to take it further, who gets to decide what counts as shouting fire? In this example, it was businesses, not the law. Businesses optimized for...probably revenue more than anything. Businesses that no doubt benefit from loud voices like Trump’s and Alex Jones’s.


For a long time, platforms like Facebook and Twitter excused what could easily be labeled as harassment and threats from power users, many of whom call the cesspool in our nation’s capital home.


But in January, something shifted.


It’s important to note that we’ve almost gone nose blind to the persistent reek of government instability and online mud-slinging. We’re so used to it that we can no longer recognize how absolutely bananas the last several years have been. We’ve grown tragically accustomed to the Marjorie Taylor Greenes of the world, and that’s cost us our radar for bullshit.


What we’re living through right now are completely unbelievable circumstances. A sitting president nearly refused a peaceful transfer of power to the next administration. A truly nonsensical Congresswoman from Georgia (ahem, Marjorie Taylor Greene) who does not have a medical degree is saying that it’s her opinion that vaccines don’t work. This is the twilight zone.


But it seems it’s also our reality. So we have no choice but to consider, with every ounce of thoughtfulness we can muster, who gets to speak freely online. How do we draw the lines? What equates to shouting fire on the internet, when it kind of feels like everything is shouting fire?


What counts as clear and present danger, that aforementioned line which we cannot cross, online? Does it count as clear and present danger if your uncle falsely claims vaccines don’t work? What about when Representative Marjorie Taylor Greene falsely claims vaccines don’t work?


Does it count as clear and present danger when your old college classmate says the January 6th Capitol insurrection was a spectacular showing of Americans opening a can of whoop ass on the libs? What about when the sitting president tells the angry mob “We love you. You’re very special” and then tweets “Go home with love & in peace. Remember this day forever”?


The point here is that not all speech is created equal. To understand who gets to speak freely online, we have to understand who gets to be amplified online. That’s the crux of all of this. And we’re going to go deeper with a fantastic guest in just a moment, after a short break to hear from our friends at Fundrise.




Before you heard from Fundrise, I told you we were nearing the crux of, to quote myself, “all of this.” And I mean it—the concept we’re exploring now, reach as a component of speech, is the notion upon which this entire question of who gets to speak freely online hinges.


Let’s go deeper with Charlie Warzel, writer of the Galaxy Brain newsletter and celebrated tech commentator...and also one of my favorite online smart people. Seriously, Charlie has the most incisive commentary on everything from Big Tech to WFH to Covid-related frustration. Charlie?


CHARLIE: What remains kind of unsaid in that is the idea of distribution and amplification and reach, right? So a good frame for the speech conversation as it pertains to Facebook, Twitter, YouTube, all these different platforms, is this idea that it's not just organic. These companies take different forms of speech and they amplify them algorithmically to different audiences. What you're seeing is not just the raw nature of everyone on Facebook trying to reach you. It's amplified in a sort of opaque way. And so I think that this idea of reach is crucial, because it's what the platforms actually control. And it is unnatural in the sense that, you know, I think giving everyone a platform to speak is much more natural than the idea that the most incendiary voices are going to end up being the loudest because a certain proprietary algorithm makes that so. And so I think that's a very helpful way to frame the conversation, because the tech companies tend to hide behind that, right? And just say, we're giving everyone equal access, which is true, but they're not treating every voice the same.


We could call it a day right there and say that we should all be able to speak freely, but tech leaders should be more thoughtful in terms of whom they amplify. But as you and I both know, tech leaders are rarely thoughtful in the ways we want them to be. So let’s take it upon ourselves to be thoughtful for them, yes?


*Roll transition music*


I’ve always tried to be as honest with you as possible in this show. It’s my job to go out, research, report, interview, reflect, and then come to you with a fully formed story that helps to illustrate how I reach my own conclusion on the issue at hand.


In the case of free speech online, trying to be thoughtful has never been harder. My brain is a swirling mess of thoughts and hypotheticals and ideas and morals, and I don’t really know how to organize all of it. So instead of trying to tie this impossible question up in a pretty bow, I’m going to lay it all out on the table for you and with you.


And I’m not going to script the next few minutes of this episode. I’m working from an outline, and we’re going to think this through together.


    1. Who gets to speak freely? Should anyone be barred from speaking? I don’t think people should be completely barred from speaking. That’s not right. That’s not the solution to making people less asshole-y online. In fact, I think it makes them more argumentative.


  • VERA: Basically, someone doesn't get to speak online. This person no longer gets to exist as a voice, as a speaker, as a thinker in the online universe. And sure, sometimes that might feel great. Sometimes we might think, oh, that person, the things they were saying were so hateful, the things they were saying were so upsetting. I so disagree with them. I don't want to have to see what they have to say. But I think that that power is so enormous when we're talking about a pretty limited set of social media platforms. I should also say, I think a lot of my own views on this depend on the size and the role of the platform. But for those that are acting as gatekeepers, that are really deciding who gets to exist as a thinking being, as a speaking being online, it gives me pause when those companies decide to entirely remove someone, to entirely shut down an account, to entirely remove certain pieces of content. That's not to say that those things have to be heralded, or easy to find even, but they certainly, I think, should be able to exist online so that we can all develop our thinking, so that we can all respond to each other and say, we don't agree. I think that those values, for me, are invoked when we're thinking about a pretty limited but powerful set of gatekeepers, even if the law technically does not apply.


    2. We’re increasingly online—that’s how we form who we are. What happens when we take that option away from someone? Should we be scared?
  • Do we hold MTG and Trump and Antifa to a different standard than everyone else because they have a different platform? Yes. And the platforms do too—remember when they refused to remove some patently false information posted by politicians because it was “newsworthy”? When you’re a public figure, you’re held to a different standard. And you should be. But...
    • Examples of people who have had their freedom of speech online taken away, whether for a short suspension or forever: MTG on Twitter, Trump on most platforms, Rand Paul on YouTube, Alex Jones basically everywhere
      • Liberal vs. conservative (it doesn’t affect us all the same way) -- At some point, we have to be honest with ourselves. And by we I mean liberals. This is a phenomenon that disproportionately affects the right.
  • There’s a very real, very measurable difference between using the reach these tech platforms enable to, for example, make funny TikToks about your childhood crush...and to suggest that an insurrection on the Capitol was a lovable endeavor. My decision to post a selfie on my Instagram feed is light years different from Marjorie Taylor Greene suggesting that bloodshed was imminent unless the recent (and fairly executed) presidential election was recalled.
    • The difficulty there is that the scale of these platforms is so enormous that we can hardly parse through what’s good and bad, true and false.


And there you have it. My public rumination. Our public rumination.


*Roll transition music*


We started with one very big question: Who gets to speak freely online? The answer, unsurprisingly, is...I don’t know, at least not yet. But the broad, potentially also unanswerable sub-question in all of this is as follows: Should speech truly be free? Can we apply an absolutist set of free expression ideals to the internet world?


Those who say yes, free speech can be an absolute freedom, often say so for these reasons:

  • Free speech, regardless of who utters it and where and how, protects the voices and political rights of everyday people from the at times oppressive power of the government
  • Deplatforming people in the modern context has been referred to as a “censorship orgy” or a move to “one-party control of information distribution” or the ambition of Big Tech to “utterly erase you from modern existence.”


Those who say no, free speech can’t be an absolute freedom, often say so for these reasons:

  • Deplatforming and other stark content moderation typically only happens in response to equally troubling threats. You’re not being censored for saying Joe’s has better pizza than Bleecker Street.


The answer to that question—should speech be truly, absolutely free? It’s an incredibly fine line, and I’m not going to pretend to know what the perfect solution is. But at some point, we’re going to have to take a stand. We’re going to have to decide what counts as freedom of expression and what counts as dangerous, violent speech. The next question I can’t stop pondering? Who’s we in that scenario?


Who gets to (or has to) decide? Governing bodies that wrote the First Amendment? Social media platforms on which people are speaking? The people themselves?


Let’s take a short break to hear from our friends at Massican, because you’re going to need a drink after this episode is over.




Before you heard from Massican, we were talking about, and again, one of my favorite words, responsibility. Who gets to decide what counts as freedom of expression and what counts as dangerous, violent speech?


The typical roster of potential answers: users, regulators, or platforms. Let’s go through those, but with a little context, sooo...


In the United States, free speech might be part of what we consider the Bill of Rights, but what I’m thinking is this: Free speech is not a right—it’s a value and a privilege. It’s something we’ve been graciously given for no reason other than, in most of our cases, being born here. We get to say what we want, to speak out in protest, to believe what we choose to believe and do it publicly...because a very long time ago, our forebears made it so.


But free reach is neither right nor privilege. It’s a responsibility. And it’s one the framers of our Constitution might have never seen coming. Sure, they had their faults (I say as a woman who’s only been legally permitted to vote for a few generations now), but how could they know what today would look like? How could they know that all 330 million of us Americans would be handed megaphones designed to put us on our worst behavior? They couldn’t. 


And for that reason, we can’t look to the laws and standards set in 1791 to govern 2021. The Bill of Rights is only as good as our interpretation of it. Today, we’ve misinterpreted the idea of free expression, viewing it not as a privilege but as cover to say anything at any cost. It’s in no small part led us to where we are today—logging on is in many ways opting into a giant, engulfing dumpster fire. It’s bad out there, and I’m not the first to tell you. 


But I do think it can get better. That is...if we want it to. If we, whoever we are, take responsibility.


Earlier this morning, over our customary breakfast of overnight oats with berries and nut butter and cacao nibs, my dating app boyfriend and I were talking about what free speech means today. I was struggling to think through my own position, given my penchant for at times very vocal online criticism of some of this country’s most powerful people. If not for the First Amendment, I might be targeted for publicly hexing Mark Zuckerberg this often. 


But I recognize that my decision to do said hexing is both 1) punching up and 2) based in fact. I’m not saying anything on this podcast that isn’t the result of my own research and interviews and deep, deep reflection. Because I know my responsibility as someone with a platform.


As my dating app boyfriend pushed me to think about how fraught my relationship with free speech really was...I realized something. Or more accurately, I questioned something that I’ve been unable to stop thinking about since it first crossed my mind: Do I even believe anything anymore, or am I just an amalgamation of the inflammatory hot takes I see online?


The algorithm, which rewards that inflammation and gifts virality to anyone willing to say something outrageous, true or not...that algorithm has shaped us.


CHARLIE: It's hard to articulate how that transforms people, because it transforms everyone a little bit differently. But there are multiple ways to think about that, from the sense of an average person becoming sort of a creator or a publisher or, you know, a poster, and trying to access that audience in some way. Some people are constantly striving, right? They're just trying to tap the algorithm, to give it what it wants, so that they can gain access to that sort of unprecedented level of virality and audience and attention, that pool of attention.


As our internet selves and our real-life selves become one and the same more every day...we’re as much products of these algorithms as the algorithms are products of us. We’ve always liked inflammatory speech—I mean, talking shit is fun, and gossip spreads like wildfire for a reason. But in the era of these massive dopamine rushes every time something you do takes off online, I wonder if we’re leaning too heavily into inflammation—are we spewing hot takes just for the sake of it?




I don’t think so, at least not all of us. But it’s an important idea to note: The internet and the platforms that make it up are inextricably linked to us. To ourselves. As we’ve talked about before, it’s really hard to not be online. The influence of our online existence is heavy on our real world selves.


CHARLIE: We have to sort of will these platforms a little bit into being the way that we want them to be. We are all, you know, ultimately sending lots of signals to these platforms about what we like and dislike. And I think a lot of times we're sending signals that ultimately make us a little bit miserable.


So the responsibility, in some ways, falls upon us. It is our responsibility to not inflame for the sake of it in real life just because that’s what works online. We run the risk of destroying civility should we apply the logic of the internet’s algorithmic amplification to our real world selves. Imagine if you only ever started sentences with “did you hear what happened to so and so…?”


But at the same time...we as humankind are a complex, rarely homogenous ragtag group. To expect that we, as in we billions of internet users, can find consensus and self-regulate online? Never going to happen. Because of that, we have to hold platforms to higher standards.


Tech platforms have created the monster of online speech, and the responsibility to tame it falls to them.


I’m tired of Mark Zuckerberg telling the world he’s not “the arbiter of truth,” and it’s clear he’s not giving that up anytime soon. So instead of asking Zuck and his peers to be arbiters of truth, I instead ask them to be arbiters of reach. I don’t ask our technocrats to tell me what’s true and what’s not, but I do ask them to stop rewarding what’s said only in the heat of passion and the pursuit of virality.


People like Mark Zuckerberg have more power than they let on. For years, Zuck refused to take action against Holocaust deniers on his platform, even though, as a Jewish man and a person with a brain, he openly loathed their position. For too long, he failed to acknowledge the difference between false and dangerous.


In another example cited by Vox: “The Pizzagate theory claiming a child sex ring was being run by Democratic politicians out of a DC pizza shop was an absurd conspiracy, but it led to someone showing up at the pizza shop with a rifle.”


We can’t permit that to happen again.


Rewrite your code. Do better. Free speech? Not going anywhere, at least not in this lifetime. But free reach can change, and it should change.


People? They don’t change unless they’re conditioned to. The laws rarely change either, though we know they’re capable of being tweaked to better reflect free speech values in an online world.


The algorithms, the platforms behind them—they have to be the ones to change. Freedom of speech should not be eradicated, and the decision to eradicate or not should not be up to the Mark Zuckerbergs of the world. They can’t rewrite the Constitution, but they can rewrite their code.


Reach...that is something we should ask our technocrats to consider more thoughtfully. We can ask them not to muzzle us or silence us, but to be more considerate about which of us gets the bigger megaphone.


They can better identify clear and present danger, just as we have for a century now. And when they do? They can stomp it out immediately. No more “flagging,” no more “identifying as potentially false but important to the news cycle.” They can control how many impressionable eyeballs see dangerous content.


They can preserve political speech and healthy debate and dissent without amplifying obvious, flagrant examples of shouting fire online. And in the rare cases platforms have exercised that responsibility, it works.


According to the NYT’s examination of Trump’s 10 most popular written statements containing election misinformation, “Before the ban, Mr. Trump’s posts garnered 22.1 million likes and shares; after the ban, his posts earned 1.3 million likes and shares across Twitter and Facebook.”


If we want to tamp down on dangerous or patently false speech, all we have to do is tweak the systems that amplify that speech. We might not ever be able to solve misinformation or disinformation or, frankly, stupidity, but we can stop that kind of speech from exerting influence on the masses. And it starts with the platforms that gave those words voice in the first place.


I know the common retorts to the argument that tech should take more responsibility: They’re liberals, we can’t trust them! They’re capitalist endeavors, we can’t trust them! They’re just vectors for technological progress, we can’t expect morality from them!


On that last one especially, I have to pause. And I have to reference a piece Charlie Warzel wrote in August, the piece that inspired this episode: “Technology isn’t good or bad, it just is. This is a standard line from the tech industry; they are arguing that their products are mostly neutral. Technology companies make tools, and the tools end up in the hands of countless people who will all use the tools as they see fit. Some will do great things and others will do bad things. [Instagram’s Adam] Mosseri does argue that they are trying to mitigate the bad things but, frankly, he and the company are destined to fall short if they truly believe that ‘technology isn’t good or bad, it just is.’”


What a sad and uncreative excuse for a failure to protect your users. 


It’s the same logic, as Charlie pointed out, as “Guns don’t kill people; people kill people.” Yes, and perfectly reasonable people think we shouldn’t enable anyone suffering from mental illness or rage or derangement to buy an automatic weapon. The same can and should be said for social media—we cannot enable abusers and spreaders of misinformation and those who would foment violence with the biggest of megaphones. 


I’ll again reference the work of Jillian York: “Despite what Senator Ted Cruz keeps repeating, there is nothing requiring these platforms to be neutral, nor should there be. If Facebook wants to boot Trump—or photos of breastfeeding mothers—that’s the company’s prerogative.”


Tech already exhibits morality. It’s time these companies start caring about the right moral causes. 


*Roll transition music*


I started this episode by admitting that I’m not sure a conclusion on online free speech was possible. Despite the last half hour we’ve spent together, it still feels nebulous and complicated...but I do think we’re getting closer. I think we’re better armed to recognize the nuance in all of this. We’re better equipped to have, as Charlie calls them, second order conversations.


CHARLIE: What do we do? Well, there's the idea of just completely deplatforming, or there's looking at the speech from the side of reach, and saying, okay, maybe it's about the reach they get. They have access to these tools, but they don't have access to that unlimited pool of attention, because they're not following the spirit of the platform, and they're not adhering to the rules of the road and debating in good faith. That's the second order conversation. And so I think we're in the process of having some of that second order conversation now, and then there'll be a third order to it. And it'll keep getting more and more sophisticated as we succeed, and also as we fail here.


This isn’t a conversation with a beginning, middle, and end. This is a new state of being. A new understanding of what it means to speak freely as humans who live both online and off.


When abolitionist Frederick Douglass characterized free speech as the “dread of tyrants” in 1860, he wasn’t referring to Pizzagate or to proven false claims of election interference or to cyberbullying. He was speaking of free speech in its purest form—the freedom to speak for what’s right.


Not the freedom to say anything, true or not, just to go viral on the internet. If technology is a tool that can be wielded for good or for evil, it’s time we start demanding good. It’s time we start recognizing the difference between speech and reach and holding tech leaders to it.


I think that starts with getting a little more comfortable with our Report and Block trigger fingers. I know it’s unlikely I’ll get a private audience with Mark Zuckerberg anytime soon...but I also know that I have the freedom as a user to report the falsities of climate deniers and racists I see online. 


At the start of this episode, I told you I was ready to answer a question: Who gets to speak freely online? My answer...is everyone. I think we should, in most cases (because again, I don’t want to over-generalize), be free to express ourselves truthfully online.


But that enormous privilege to express ourselves freely comes with responsibility. Responsibility to hold Facebook and Twitter and Instagram accountable. Responsibility to ignore the siren song of viral hot takes that contribute nothing important. And responsibility to know that freedom of expression means nothing without thoughtfulness.


So take some time today to consider…

  • What does it mean to freely express oneself online?
  • How have the imperfections of our perception of free speech principles played out along party lines, generational lines, even platform lines? Do you ever think about how the algorithm might shape your version of reality?
  • Does clear and present danger look different from user to user? If so, how often are you consuming content from the “other” side of the conversation?


As always, I’m eager to hear how your conversations go. You know where to find me. Remember, thinking is cool and so are you. I’m Kinsey Grant and I’ll see you next time.