I had an epiphany recently. It’s so basic that it’s almost laughable. It finally occurred to me that when we speak of “free speech,” we often don’t know what someone else means by it. Everyone has their own standard, and that leads to endless misunderstanding and conflict.
Take someone like me, for example. I consider myself a free speech absolutist—but perhaps not by someone else’s definition. For me, “free speech” means freedom from government prosecution: legal freedom. Even then, I acknowledge limits, as does the U.S. Constitution, particularly regarding incitement of violence. In fact, my personal boundaries are more restrictive than the law’s when it comes to that.
I also believe slander should have consequences, as it does under current legal frameworks. Beyond that, however, I think most speech should be permissible.
Private platforms, on the other hand, are a different story. I believe in maximizing the freedom to express opinions, but I don’t see it as a violation of “free speech” if platforms enforce rules against targeted harassment or hateful behavior. There’s a difference between expressing an unpopular or even offensive opinion and actively ruining discourse through abusive conduct: targeted harassment, death threats (which I get from time to time), and name-calling in racist or homophobic screeds. (Some creative Nazi-wannabes even like to photoshop my photos to exaggerate certain features and add special fonts saying “you’re a Jew,” as if it were some insult. I already know I’m a Jew. I have no problem with that.) And if someone is especially keen, even racist ideas can be shared without resorting to the kind of toxicity that drives meaningful conversation into the gutter.
That said, I think it’s up to each platform to decide how restrictive or liberal it wants to be—and such directions should never come from the government. Think of it as a dinner party for a million of your closest friends.
Of course, some would call my perspective “pro-censorship,” arguing that anything short of a completely unrestricted free-for-all—like the chaos of a 4chan board—is an affront to free speech.
And we all have our points. Mine is this: When an environment is saturated with “noise” and personal attacks, it eventually drives away people with valuable contributions to make. What’s left is a breeding ground for toxicity and bad faith, a place where nothing meaningful can thrive. A verbal sewer.
But, as a counter to my own point, allowing “bad speech” to surface does serve a purpose. If you don’t see it, you can’t confront it. You can’t point someone else to it and say, “See, this is what’s going on.” (People used to deny my claims of rampant antisemitism, for example, until they started seeing it for themselves, in its crudest form.) You won’t know there are people who think that way, and you won’t be able to hold their ideas up to scrutiny.
Worse, suppressing such speech might drive these individuals into echo chambers where their views go unchallenged and fester, potentially radicalizing them further.
On the other hand, the opposite extreme—where everything is allowed—creates its own dangers. It emboldens bad actors, normalizes harmful behavior, and helps fringe groups find strength in numbers. That emboldening can spill over into real-life consequences, especially as it becomes easier for people, neo-Nazis for instance, to find each other and organize at scale—something that was far harder pre-Internet.
And while we might hope that good ideas will naturally win out in the marketplace of ideas, it’s a naive and idealistic hope—social media doesn’t work that way. The playing field is uneven. Algorithms amplify outrage and false claims, even smut, ensuring that the most sensational and harmful ideas often get the most attention, while reasoned counterarguments languish in obscurity. Sunshine might be a disinfectant, but only if it’s allowed to shine through.
Of course, the elephant in the room is how we even begin to define “bad ideas.” Everyone has a different definition. However, I believe we can draw the line at behavior, not opinion. Harassment is behavior. Name-calling is behavior. Challenging narratives, no matter how uncomfortable or controversial, falls squarely within the realm of opinion and should remain protected.
In the end, free speech is messy. And if we can’t even agree on what it means, how can we protect it in a way that balances freedom with responsibility, allowing ideas to flourish while safeguarding against harm? How do we navigate the chaos in a way that respects fundamental freedoms while fostering an environment where ideas—good and bad—can be exchanged, examined, and debated? The ultimate goal should be to build a culture where the strength of ideas, not the volume of the noise, determines their impact. But that’s easier said than done.
How do you define “free speech”?
As writers of a newsletter with the power to ban a commenter, we each exercise our own version of free speech. So my red lines are personal attacks and bigotry.
I like the way free speech has been codified by the Supreme Court over the years. But clearly my restrictions are much more severe on my newsletter. And if I see personal hate on Notes, I will report the author of the Note even if it has nothing to do with me.
So I don't think I have a good definition, because it's very forum- and jurisdiction-dependent. That means I'm also not going to be consistent. It's a great question, Katherine. I wish I had better answers.