The Price Of One Tweet
The 2016 election of Donald Trump as president, a result that would unleash a Pandora’s box of eye-opening and systemic issues, presented every tech platform with a gnawing conundrum: how do you balance the ideals of neutrality with the need to regulate particularly dangerous speech from public figures, especially when the originator is the President of the United States?
Up until this point, no amount of complaints could shake Silicon Valley’s resolve. Jack Dorsey, Mark Zuckerberg, and others could not be swayed even in the face of mounting harassment claims on and off their respective platforms. The prevailing thought was that trolls had existed since the Internet’s inception; little could be done about that, but if you ignored them or deprived them of attention, then their ultimate effect would be inconsequential.
And so disparate ideas about acceptable content were mashed together into sporadically enforced user policies. Facebook and others vacuumed up data in droves, with little oversight from any regulatory body, and specific unsavory instances registered as mere blips on the industry’s ascendancy.
CEO after CEO harped on the dangers of tech companies regulating acceptable speech. They argued passionately that the pros outweighed the cons. They reiterated their commitment to neutrality again and again, insisting that it really was better this way and that they were doing it for the good of society. It was a win-win: tech companies of all sorts enjoyed explosive profits, consumers enjoyed universal instant connection, and everything was fine for a while.
In January of this year, Jack Dorsey, in an interview with Rolling Stone, responded to a question about harassment on Twitter with the quip, “We can’t be arbiters of truth.”
It is a crucial refrain that has underscored every move the tech industry has made up to this point. Even as companies were hit with scandal after scandal, from the careless to the malign, everyone wondered aloud: what would you have them do? They are here only to provide a service. Nothing more. Nothing less. In reality, though, it was a decade-long shirking of responsibility, a shrug that fit neatly into a philosophy that has guided tech since its early days: we are neutral observers, not active participants.
On April 23rd, 2019, the President of the United States posted a tweet portraying Representative Ilhan Omar, a duly elected Muslim American member of Congress, as supportive of the terrorists who perpetrated the attack on the World Trade Center in 2001.
The tweet spread like wildfire across the Internet, shared and reposted tens of thousands of times, drawing condemnation from a wide range of circles and reportedly prompting death threats against Representative Omar.
Twitter refused to take down the tweet, going so far as to have its CEO, Jack Dorsey, call Representative Omar and explain the reasoning behind the decision. He argued that it is in the public interest for certain figures not to have their tweets taken down, even when their discourse violates Twitter’s terms of service. He also pointed out that the video and tweet had already spread so far beyond Twitter that removal would have little to no effect.
It’s hard to argue with him. Robust public discourse, freedom from censorship, the marketplace of ideas, holding powerful groups to account: all of these are crucial not just to Twitter’s founding ideals but to America’s beginnings as well. It should never be in the government’s hands, or Twitter’s, to unilaterally decide what speech is acceptable, and the idea that anyone can get online and start writing their thoughts down is beautiful and integral to the modern world.
However, this reverence must be weighed against the cold fact that the price of a tweet, the price of speech, can be fatal. The toppling of governments, mass shootings, terrorism, political sabotage, assassinations, mob justice, and virulent propaganda have all either started online or been inspired by various online communities.
Groups have wielded these platforms to do immense good as well as immeasurable harm. Therefore, in my mind, the only workable defense for tech companies is action. Any alternative rings dangerously hollow and at worst results in deadly consequences. Twitter, Facebook, YouTube, and all the others can no longer sit on the sidelines, because the spotlight on them grows every single day. Their platforms can cause real harm, in some ways enabling stochastic terror. It’s time to take action or risk being regulated into oblivion.
Thankfully, Silicon Valley has begun to wake up to these issues. Dorsey, only a month or so after that Rolling Stone interview, appeared on a podcast with Sam Harris, where, in response to a question about Twitter’s perceived bias, he said, “Ultimately, I don’t think we can be this neutral, passive platform anymore.” This month Mark Zuckerberg, on stage at Facebook’s F8 developers conference, remarked, “The future is private…”
This is a good sign, but so much more can be done. Jack, Mark, and everyone else: here are my thoughts on a few specific actions that could address these concerns.
Two seemingly opposing statements can both be true. Tech platforms (Twitter, Facebook, YouTube, etc.) can work hard to uplift and diversify speech while simultaneously contextualizing, suppressing, or, if necessary, banning problematic, dangerous speech.
People often reduce this to a binary choice: either these companies ban every non-compliant account or no one gets banned, because that, supposedly, is the only fair way of going about things.
The truth is that speech is complicated and always demands more context, context that is usually missing, whether intentionally or not. Suppressing speech in any form shouldn’t be taken lightly, and each case deserves attention. These issues are too important for half measures. That much we can all agree on.
Now there are two major changes tech platforms can make, both grounded in transparency, that could go a long way toward cracking the thorny issues that arise around suppression of speech.
First, acceptable use policies must be clearly defined, and a usable, robust appeals process must be put in place. Each situation is different, so implementing an industry-wide appeals process that is efficient and effective is key. I am willing to bet that much of the frustration that arises when discipline is handed out stems from the fact that most companies’ venues for appealing a suspension or ban are severely lacking or even nonexistent.
This is an industry-wide issue, and the industry needs to come together to solve it. Ideally, the solution could even take the shape of an industry-run, non-criminal “court” to settle these disputes. Tech platforms would all pay into its budget, and its sole mission would be studying and developing efficient, effective processes for handling large volumes of appeals in a fair manner.
This change would accomplish several goals: it would give frustrated users an avenue to make their case, relieve pressure on companies trying to always make the right call when suppressing accounts, and standardize an appeals process that today varies wildly and works poorly across the industry. Depending on how it’s structured, it could also cost less than each platform hiring legions of fallible content moderators to make decisions for it.
My second recommendation is the development of a simple enforcement policy. Clarity and transparency are equally important if the policy is to be effective without becoming too limiting. The policy would cover three steps: contextualize, suppress, and ban, with contextualizing the least restrictive and banning the most. What this means can vary a bit from case to case, but the overall idea is to follow these three steps, in order, when handing out discipline.
The first step should always be to provide immediate context. That could mean attaching a button to President Trump’s tweets, anti-vaccine accounts, or conspiracy accounts that surfaces immediate context for questionable posts or accounts. If a strike system is involved, it could also mean publicly displaying the number of strikes against repeat offenders, in the hope of making visitors think twice before interacting with or listening to an account. When the account belongs to an elected official or even the president, issuing press releases that make clear the company’s values and its stance on moderation is crucial.
The second step is suppression of content, a strategy similar to what YouTube and Facebook have done in the past to combat particularly problematic material. It should be used when content is especially graphic, disturbing, or pernicious in its reach, and would involve removing content or even accounts from search results, suggested follows, and public search engines, or forcibly making the account private in the hope of minimizing its reach. This option obviously needs to be weighed more heavily the more public and popular the figure, but no account should necessarily be exempt.
The final step, banning, should be taken only when all other avenues have been exhausted. The reasons motivating this drastic action should be made clear, pointing to exactly why it was taken and what steps the company took beforehand to mitigate the problem. This clarity is crucial and benefits both the user and the company: the user is not left in the dark, and the company can stand on solid footing when defending its action.
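To make the escalation concrete, the three-step policy could be sketched as a simple decision routine. Everything below is a hypothetical illustration of my own devising, not any platform’s actual system; the names, the one-strike thresholds, and the state tracked per account are all assumptions chosen for clarity.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    CONTEXTUALIZE = "contextualize"  # least restrictive: attach a context label
    SUPPRESS = "suppress"            # pull from search, suggestions, recommendations
    BAN = "ban"                      # last resort, with a public rationale

@dataclass
class Account:
    handle: str
    strikes: int = 0        # prior violations under a hypothetical strike system
    suppressed: bool = False

def next_action(account: Account) -> Action:
    """Escalate through the three steps in order: a first violation gets
    context, a repeat offense gets suppressed, and banning comes only
    after both lighter measures have been exhausted."""
    if account.strikes == 0:
        return Action.CONTEXTUALIZE
    if not account.suppressed:
        return Action.SUPPRESS
    return Action.BAN

def apply_action(account: Account) -> Action:
    """Decide the next step for a violation and record it on the account."""
    action = next_action(account)
    if action is Action.CONTEXTUALIZE:
        account.strikes += 1
    elif action is Action.SUPPRESS:
        account.suppressed = True
        account.strikes += 1
    return action
```

The point of the sketch is the ordering, not the thresholds: a real policy would weigh context, reach, and severity at each step, but discipline always moves from the least restrictive measure toward the most.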
I say all this to emphasize that our current system doesn’t work and doesn’t reflect the ideals we espouse as important. We want to promote robust conversation, but we also want to protect against dangerous hate speech.
No matter where you are on this, you must agree that something must be done, and I believe it all comes down to the fact that tech platforms can’t afford to be neutral. They also can’t be the ultimate arbiters of truth.
They must exist somewhere in the middle, not as neutral observers but as active informers.