Twitter: The Tragedy of #trending

Ever since the boom of social media platforms, more and more people have been hopping onto them to connect with friends and family, follow their favorite celebrities, keep up with the happenings around them, and speak freely about them.


Well, that last part is where things have been going wrong, and for quite some time now. The platforms aren’t doing anything about it, but hey, neither are we.

Section 230

Section 230 of the Communications Decency Act, passed in 1996, has two primary parts, both listed under 230(c) as the “Good Samaritan” portion of the law. Section 230(c)(1) states that a provider of an interactive computer service shall not be treated as the “publisher or speaker” of information from another provider. Section 230(c)(2) provides immunity from civil liability for providers that remove or restrict content from their services that they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”, as long as they act “in good faith” in doing so.

A brief history of how Section 230 came to be

While the social media companies put commendable effort into the second clause of Section 230, they more often than not reap the benefits of the first quite unethically, shielding themselves from the implications of allowing bullying and harassment on their platforms. I believe it is not an impossible task for these companies to find ways to stop such activities on their platforms, but I sometimes doubt whether they really want to.

I believe that any social system, be it virtual or real, has two sides to the coin of harmony. One side is the system, in this case the social media platforms themselves, and the other side is us, its members. Now, for this coin to balance, it needs to be unbiased, i.e. both sides must share equal responsibility for communal harmony. So, let’s see how each side can stand up to its share of responsibilities.

1. Accountability

The platforms can make the sign-up process require compulsory identity verification to put an end to fake accounts, and existing accounts can be instructed to do the same. If WhatsApp, for one, knows it can push its now-controversial privacy policy update as a mandate without worrying about losing its user base, I find it hard to believe other platforms couldn’t push a verification requirement. And if you’re wondering what happens to fan pages under this condition, they can simply be connected to an existing verified user on the platform for accountability.

A Typical Social Media Sign Up Page

As an effect of this, users will act more responsibly on the platform and think before taking action. Imagine yourself in the real world: do you just go and bully someone, or insult them? No, you don’t, because you know you can be held accountable for it. But in today’s virtual world, creating a fake email ID and a fake account is as easy as getting a glass of water. It is even easier to delete them. You can stalk someone, insult them, bully them, and be off with it. But with a verified account, the platform can always get back to you.

The point is that these platforms have virtually limitless tools to implement an acceptable code of behavior among their users; one only needs to write the code, something like the sketch below.
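To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what such a policy could look like at the data-model level. The Platform, User, and FanPage names and the id_check_passed flag are my own illustration of the proposal above, not how Twitter or any real platform actually works.

```python
from dataclasses import dataclass


@dataclass
class User:
    handle: str
    identity_verified: bool = False  # flipped to True only after an identity check


@dataclass
class FanPage:
    name: str
    owner: "User"  # every fan page is tied to a verified, accountable user


class Platform:
    def __init__(self):
        self.users = {}  # handle -> User
        self.pages = {}  # page name -> FanPage

    def sign_up(self, handle: str, id_check_passed: bool) -> User:
        # No verification, no account: the "compulsory identity verification"
        # condition described above.
        if not id_check_passed:
            raise PermissionError("Identity verification is required to sign up.")
        user = User(handle=handle, identity_verified=True)
        self.users[handle] = user
        return user

    def create_fan_page(self, name: str, owner_handle: str) -> FanPage:
        # Fan pages are allowed, but only when linked to a verified account
        # that can be held accountable for the page's activity.
        owner = self.users.get(owner_handle)
        if owner is None or not owner.identity_verified:
            raise PermissionError("Fan pages must be linked to a verified account.")
        page = FanPage(name=name, owner=owner)
        self.pages[name] = page
        return page

    def post(self, handle: str, text: str) -> str:
        # Posting is refused for unknown or unverified accounts, so every
        # post traces back to a real, accountable person.
        user = self.users.get(handle)
        if user is None or not user.identity_verified:
            raise PermissionError("Only verified accounts can post.")
        return f"{handle}: {text}"
```

In this toy model, sign_up, create_fan_page, and post all refuse to proceed unless the identity check has passed, so every action traces back to a verified account. A real platform would of course need a far more careful verification and privacy design than this.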

2. Moral Responsibility

Now, looking at the other side of the coin, we have ourselves, the users. And we have all clearly been misusing this freedom from the consequences of our social media actions. By now, you might be wondering why I put Twitter specifically in the title of this article. It was during the Covid-19 pandemic period of 2020–21 that I signed up on Twitter, but day by day I saw more and more hate speech, and even hashtags trending at the top filled with just that. It made me sick, so much so that I had to quit the platform for my own peace of mind. And this is hardly an untouched problem; every now and then we come across it in pop culture, and even Black Mirror highlighted it in the episode Hated in the Nation. And so the question arises: why don’t we think before posting something out there?

The Messy Business of Content Moderation

Check out this video and you might get an idea of how disturbing the content can get, so much so that it is common for the people working in content moderation for these platforms to develop mental illnesses. Some even meet premature deaths.

Oh, I almost forgot. How could we end this discussion without talking about “fake news”? Countless people keep “forwarding” posts without even worrying about the legitimacy of their claims. Even in this pandemic these things don’t stop. I have a cousin who keeps sending me these baseless posts. Many times I have tried to reason with her, all in vain. I ultimately had to block her, for my own sake. So yes, do spread awareness about things, but if you share some information, it is your implicit responsibility to verify it first, especially if it deals with a pandemic, vaccines, a disease, or other sensitive subjects.

Think Before You Type

You know, I won’t say much more; this is not something I need to write page after page about to make my point clear. So I will leave you with a question instead.

Security or no security, do we not have the moral values to abstain from such practices? Aren’t we, in the end, the real culprits?
