Internet and Free Speech

Yash Dubey
Feb 26, 2023

The internet. A vast expanse of everything and nothing. Sometimes a solace, often a void. Few inventions are as ubiquitous. Few have touched as many lives. None has upended our society in so many ways. Born out of DARPA’s research on packet switching to enable faster communication, the internet exploded like no technology before it, and it continues to set records in growth and adoption. Parts of the world with no electricity or roads boast smartphones and mobile internet. A constellation of satellites is being put into place as we speak to beam internet to every corner of the globe. But the fundamental tenets of the internet are facing a reckoning as the US Supreme Court squares up to Section 230.

Section 230 — the what and the why.

Section 230 of the US Communications Decency Act is a law that touches our lives every day. Yet we know little, and understand less, about its significance. Now, for the first time, the US Supreme Court will hear arguments and issue potentially landmark judgments on this law and its scope.

The law is remarkably short for such an important piece of legislation. It simply absolves users and providers of “interactive computer services” of liability for content published by others on those services. The implications are easy to see: social media platforms and search engines are the services most directly concerned. The law goes further, granting platforms broad moderation freedoms. Google, Twitter and Facebook, among others, have used it to justify both light- and heavy-handed moderation policies. However, the law, passed in 1996 and widely acknowledged as a key driver of the internet’s explosive growth as a social engine, has recently come under increasing scrutiny for its vague language and dated assumptions. It has become a symbol of the struggle between social media platforms’ laissez-faire approach and concerned citizens’ demands for a more regulated online experience.

Why is the law so important?

Imagine yourself as the owner of a community advertisement board. You supply the board and the thumb tacks but do not control or vet the ads that go on it. You cannot realistically be expected to check the intentions behind every single ad, nor to ensure that none are scams. That would simply be impractical and expensive, and eventually you would take the board down altogether. The community management realises early on that the board is in fact quite useful. They pass a law allowing you to moderate the board as you see fit and absolving you of responsibility for nefarious actors who misuse it. Now the service makes sense. You, as the owner, have an incentive to ensure people trust your board. Hence, you moderate it for useful content. The community management does not interfere. Everyone is happy. And the law works.

Why does the law not work anymore?

Soon, however, you realise that some ads generate more interest than others. Some evoke powerful emotions. Say your community is a little racist. Maybe a little sexist. You notice this and begin to display those ads more prominently. Those who are not racist are angered and stop by your board daily to vent their fury at the latest dog whistle. Your board is more popular than ever. It doesn’t matter that the ads are pushing the community apart; your board gets more attention than ever before. The community management realises that the board may be harmful and tries in vain to force you to moderate it. But you have become extremely powerful. You control information and you control what people see. You make promises to improve moderation. You “outsource” moderation to third parties but retain all decision-making power. You do not know who you are anymore.

Simply replace the board with any social media website or app and you have an explanation.

The law was designed in an era when recommendation engines were mere science fiction. Today, social media is driven almost entirely by recommendation engines; the platforms have evolved rapidly, leaving Section 230 in their dust. The crux of the argument now is the responsibility companies bear for recommending content. For pushing misogyny (see Andrew Tate), racism, hate, bigotry and inflammatory content. The algorithms work exactly as designed. They amplify controversy. They target primal fight-or-flight responses. And worst of all, nobody really understands why the algorithms behave this way. Not even the engineers who built them.
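To make that amplification loop concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example: the “angry reaction” signal, the scoring weights and the sample posts; no platform publishes its real ranking code. The point is only that a system told to maximise engagement ends up promoting the divisive post without anyone ever writing “promote outrage”.

```python
# Toy feed ranker: not any platform's real code, just an illustration of
# how optimising for engagement can amplify outrage as a side effect.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    shares: int
    angry_reactions: int  # hypothetical "outrage" signal


def engagement_score(post: Post) -> int:
    # The objective is only "maximise engagement". If outrage drives
    # shares and reactions, outrage wins, even though no line of code
    # ever mentions race, gender or politics.
    return post.likes + 2 * post.shares + 3 * post.angry_reactions


feed = [
    Post("Community bake sale this Sunday", likes=40, shares=2, angry_reactions=0),
    Post("THEY are ruining this neighbourhood", likes=25, shares=30, angry_reactions=50),
]

# Rank the feed: the divisive post scores 235 against the bake sale's 44.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>4} | {post.text}")
```

Note how this mirrors the analogy above: the board owner never chooses racism; the scoring rule does it for him.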

What’s Next?

The time of reckoning for Section 230 has been a long time coming. Repeated failures to moderate and prevent harmful content have left lawmakers frustrated and keen to curb the growing influence and power of internet giants. Facebook’s role in genocides, Instagram’s responsibility for teenage mental health issues and TikTok’s elevation of Andrew Tate are all likely to come under fire. The US Supreme Court may strike down the protections this law confers upon these entities and potentially change the face of the internet forever. Will it necessarily be better? It’s hard to say. There would likely be fewer genocides fuelled by hate on social media. There might be no possibility of another election being hijacked and manipulated. Is this a cost too great to pay? Some, such as Microsoft and Reddit, argue that it is. Some, surprisingly including Facebook, argue that it isn’t. Ultimately, making social media giants responsible not for the content they host but for the content they amplify may do us all a lot of good.
