Social Media and Freedom of Speech

Author(s): Priyanshi Gupta

Paper Details: Volume 2, Issue 2

Citation: IJLSSS 2(2) 14

Page No: 153 – 167

ABSTRACT

The internet constitutes a world of its own, with very different means of expressing an individual’s ideas. The pervasive influence of social media in modern society has significantly transformed the way individuals communicate, express opinions, and access information. However, this transformation has also raised concerns about the limits of freedom of speech in the digital age. For instance, we can now voice our concerns about unexplored topics that are in dire need of attention on a much larger scale through social media platforms, but what if the cause itself is immoral or illegal? How do social media platforms define and regulate hate speech, misinformation, and other forms of restricted speech?

Apart from answering these questions, the paper will delve into legal frameworks that govern freedom of speech online.

Additionally, it will examine how social media algorithms affect the visibility of certain types of speech, amplifying or silencing marginalized voices. It will also discuss effective laws around the world and the role of governments, including concerns about regulatory overreach and the potential use of regulation as a tool to suppress dissenting voices or control the flow of information, along with policy recommendations. The paper will attempt to analyse the challenges likely to shape the future of freedom of speech on social media platforms and their consequences in various domains, and to provide viable solutions for striking a balance between fostering free expression and ensuring responsible communication in the digital era.

Keywords: social media, freedom, censorship, regulation, hate speech.

INTRODUCTION

Digital technology has significantly changed the way we socialize, study, work, and communicate. For example, in terms of communication, technologies like social media, email, and video conferencing have completely changed the way we communicate with each other and made the world a smaller place. The internet has revolutionized the way we learn and acquire information by making a tremendous amount of knowledge easily accessible through search engines and online databases. The advent of e-commerce platforms has revolutionized the way we shop by facilitating digital payments and online transactions.

With the advent of virtual classrooms and online learning environments, digital technology has also completely transformed education, increasing its flexibility and accessibility. Furthermore, the workplace has changed as a result of digital tools, which have made remote work possible and altered the conventional office setting. New avenues for social engagement and amusement have been made possible by social media, internet gaming, and streaming services. Telemedicine, wearable technology, and health-tracking applications have completely changed the way we may get medical care remotely and monitor our health. Digital technology will probably have an even greater impact on how we live, work, and interact in the future as it develops.

Louis Rossetto, self-proclaimed “troublemaker” and founder and former editor-in-chief of Wired magazine, summed it all up this way: “Digital technology is so broad today as to encompass almost everything. No product is made today, no person moves today, and nothing is collected, analyzed or communicated without some ‘digital technology’ being an integral part of it. That, in itself, speaks to the overwhelming ‘value’ of digital technology. It is so useful that in short order it has become an integral part of all of our lives. That doesn’t happen because it makes our lives miserable.”[1]

Traditional means of communicating ideas have been significantly altered by the development of the internet and digital technology. Digital technology has altered how oral and written communication, artistic expression, cultural practices, physical expression, and symbols are utilized and interpreted, even though these modes are still important.

Internet-based digital communication is almost instantaneous and can be sent worldwide, unlike older techniques that frequently take longer and involve more work to reach a smaller audience. Furthermore, compared to traditional communication, digital communication offers instantaneous response and interaction through features like shares and comments, allowing for increased participation.

The durability and accessibility of digital communication is another distinction. Online messages are easily saved and recovered, while handwritten letters or oral conversations are more transient and need to be physically stored. Furthermore, older ways of communication could require more resources, such as paper or postage, whereas digital alternatives are frequently more economical and efficient, requiring only a device and an internet connection.

Moreover, traditional means of communication are usually restricted to particular media types, but digital communication offers a wide range of expressive media, including text, photos, audio, and video. Another view is that digital communication, which can come across as impersonal and prone to misinformation, lacks the authenticity and trustworthiness of conventional forms, particularly face-to-face contact.

Furthermore, the way that people choose to express themselves can vary depending on which conventional or digital forms of communication are valued more in a given culture or community. Even with the advances in digital communication, traditional modes of communication are still important because they have special traits that digital approaches might not be able to fully capture.[2]

HOW HAS SOCIAL MEDIA PROVED TO BE A BOON FOR US?

Thanks to social media platforms, people now have a tremendous instrument to communicate their concerns about overlooked topics on a far larger scale than was previously possible.

  • Global Reach: Social media gives users the ability to instantaneously connect with people around the world. This implies that issues that are underrepresented or esoteric can nonetheless become popular and visible outside of traditional or local media outlets. For instance, social media campaigns like #MeToo[3] and #BlackLivesMatter[4] began as grassroots movements and spread throughout the world, igniting debate and bringing about change on significant social issues.
  • Amplification of Voices: People who might not have access to traditional platforms can have their voices heard more loudly thanks to social media. By sharing their opinions and experiences on social media, members of underrepresented groups or people with little money can inspire empathy and support from a larger audience. This has been especially clear in movements supporting environmental action, Native American rights, and LGBTQ+ rights.
  • Engagement and Interaction: Direct engagement and interaction between people and communities is made possible by social media. This promotes a feeling of community and makes it possible for knowledge and ideas to be shared. Social media sites like Facebook and Twitter, for instance, have played a significant role in arranging protests, rallies, and other types of activism, bringing people together to support change. In the Middle East, social media has proven crucial in igniting activism and social movements. For instance, social media sites like Facebook and Twitter were used to plan demonstrations, disseminate news, and coordinate attempts for political change during the Arab Spring upheavals.
  • Freedom of Expression: People can now openly communicate their thoughts and opinions on social media, frequently getting around limitations on traditional media. This has been especially crucial in nations with restricted freedom of expression, enabling people to talk about touchy subjects and push for reform.
  • Information Dissemination: Social media makes it possible for information to spread quickly. This can be especially helpful in bringing new issues or incidents to light that need to be addressed right away. Social media, for example, can be used to coordinate relief efforts and provide affected people real-time updates during natural disasters or humanitarian crises.

Social media has been instrumental in elevating voices and bringing attention to critical issues in the Middle East, such as the recent protests in Iran that followed the death of Mahsa Amini[5]. Mahsa Amini was a young woman who was arrested by Iran’s morality police for allegedly wearing her hijab improperly and died while in police custody. After learning of her death, many Iranians took to social media to voice their grief and demand justice, leading to widespread protests and outcry.

Demonstrations and rallies were planned, and news and updates about the protests were disseminated via social media sites like Telegram, Instagram, and Twitter. Hashtags like #MahsaAmini and #NoForcedHijab were employed to raise awareness and get support from people in Iran and throughout the world.

Social media’s importance as a potent instrument for activism and social change in the Middle East is shown by its use in the Mahsa Amini protests. It gives people a forum to voice their opinions, disseminate information, and organize others—often in the face of governmental control and limitations on conventional media. Social media can amplify voices and raise awareness about key issues, and this makes it a powerful agent for change in the region, despite its limitations and challenges.

But every coin has two sides; similarly, every powerful tool can be misused in a number of ways, and that is where the need for regulation arises.

HOW CAN SOCIAL MEDIA PLATFORMS BE EXPLOITED?

Terrorist organizations have become increasingly adept at using the internet to recruit and radicalize new members, taking advantage of its anonymity and worldwide reach. Here are a few examples that highlight this issue:

  • Online ISIS Recruitment: It is well known that the Islamic State of Iraq and Syria (ISIS) actively recruits new members using social media sites like Facebook, YouTube, and Twitter. To get people from all across the world to support their cause, they have employed highly skilled online propaganda tactics.
  • Al-Qaeda Online Presence: The internet has been used by Al-Qaeda and its affiliates to recruit new members. They have spoken with possible recruits and spread their extreme ideology through chat rooms, encrypted messaging applications, and internet forums.
  • “Inspire” Magazine: “Inspire,” an English-language online magazine published by Al-Qaeda in the Arabian Peninsula (AQAP)[6], aimed to radicalize and recruit people in Western nations. The magazine urged readers to act and offered instructions for carrying out terrorist attacks.
  • Online Radicalization of Individuals: There have been several documented instances in which people were radicalized online and motivated to commit acts of terrorism. For instance, it has been reported that online jihadist propaganda served as inspiration for the perpetrators of the 2015 San Bernardino attack[7] in the United States.

Terrorist organizations have made use of social media to interact with potential recruits and disseminate their message. Although platforms like Facebook, Twitter, and YouTube have made efforts to remove terrorist content, the problem persists.

In order to combat online extremism while upholding freedom of speech and privacy rights, governments, tech corporations, and civil society must work together. These incidents demonstrate how the internet plays a role in terrorist recruiting and radicalization.

SO, HOW DO SOCIAL MEDIA PLATFORMS DEFINE AND REGULATE HATE SPEECH, MISINFORMATION, AND OTHER FORMS OF RESTRICTED SPEECH?

Social media platforms use a combination of policies, technology, and human moderation to regulate hate speech. Some of them are discussed below:

  • Community Standards: There are guidelines defining what is and isn’t permitted on social media sites such as Facebook, Twitter, and YouTube. These rules frequently forbid hate speech, which is described as expressions that denigrate or encourage violence against someone on the basis of traits including sexual orientation, race, or ethnicity.
  • Reporting Mechanisms: Content that users feel goes against the community standards of the platform can be reported. Usually, after reviewing these reports, the platforms take appropriate action if the content is determined to be in violation of their regulations.
  • Automated Tools: Machine learning algorithms and other automated tools are used by platforms to identify and eliminate hate speech. These programs search for wording in posts and comments that might be against the platform’s rules.
  • Human Moderators: Human moderators are employed by platforms to examine content reports and determine whether or not they breach community standards. Automated technologies could overlook the context and subtleties that human moderators can offer.
  • Transparency Reports: A lot of platforms release reports detailing the content they remove and their reasoning. Users can hold platforms responsible for their activities and gain insight into how policies are being enforced by using these reports.
  • Collaboration: In order to create guidelines and procedures for preventing hate speech, platforms frequently work with authorities, non-governmental organizations (NGOs), and specialists. By working together, platforms can stay informed about new dangers and best practices.
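Illustratively, the moderation workflow described above — automated screening first, with user reports escalating content to human review — can be sketched in a few lines of code. This is a toy illustration only; the blocked-term list, the status labels, and the escalation logic are hypothetical and do not reflect any platform’s actual policy or tooling.

```python
# Toy moderation pipeline: automated keyword filter plus a human-review
# escalation path for user-reported content. All terms and labels are
# hypothetical placeholders, not any platform's real policy.

BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder terms for illustration


def automated_check(post: str) -> str:
    """First pass: flag posts containing blocked terms."""
    words = set(post.lower().split())
    if words & BLOCKED_TERMS:
        return "flagged"
    return "allowed"


def moderate(posts: dict, user_reports: set) -> dict:
    """Combine automated flags with user reports; escalate to human review."""
    decisions = {}
    for post_id, text in posts.items():
        status = automated_check(text)
        # User reports escalate otherwise-allowed content to human review,
        # since automated tools can miss context and nuance.
        if status == "allowed" and post_id in user_reports:
            status = "human_review"
        decisions[post_id] = status
    return decisions


posts = {1: "hello world", 2: "contains slur1 here", 3: "subtle context"}
reports = {3}
print(moderate(posts, reports))  # {1: 'allowed', 2: 'flagged', 3: 'human_review'}
```

The sketch mirrors the division of labour described above: cheap automated checks run on everything, while ambiguous, user-reported content is routed to human moderators who can weigh context.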

These strategies can lessen hate speech on social media, but there are still issues that must be resolved, like striking a balance between users’ right to free speech and their desire to be shielded from offensive material. To solve these issues and provide safer online spaces, platforms are always improving their procedures and regulations.

WHAT LEGAL FRAMEWORKS GOVERN FREEDOM OF SPEECH ONLINE?

There are certain significant ways in which the United States differs from the other jurisdictions under review. “Restrictions on free speech by the government and public authorities are prohibited by the First Amendment of the US Constitution.”[8] Only narrow categories of speech fall outside this protection, such as incitement to imminent violence. However, private actors like social media platforms remain free to impose their own speech limits despite the First Amendment. Because social media platforms are not regarded as publishers of the content uploaded to their websites under Section 230 of the Communications Decency Act of 1996[9], they are further shielded from liability for user-generated content.

Hate speech is illegal in the UK in a number of ways, both online and offline. Speech that is disparaging on the basis of race, ethnic origin, religion, or sexual orientation is prohibited by the Crime and Disorder Act[10], the Public Order Act[11], the Malicious Communications Act 1988[12], and the Communications Act 2003[13]. A recent White Paper proposes regulating online media by imposing a duty of care on social media platforms and establishing a regulator to enforce that duty. A significant worry is the White Paper’s open-ended list of online harms and the wide spectrum of businesses covered, which risk overwhelming the regulator and resulting in highly selective enforcement.

The e-Commerce Directive[14], enacted by the European Union, forbids imposing a general obligation on platforms to monitor material prior to publication. This provision influences and moulds the evolution of regulatory activity throughout Europe. The European Union is also investigating other avenues for social media regulation. It has so far signed a Code of Conduct on Countering Illegal Hate Speech Online with Facebook, Twitter, YouTube, Instagram, Microsoft, Snapchat, Google+, and Dailymotion. It has also released a Communication on Tackling Illegal Content Online – Towards Greater Responsibility of Social Media Platforms. Under the Code of Conduct, these businesses undertake to review, and where necessary remove, notified unlawful content within 24 hours.

Social media companies are required by the German Network Enforcement Law (NetzDG)[15], passed in 2017, to set up complaint-handling procedures that are efficient, transparent, and timely. Content that violates the German Criminal Code must be blocked or removed within a specified period. The timeframe depends on whether the content is manifestly unlawful and on whether the platform cooperates with an established body of industry self-regulation. Systemic flaws in the complaints-management system, such as regularly missing deletion deadlines or disregarding the reporting and transparency requirements, can result in fines of up to 50 million euros.[16]

WHAT ABOUT INDIA?

India has a long history of regulating hate speech, dating back to its independence movement, when the right to free speech and expression was regarded as an essential freedom. Laws and rules against hate speech were introduced because societal disputes were perceived to stem from unrestrained expression. To curb the growth of intercommunal violence, Section 153A of the IPC[17] was enacted during the colonial era and became the first statute regulating hate speech. Section 153A outlawed the promotion of communal discord and penalized inciting animosity between different religious groups.

India adopted the Indian Constitution in 1950, following its independence and emergence as a sovereign state in 1947. Article 19 of the Constitution protected the right to freedom of speech and expression, subject to certain reasonable restrictions. These restrictions were put in place to prevent incitement to commit offences, contempt of court, and defamation. Kedar Nath Singh v. State of Bihar[18], a seminal decision from 1962, tackled the problem of striking a balance between the government’s power to impose reasonable restrictions and the fundamental right to freedom of speech. The Indian Supreme Court ruled that speech that was merely hurtful or unpleasant did not qualify as hate speech and should be outlawed only if it incited violence or public disorder.

The government has also added several sections to the IPC to control hate speech on the internet in order to keep pace with the times. Section 505 was added to restrict speech that creates public disorder, and Section 295A was adopted to forbid speech that deliberately insults a particular religious community. To counteract hate speech on the internet, the government later introduced the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.[19]

India has long struggled to control hate speech because of its diverse population and society. Hatred has historically been disseminated through a variety of mediums, including social media. In response, the Indian government passed legislation governing this kind of speech. For example, speech that incites violence, fosters animosity amongst various groups on the basis of religion, race, place of birth, domicile, language, etc., and causes discord amongst communities is forbidden by the Indian Penal Code (IPC), 1860. Social media content is also regulated by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. But even with these laws and rules, Indian courts face a number of difficulties when it comes to controlling hate speech on social media.

The obstacles to regulating hate speech on social media include the difficulty of identifying and prosecuting offenders, the subjectivity of the definition of hate speech, the limited liability of intermediaries, and the complexity of enforcing laws in diverse nations such as India. Hate speech is directed at particular groups on the basis of their political views, gender, sexual orientation, race, or religion. In India, the right to free speech is considered fundamental, although it can be restricted, especially when it comes to hate speech. Despite their differences, both hate speech and defamation can be harmful to individuals and communities.

The most recent regulation to control social media and digital media platforms was recently introduced by the Indian government and is known as the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The regulations mandate that social media companies designate grievance officers, a nodal officer, and a compliance officer to oversee adherence to the standards, as well as remove or disable access to content that is considered illegal under Indian law within a specified time limit. Nonetheless, the regulations have drawn criticism from a number of sources for endangering people’s right to privacy and free speech. It has been suggested that the regulations give the government undue authority to control social media material and allow intermediaries an unreasonable amount of discretion in removing content.

Conflicts with the government have resulted from high-profile examples in which social media giants such as Twitter and Facebook have violated the new regulations. Additionally, social media businesses have received a warning from the Indian government to abide by the regulations or risk legal action. In the meantime, hate speech on social media is still a problem in India. Social media platforms have seen multiple incidents of hate speech and propaganda that have sparked conflict across religions and even acts of violence. In an effort to find the violators and take appropriate action, the government and law enforcement organizations are closely watching these sites.[20]

There have been widespread concerns regarding the new rules. The broad definitions of “unlawful” and “harmful” content in India’s IT Rules, 2021 raise worries that lawful content may be blocked. Because intermediaries must actively monitor and remove information, the rules also raise concerns about a potential chilling effect on free speech. The requirement for intermediaries to collect and retain user data raises privacy and security concerns and invites increased surveillance. Finally, because there is no provision for appealing decisions on content removal or disablement, the rules do not give users sufficient due-process protections.

The IT Rules 2021, according to critics, could restrict free speech and expression by granting the government extensive authority to filter internet content. Under the guidelines, social media companies must remove content within 24 hours of receiving a complaint, a requirement that has raised questions about censorship and the rules’ vagueness. Furthermore, smaller firms are burdened by the restrictions, which apply to all social media intermediaries regardless of size. Critics perceive the regulations as adding more red tape for social media businesses operating in India and as potentially having a detrimental effect on foreign investment in the country.[21]

HOW DO SOCIAL MEDIA ALGORITHMS IMPACT THE VISIBILITY OF CERTAIN TYPES OF SPEECH, AMPLIFYING OR SILENCING MARGINALIZED VOICES?

This is largely determined by the ideology the organization follows and the ethics it seeks to promote. However, this approach is not free from loopholes, because ideals are subjective and can sometimes lead to the promotion of ideas inconsistent with public morality. This can be understood from the example of Afghanistan’s de facto government, the Taliban. Twitter and YouTube remain open platforms for the group to disseminate content, but Facebook has designated the Taliban a “dangerous organization” and routinely deletes pages and accounts linked to them. Facebook has declared that it will keep removing anything associated with the Taliban from its networks.

Tweets extolled the group’s recent successes, albeit somewhat prematurely, and promoted several hashtags, such as #kabulregimecrimes (appended to tweets accusing the Afghan government of war crimes); #westandwithTaliban (an effort to rally public support); and #نصر_من_الله_وفتح_قريب (“help from God and victory is near”). At least in Afghanistan, the first hashtag became popular. When the Taliban first took control of Afghanistan in 1996, the internet was outlawed and video cassettes, cameras, and television sets were seized or destroyed. Al-Emarah, the official website of the Islamic Emirate of the Taliban, was established in 2005 and currently publishes information in English, Arabic, Pashto, Dari, and Urdu. The cultural committee of the Islamic Emirate of Afghanistan (IEA), led by spokesperson Zabihullah Mujahid, is in charge of its audio, video, and written content.

Twitter suspended Zabihullah Mujahid’s first account, although he has over 371,000 followers on his current account, which has been active since 2017. Beneath him is a committed group of volunteers spreading the Taliban’s message online. Even though the Haqqani Network has been designated an international terrorist organization by the US State Department, several of the group’s members, including the senior figure Anas Haqqani, have thousands of followers on Twitter.[22]

Speaking to the BBC under the condition of anonymity, a member of the Taliban’s social media staff said that the group resolved to utilize Twitter seriously in order to publicize an opinion piece that the deputy leader of the group, Sirajuddin Haqqani, had written for the New York Times in February 2020. That’s when the majority of the Taliban’s current Twitter accounts were established.

WHAT IS THE ROLE OF GOVERNMENTS?

Government regulation of internet content is a complicated topic that raises worries about overreach of authority and the possibility that regulations may be used to silence critics or restrict the flow of information.

Governments, for example, have a right to make sure that content on the internet conforms with laws and rules against hate speech, terrorism, and child exploitation. In addition to providing users with protection from hazardous information, regulation can also guarantee fair competition for online enterprises.

There are worries, meanwhile, that governments might utilize regulations to suppress dissenting opinions or prohibit lawful communication. Governments, for instance, have the authority to ban any content they find offensive by enacting laws that define criminal content broadly. This could erode democratic values and have a chilling impact on free expression.

Governments can also sway public opinion and manage the flow of information by regulating it. For instance, they might mandate that content that supports opposing views or criticizes the government be removed from platforms or have access to it restricted. This may restrict the variety of voices that are heard online and make it more difficult for people to access a broad range of data and viewpoints.

The Taliban government in Afghanistan has a track record of utilizing regulations to silence dissident voices and manage information flow. The Taliban imposed severe limitations on communication and the media during its previous regime, which lasted from 1996 to 2001. They outlawed music, television, and the majority of internet access. In addition, they suppressed anything they considered to be anti-Islamic or critical of the government, depriving the Afghan people of their right to free speech and knowledge.

Fearing that the information could be used against them, many Afghans who worked for foreign forces, organizations, or the media, or who publicly criticized the Taliban on social media, have hurriedly deleted their accounts. The human rights groups Amnesty International and Human Rights Watch claim to have already heard of Taliban members allegedly tracking down and killing individuals in retaliatory attacks.

The Iranian regime has also come under fire for imposing regulations to stifle dissent and manage information flow. The internet is severely restricted by the Iranian government, which prevents users from accessing thousands of websites, including news and social media sites. Additionally, they keep an eye on internet activity and detain those who voice opposition to the government or criticize it. Furthermore, the government has enacted laws that define criminal content broadly, granting them considerable authority to prohibit online speech they find undesirable.

These instances draw attention to worries about the excess of government power and the possibility that regulations could be utilized as a means of stifling dissident voices and limiting the free exchange of information. It has also been reported that the Iranian regime targets internet opponents. For instance, the Iranian authorities repressed demonstrators who utilized social media as a means of communication and organization during the 2009 Green Movement demonstrations[23]. For their internet actions, many people were harassed or detained, and the government employed sophisticated techniques to find and apprehend regime critics.

There are several obstacles facing free speech on social media platforms in the future:

  • Content Moderation: Defining what is and is not hate speech or false information is difficult. Social media sites like Facebook and Twitter have come under fire for their inconsistent moderation policies.
  • Algorithmic Bias: Certain voices or points of view can be unintentionally amplified by algorithms. For instance, YouTube’s recommendation system has been criticized for feeding conspiracy theories.
  • Government Regulation: Attempts to control social media raise questions about censorship. For example, the IT Rules 2021 in India have drawn criticism for potentially restricting free speech.
  • Privacy Concerns: The gathering and use of personal information can encourage self-censorship, affecting the right to free speech.
  • The Digital Divide: Differences in people’s access to digital tools can restrict their capacity to engage in online conversation.
  • Political Polarization: Social media sites have come under fire for feeding polarization by promoting extremist content.

Addressing these issues will take a sophisticated strategy that balances the right to free expression against the need to remove dangerous content and encourage constructive online conversation.

CONCLUSION

There are significant ramifications for freedom of speech on social media platforms in several different fields. Political discourse can be distorted by disinformation and algorithmic prejudice, which can polarize people and undermine democratic processes. Hate speech and online harassment can cause animosity and division between groups, which is detrimental to societal cohesiveness. The financial sustainability of social media platforms can be impacted by regulatory uncertainty and the difficulty of moderating material, which can stifle innovation and competitiveness in the digital sector. Furthermore, users’ engagement with online platforms may be hindered by worries about privacy and data security, which could negatively affect their revenue streams.

The digital divide can exacerbate disparities in access to civic engagement, work opportunities, and education by restricting access to information and online discourse. By restricting or silencing dissident voices or censoring lawful communication, government regulations and content moderation policies can have a chilling effect on free speech and democracy. The dissemination of false information and extremist content on social media platforms can affect international relations and security on a worldwide scale. A multi-stakeholder approach involving governments, social media platforms, civil society, and users will be necessary to address these repercussions. Strategies for effectively promoting freedom of speech while tackling the issues of the digital age must be developed.

In the digital age, striking a balance between promoting free speech and guaranteeing responsible communication calls for a multidimensional strategy. Social media companies need to set up explicit and transparent content moderation guidelines, update them frequently, and notify users of changes clearly and concisely. Algorithmic transparency is essential, requiring platforms to disclose how their algorithms work and how they affect the visibility of content. Users should be given tools to report offensive content and the media literacy to recognize false information. Governments, platforms, civil society, and users must work together to share resources and best practices for combating harmful content efficiently. Educational initiatives that foster digital literacy and raise awareness of online hazards are crucial, as is the creation and implementation of legislative frameworks that safeguard free speech while addressing harmful content. Platforms must be held accountable for their actions, with regular audits and reporting to ensure transparency in their enforcement procedures. Together, these steps can create a digital space that respects freedom of speech, encourages responsible communication, and protects users from harm.[24]


[1] The positives of digital life, available at: https://www.pewresearch.org/internet/2018/07/03/the-positives-of-digital-life/ (Visited on March 31, 2024)

[2] Coronavirus disease (COVID-19), available at: https://www.who.int/health-topics/coronavirus#tab=tab_1 (Visited on March 31, 2024)

[3] MeToo movement, available at: https://en.wikipedia.org/wiki/MeToo_movement (Visited on March 31, 2024)

[4] Black Lives Matter, available at: https://en.wikipedia.org/wiki/Black_Lives_Matter (Visited on March 31, 2024)

[5] Death of Jina Mahsa Amini, available at: https://www.britannica.com/biography/death-of-Jina-Mahsa-Amini (Visited on March 31, 2024)

[6] Inspire (magazine), available at: https://en.wikipedia.org/wiki/Inspire_(magazine) (Visited on March 31, 2024)

[7] San Bernardino shooting, available at: https://edition.cnn.com/specials/san-bernardino-shooting (Visited on March 31, 2024)

[8] Permissible restrictions on expression, available at: https://www.britannica.com/topic/First-Amendment/Permissible-restrictions-on-expression (Visited on March 31, 2024)

[9] Communications Decency Act and Section 230 (1996), available at: https://firstamendment.mtsu.edu/article/communications-decency-act-and-section-230/#:~:text=The%20Communications%20Decency%20Act%20was,included%20fines%2C%20imprisonment%20or%20both. (Visited on March 31, 2024)

[10] Crime and Disorder Act 1998, available at: https://www.legislation.gov.uk/ukpga/1998/37/contents (Visited on March 31, 2024)

[11] Public Order Act 1986, available at: https://www.legislation.gov.uk/ukpga/1986/64 (Visited on March 31, 2024)

[12] Malicious Communications Act 1988, available at: https://www.legislation.gov.uk/ukpga/1988/27/contents (Visited on March 31, 2024)

[13] Communications Act 2003, available at: https://www.legislation.gov.uk/ukpga/2003/21/contents (Visited on March 31, 2024)

[14] e-Commerce Directive, available at: https://digital-strategy.ec.europa.eu/en/policies/e-commerce-directive (Visited on March 31, 2024)

[15] Network Enforcement Act, available at: https://en.wikipedia.org/wiki/Network_Enforcement_Act#:~:text=The%20Act%20obliges%20social%20media,fine%20of%2050%20million%20Euros. (Visited on March 31, 2024)

[16] Hate speech regulation on social media: An intractable contemporary challenge, available at: https://researchoutreach.org/articles/hate-speech-regulation-social-media-intractable-contemporary-challenge/ (Visited on March 31, 2024)

[17] Section 153A: its use and misuse, available at: https://www.nextias.com/ca/current-affairs/25-02-2023/section-153a-its-use-and-misuse (Visited on March 31, 2024)

[18] Kedar Nath Singh v State of Bihar, available at: https://globalfreedomofexpression.columbia.edu/cases/nath-singh-v-bihar/#:~:text=The%20Supreme%20Court%20of%20India,for%20the%20Forward%20Communist%20Party. (Visited on March 31, 2024)

[19] The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, available at: https://prsindia.org/billtrack/the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021 (Visited on March 31, 2024)

[20] Regulating Hate Speech On Social Media Platforms: Challenges Faced By Indian Courts, available at: https://aklegal.in/regulating-hate-speech-on-social-media-platforms-challenges-faced-by-indian-courts/ (Visited on March 31, 2024)

[21] Information Technology Rules 2021 – Provisions, Penalties, Guidelines and More!, available at: https://testbook.com/ias-preparation/information-technology-rules-2021#:~:text=It%20gives%20the%20government%20broad,intermediaries%2C%20regardless%20of%20their%20size. (Last modified Nov 24, 2023)

[22] The Taliban embrace social media: ‘We too want to change perceptions’, available at: https://www.bbc.com/news/world-asia-58466939 (Visited on March 31, 2024)

[23] A decade after Iran’s Green Movement, some lessons, available at: https://www.atlanticcouncil.org/blogs/iransource/a-decade-after-iran-s-green-movement-some-lessons/ (Visited on March 31, 2024)

[24] Regulating Hate Speech On Social Media Platforms: Challenges Faced By Indian Courts, available at: https://aklegal.in/regulating-hate-speech-on-social-media-platforms-challenges-faced-by-indian-courts/ (Visited on March 31, 2024)