
Social Media Legislation – A Global Overview

November 9, 2024 (updated November 15, 2024)

Most eyes are on the US Supreme Court as it hears arguments over the level of content moderation that should be required of platforms.

Looking at the overall legal landscape, we wanted to kick off a series of blog posts on the legislation governing content moderation and the reporting requirements it imposes.

Many of these laws are new, so 2024 will see them interpreted for the first time, which will be fascinating to watch.

Emerging Trends in Social Media Regulation

In recent years, the digital world has been reshaped by the growth of social media, with these platforms becoming integral to our daily lives. As their influence has grown, so too has concern for the psychological safety of their users.

Governments around the world are responding to these challenges by proposing and enacting legislation aimed at regulating social media to ensure a safer online environment. 

This article provides an in-depth review of social media usage, trends, and landmark legislation. We’ll explore the Digital Services Act in the EU, the Online Safety Bill in the UK, and relevant legislation in the US and Australia.

Digital Services Act in the EU 

The European Union’s Digital Services Act is aimed at creating a safer digital space for users within the EU. It introduces a dual-tier system, categorizing platforms as either very large online platforms (VLOPs) or online intermediaries. 

VLOPs are subject to more stringent obligations, including increased content moderation measures and transparency reporting. 

Non-compliance with the Digital Services Act can result in fines of up to 6% of a company’s annual global turnover. This highlights the EU’s commitment to ensuring robust enforcement of online safety regulations. 

Key Provisions of the Digital Services Act

  • Dual-Tier System

Platforms are categorized based on size, with very large online platforms (those with at least 45 million monthly active users in the EU) facing more stringent obligations. This includes measures to combat the spread of illegal content and ensure user safety.

  • Content Moderation Standards

The act outlines standards for content moderation, requiring platforms to employ measures to prevent the dissemination of specific types of content, including terrorist content, child sexual abuse material, and hate speech. 

  • Transparency and Accountability

Platforms are mandated to provide transparency reports detailing their content moderation actions. Authorities are empowered to assess the effectiveness of content moderation measures.  

  • Based on EU and National Laws

What counts as illegal content is determined by the national laws of each member state together with overarching EU law. For example, platforms must remove images of Nazi symbols in Germany, but not in Denmark.

What is the Impact of the Digital Services Act on Content Moderation?

  • Stricter Requirements for Larger Platforms

Very large platforms face more comprehensive obligations, necessitating advanced content moderation technologies to meet the prescribed standards. 

  • Collaboration with Authorities

Platforms are required to cooperate with national authorities to address illegal content. This collaboration enhances the effectiveness of content moderation efforts by leveraging governmental support. 

  • Financial Penalties

Fines of up to 6% of global turnover serve as a substantial deterrent for non-compliance. The financial impact underscores the EU’s commitment to robust enforcement. 

  • Operating Ban

In exceptional cases of serious harm, the legislation allows for a temporary ban on a platform’s operations in the EU, meaning a platform could effectively be shut down. The Commission will have enforcement powers similar to those it holds under antitrust proceedings.

  • Enhanced Cooperation Across the EU

An EU-wide cooperation mechanism will be established between national regulators and the Commission. Already, we have seen the EU open a formal probe of TikTok under the DSA, citing child safety, risk management, and other concerns.

Online Safety Bill in the UK 

The Online Safety Bill in the UK represents a comprehensive effort to tackle online harm and protect users. It requires social media companies to take responsibility for the content on their platforms, particularly content that may cause harm. 

The bill proposes the establishment of a regulatory framework with the power to impose hefty fines on platforms that fail to adhere to prescribed standards. 

Companies can face fines of up to 10% of their global turnover for non-compliance, providing a significant financial incentive for adherence to the regulations. 

Key Provisions of the Online Safety Bill

  • Duty of Care

The bill establishes a legal duty of care on social media companies to protect users from harmful content. Platforms are required to take proactive measures to ensure user safety. 

  • Regulatory Oversight

The legislation designates an independent regulator, Ofcom, with the authority to enforce compliance. Ofcom will set codes of practice outlining the expectations for online safety.

  • Harmful Content Definition

The bill defines harmful content broadly, encompassing not only illegal content but also content that may be legal but harmful, such as cyberbullying, hate speech, and misinformation. 

What is the Impact of the Online Safety Bill on Content Moderation?

  • Increased Accountability

Social media companies are held accountable for the content on their platforms. The duty of care places the onus on platforms to implement robust content moderation policies to detect and remove harmful content promptly. 

  • Transparency Reporting

Platforms are required to publish transparency reports outlining the actions taken to address harmful content. This fosters transparency and allows users to gauge the effectiveness of content moderation efforts. 

  • Fines for Non-Compliance 

Hefty fines, up to 10% of global turnover, act as a strong deterrent for non-compliance. This financial penalty incentivizes companies to invest in advanced content moderation technologies and practices. Senior management could also face jail time if the non-compliance is egregious enough.  

US Social Media Legislation 

In the United States, the regulatory landscape for social media is more nuanced, with various state-level initiatives and ongoing discussions at the federal level. 

The proposed SAFE TECH Act, for instance, seeks to reform Section 230 of the Communications Decency Act, making platforms more accountable for harmful content. 

Additionally, individual states such as California have introduced the Silenced No More Act, allowing users to sue platforms for content moderation decisions. While federal legislation is still pending, the trend towards increased scrutiny of social media platforms is evident. 

Key Provisions of US Social Media Legislation

  • SAFE TECH Act

This proposed legislation seeks to reform Section 230, making platforms more accountable for content moderation decisions. It removes immunity in cases involving illegal content and ensures platforms act in good faith when moderating content. 

  • Silenced No More Act (California)

Allows users to sue platforms for content moderation decisions, promoting transparency and accountability. 

  • Regional Nuances

Since 2021, 38 states have introduced over 250 bills to regulate content on digital platforms. Many of these bills are unconstitutional, conflict with federal law, or would place major barriers on platforms’ ability to restrict dangerous content.

  • New York

SB 9465 establishes a task force on social media and violent extremism, and AB 7865/SB 4511 requires social media networks to provide and maintain mechanisms for reporting hateful conduct on their platform(s). 

  • Minnesota

SF 3933/HF 3724 regulate algorithms that target user-generated content for users under the age of 18. 

  • Ohio

The purpose of HB441 is to prohibit social media platforms from censoring a user, their expression, or a user’s ability to receive the expression of another user.  

What is the Impact of US Social Media Legislation on Content Moderation?

  • Section 230 Reform

If enacted, the SAFE TECH Act would prompt platforms to reevaluate their content moderation practices to align with the proposed changes to Section 230, potentially leading to more cautious moderation policies.

  • Increased Accountability in California

The Silenced No More Act introduces a legal avenue for users to challenge content moderation decisions, fostering increased accountability and transparency. 

Australian Social Media Legislation 

Australia has taken steps to address online safety concerns through the Online Safety Act 2021. This legislation grants the eSafety Commissioner the authority to issue removal notices for harmful online content and imposes fines for non-compliance. 

Companies failing to remove specified content within the designated timeframe may face financial penalties. The legislation emphasizes the need for swift and effective content moderation to protect Australian users. 

Key Provisions of Australian Social Media Legislation

  • Online Safety Act 2021 

Empowers the eSafety Commissioner to issue removal notices for harmful content. Failure to comply with removal notices can result in fines. 

  • Definitions of Harmful Content 

The legislation specifies types of harmful content, including cyberbullying, image-based abuse, and other online harms. 

What is the Impact of Australian Social Media Legislation on Content Moderation?

  • Swift Removal Requirements

The legislation mandates the prompt removal of specified harmful content, necessitating robust and efficient content moderation processes. 

  • Financial Penalties

Fines of up to AUD 555,000 per day for individuals and AUD 2.77 million per day for corporations provide a significant financial incentive for platforms to prioritize effective content moderation.

The Future of Social Media Legislation

The global surge in legislation targeting social media platforms underscores the growing recognition of the impact these platforms have on society. The diverse approaches taken by authorities in the UK, the EU, the US, and Australia reflect the unique challenges each region faces.

The imposition of fines and prison time for non-compliance is a common thread, signaling a collective determination to ensure that social media companies prioritize the safety and wellbeing of their users. 

The Implications for Social Media Companies

As these legislative frameworks continue to evolve, the impact on companies – both in terms of financial penalties and potential legal consequences – serves as a powerful incentive for the industry to proactively address the challenges posed by harmful online content. 

These legislative acts collectively emphasize the importance of user safety and place substantial responsibility on social media platforms to implement effective content moderation measures. 

The combination of duty of care, regulatory oversight, transparency reporting, and financial penalties underscores a global commitment to creating a safer online environment.  

The Impact of Social Media Legislation on Content Moderator Wellbeing

With regulators focusing on the safety and wellbeing of users and setting distinct requirements around corporate social accountability and the legal liability of platforms, we believe they must also consider the impact these regulations will have on Trust & Safety teams and Content Moderator wellness.

As a next step after launching and enforcing new legislation, regulators will need to ensure platforms are adequately resourced to support their Trust & Safety teams in complying with regulatory requirements, while balancing the need to protect the psychological health and wellbeing of their employees.

Protecting Both Users and Moderators – A Balanced Approach

At Zevo Health, we are working with our customers and regulators to design and build the best clinical solution for Content Moderators and Trust & Safety teams, ensuring their protection and reducing risk for the organization.

If you are interested in supporting your Content Moderators’ wellbeing, get in touch with our experts today.
