Navigating the Storm: Social Media Platforms, Elections, and the Psychological Toll on Content Moderators

November 14, 2024

2024 is an unprecedented year for elections, with more than half of the world’s population living in countries going to the polls. It is one of the largest election years in history, with votes in India, Indonesia, Taiwan, the United Kingdom, France, and the United States.

These elections pose significant challenges for platforms in managing disinformation and misinformation – especially as many of them have reduced their Trust and Safety and content moderation teams. 

In this article, we’ll examine the psychological impact on Content Moderators tasked with safeguarding users from misleading content. We’ll also cover the challenges ahead, the role of social media platforms, and strategies to support Content Moderators dealing with disturbing content and desensitization.

The Impact of Social Media on Election Information

Recent reports highlight the significant impact of social media on the dissemination of election-related information and misinformation. 

According to a 2023 Ipsos survey, 68% of internet users believe that disinformation is most widespread on social media platforms, surpassing other sources such as online messaging apps and media websites. 

This prevalence of misinformation has raised concerns about its potential impact on elections, with 87% of respondents expressing worry about the influence of disinformation on upcoming elections in their countries. 

Prevalence of Misinformation on Social Media

Social media platforms have implemented various measures to combat misinformation. One widely adopted strategy is the use of misinformation warning labels. 

A 2023 review found that these labels are generally effective in reducing the spread of false information, although they can also lead to an “implied truth effect,” where users may assume that unlabeled content is accurate. 

This phenomenon underscores the complexity of content moderation and the challenges platforms face in ensuring the accuracy of information. 
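To make that gap concrete, here is a minimal sketch, in Python, of how a labeling pipeline might work; all names and states are hypothetical, not any platform’s real schema. Only content that fact-checkers have reviewed and rated false gets a warning, so everything still in the review queue circulates unlabeled, which is exactly the opening for the implied truth effect.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical review states a fact-checking pipeline might track.
UNREVIEWED = "unreviewed"
RATED_FALSE = "rated_false"
RATED_ACCURATE = "rated_accurate"

@dataclass
class Post:
    post_id: str
    text: str
    review_state: str = UNREVIEWED
    warning_label: Optional[str] = None

def apply_warning_labels(posts: list[Post]) -> None:
    """Attach a warning label only to content that fact-checkers rated false."""
    for post in posts:
        if post.review_state == RATED_FALSE:
            post.warning_label = "Independent fact-checkers rated this claim false."
        # The gap: UNREVIEWED posts carry no label, so readers may assume
        # they are accurate (the "implied truth effect").

posts = [
    Post("1", "Polls close at 5pm today, not 8pm!", review_state=RATED_FALSE),
    Post("2", "A brand-new claim no one has reviewed yet."),  # stays unlabeled
]
apply_warning_labels(posts)
for p in posts:
    print(p.post_id, p.warning_label)
```

Because review capacity is always smaller than the volume of new content, the unlabeled pool dominates, which is why labels alone cannot carry a platform’s election-integrity strategy.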

Strategies to Combat Misinformation

Additionally, the concept of “information gerrymandering,” as discussed by David Rand and other researchers, highlights how social media algorithms can create echo chambers that reinforce users’ existing beliefs and biases. This can significantly influence voting behavior and election outcomes. 

The European Union’s Digital Services Act (DSA) has introduced new obligations requiring social media companies to implement robust content moderation systems and work with fact-checkers to address these issues.

These measures aim to enhance transparency and accountability, ensuring that platforms take proactive steps to mitigate the spread of misinformation during election periods.

What Are the Primary Types of Disinformation and Misinformation? 

There are three primary forms of disinformation and misinformation:

  • Manipulative actors – Those with a clear intent to disrupt democratic processes and the information environment around an election.
  • Deceptive behaviors – The tactics and techniques manipulative actors use.
  • Harmful content – Material used to undermine and hurt individuals, organizations, and processes, and to influence public debate.

Identifying Manipulative Actors, Deceptive Behaviors, and Harmful Content

But looking at the forms of disinformation is just the start. Trust and Safety teams need to understand who is behind it, and why they are acting in this way.  

There are generally four types of actor, as the sketch after this list illustrates:

  • Foreign influence – Usually carried out through professional, coordinated, well-prepared campaigns that aim to polarize voters on sensitive issues.
  • Political disinformation – Fake identities, fake websites, amplification of divisive elements of the debate, and manipulated content are just some of the tactics used.
  • Issue-based disinformation – This happens when actors mobilize around a specific issue and use deceptive behaviors to target online groups.
  • Lucrative disinformation – This is about profiting from disinformation, often using clickbait to drive people to a specific website where the actors earn revenue from online ads.
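For Trust and Safety teams that track these categories in their tooling, a minimal data-model sketch is below (Python; every name is hypothetical, not a real platform schema). It tags a reviewed item along the three dimensions above: who the actor is, which deceptive behaviors they used, and what harmful content resulted.

```python
from dataclasses import dataclass, field
from enum import Enum

class ActorType(Enum):
    """Who is behind the operation (the manipulative actor)."""
    FOREIGN_INFLUENCE = "foreign_influence"
    POLITICAL = "political"
    ISSUE_BASED = "issue_based"
    LUCRATIVE = "lucrative"

class DeceptiveBehavior(Enum):
    """Tactics and techniques the actor employs."""
    FAKE_IDENTITIES = "fake_identities"
    FAKE_WEBSITES = "fake_websites"
    COORDINATED_AMPLIFICATION = "coordinated_amplification"
    MANIPULATED_CONTENT = "manipulated_content"
    CLICKBAIT = "clickbait"

@dataclass
class DisinfoReport:
    """A reviewed item, tagged along the actor / behavior / content dimensions."""
    content_url: str
    actor: ActorType
    behaviors: list[DeceptiveBehavior] = field(default_factory=list)
    harmful_content_notes: str = ""  # how the content harms people or processes

# Example: a clickbait operation profiting from election falsehoods.
report = DisinfoReport(
    content_url="https://example.com/post/123",
    actor=ActorType.LUCRATIVE,
    behaviors=[DeceptiveBehavior.CLICKBAIT, DeceptiveBehavior.FAKE_WEBSITES],
    harmful_content_notes="False claim about polling hours, monetized via ads.",
)
print(report.actor.value, [b.value for b in report.behaviors])
```

Separating actor, behavior, and content keeps moderation decisions auditable: the same harmful post can be handled differently depending on whether it comes from a coordinated foreign campaign or a lone clickbait site.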

To counter these tactics, comprehensive measures must be in place, including protecting content moderators through policy and well-being strategies.

Real-World Examples of Disinformation and Misinformation Impacting Elections 

Social media has had a profound impact on elections around the world, shaping political discourse, influencing voter behavior, and providing a platform for the rapid dissemination of information.  

Here are some examples that highlight the multifaceted impact of social media on electoral processes: 

1. Deepfakes and Misleading Content 

In the 2020 Taiwanese presidential election, deepfake technology was used to create misleading videos purporting to show President Tsai Ing-wen making false statements. The intent was to damage her credibility and influence public opinion. 

2. False Claims of Rigged Elections

Following the 2020 U.S. presidential election, false claims of widespread voter fraud and a rigged election spread rapidly on social media platforms, particularly on Twitter and Facebook. This misinformation contributed to the storming of the U.S. Capitol on January 6, 2021, by a mob of supporters of Donald Trump. 

3. WhatsApp Misinformation in India

Misinformation circulated on WhatsApp contributed to a series of lynchings in India in 2018. False rumors about child abduction and organ trafficking spread rapidly on the messaging platform, leading to mob violence and several deaths. 

The incident prompted concerns about the role of private messaging apps in disseminating unchecked information. 

4. Russian Disinformation Campaigns 

Russian influence campaigns, such as those during the 2016 U.S. presidential election, have involved the creation and dissemination of false information on social media platforms. 

Fake accounts, manipulated images, and divisive, disturbing content were used to exploit existing social and political tensions, fostering a climate of mistrust. 

5. Brazilian WhatsApp Misinformation

Misinformation on WhatsApp played a significant role in the 2018 Brazilian presidential election. Falsehoods and misleading content were disseminated through private groups, making it challenging for fact-checkers and authorities to address the issue promptly. 

6. Facebook and Myanmar’s Rohingya Crisis 

Facebook has been criticized for its role in the spread of hate speech and misinformation contributing to the Rohingya crisis in Myanmar. False narratives and anti-Rohingya content circulated on the platform, exacerbating ethnic tensions and violence against the Rohingya minority. 

7. YouTube Algorithm Amplification

During the COVID-19 pandemic, YouTube faced criticism for its algorithm’s role in amplifying misinformation. 

False claims about the virus, its origins, and potential treatments gained traction through the platform’s recommendation algorithms, fueling public health concerns, eroding trust in the health system, and likely contributing to illness and deaths. 

Social media platforms offer unprecedented opportunities for political engagement and information sharing, but they also pose significant challenges in terms of the spread of misinformation, erosion of trust, and the potential manipulation of public opinion. 

As elections continue to evolve, understanding and addressing the impact of social media on democratic processes remain critical.  

Addressing these challenges requires a multi-faceted approach involving platform accountability, user education, and collaboration with fact-checkers and authorities. On the front line of this work is your Trust and Safety team, including your Content Moderators.  

Supporting Content Moderators: 9 Strategies for Mental Health and Wellbeing

Content Moderators serve as the frontline defense against the proliferation of disinformation and misinformation during elections. 

These individuals face an immense psychological burden, sifting through a relentless stream of disturbing or graphic content, while making real-time decisions about its credibility. 

The pressure to maintain a safe online environment puts them in a unique position, one that demands a nuanced understanding of the human psyche. 

How can you help? Below are 9 of the most effective strategies:

  • Addressing Cognitive Fatigue and Decision-Making

Content Moderators must make swift and accurate decisions, distinguishing between genuine political discourse and deceptive content. The cognitive load associated with this task can lead to decision fatigue, compromising the ability to consistently apply moderation policies. 

Leadership needs to implement regular breaks, mindfulness practices, and stress-relief measures to mitigate the impact of prolonged cognitive exertion. 

  • Mitigating the Emotional Toll of Content Moderation

The nature of the Content Moderators’ work exposes them to distressing and emotionally charged material. Continuous exposure to harmful narratives, hate speech, and explicit content can lead to emotional exhaustion, compassion fatigue, and other psychological difficulties. 

Regular breaks, peer support sessions, and rotation through less egregious content workflows will help moderators maintain their wellbeing. 

  • Empowering Content Moderators with Training and Support

Recognizing the challenges ahead, platforms can implement proactive strategies to empower Content Moderators and enhance their overall wellbeing. 

We recommend strategic planning of digital wellbeing interventions, including mental resilience and mental health awareness training for Content Moderators, and Mental Health Champion training for team leads and management so they can appropriately support their moderation teams.  

  • Implementing Robust Training Programs

Platforms should invest in comprehensive training programs that equip moderators with the skills necessary to discern nuanced forms of misinformation. 

This includes understanding cognitive biases, recognizing psychological manipulation tactics, and staying updated on evolving political narratives. 

Providing ongoing education fosters a sense of competence and confidence among moderators and gives them a break from the content.

  • Maintaining Transparent Communication Channels

Maintaining open communication channels between policy teams and Content Moderators is crucial, with regular updates on policy changes, their interpretation, and emerging disinformation trends. This is especially important during times of potential political unrest, so Content Moderators can rely on their cross-functional teams for support. 

  • Remembering Why We Do the Work

A sense of purpose can contribute positively to moderators’ psychological wellbeing. However, we need to avoid placing the burden of defending a democratic election on their shoulders, as this only adds stress.  

  • Recognizing and Managing Personal Biases

Everyone needs to recognize that Content Moderators – like all other human beings – will have an opinion about the election, the candidates and the issues raised. 

We need to be aware of the impact of our own biases on our interpretation of policies, especially when we personally experience an emotional reaction to a post. 

Unconscious bias training can help prepare Content Moderators to identify their biases and mitigate their influence on decision-making.  

  • Establishing Peer Support Networks

Establishing peer support networks allows Content Moderators to share experiences and coping mechanisms and to support one another’s emotional wellbeing. Peer-led initiatives, moderated discussion forums, and mentorship programs create a sense of community and resilience, helping moderators navigate the challenges they face collectively.  

  • Focusing on Comprehensive Wellbeing Programs

A personalized, role-specific wellbeing program that proactively focuses on Content Moderators’ psychological health and safety can go a long way toward protecting them. And remember, a proactive approach to wellbeing is an investment in the long-term performance and effectiveness of the moderation team. 

For more insights on supporting moderators, see our article featuring content moderator counselling support.

Prioritizing the Mental Health of Content Moderators

Social media platforms must take proactive measures to support Content Moderator wellness.

By recognizing the unique challenges posed by the influx of disinformation and misinformation, implementing robust training programs, fostering transparent communication, and prioritizing mental health support, platforms can empower people to navigate this critical period successfully. 

From the perspective of an expert psychologist and Trust and Safety specialist, advocating for the holistic wellbeing of Content Moderators is not just a strategic imperative but a moral responsibility in the age of digital information.  

Talk With Us

Get in touch with us today to learn more about how we can help your content moderation teams, leadership, and entire organization stay focused on their work rather than on declining mental health. 

Free Webinar | Tailoring Psychological Support to Different Roles in Trust and Safety

Register Now