
Content Moderation – The Unseen Frontline During Conflicts

November 12, 2024 (updated November 14, 2024)

Amid the recent Israel-Hamas conflict, a parallel conflict unfolded on the digital battleground of social media platforms, placing immense stress on content moderators and users alike. The surge in content related to the conflict across various social media platforms has posed unprecedented challenges for content moderators, whose responsibility it is to sift through the deluge of graphic images, hate speech, and disinformation. This has inevitably raised questions about the role of market-leading social media companies and the pressures these moderators face, along with the implications for end users.

The Unseen Frontline: Content Moderators 

Content moderators are constantly exposed to distressing content, from violent imagery to divisive opinions. Their task is not merely technical; it’s profoundly psychological. 

Mental Health Challenges for Content Moderators

This continuous barrage can lead to severe mental health challenges, including PTSD, anxiety, and depression. While their role is to protect end users, moderators are frequently met with the dilemma of upholding free speech while preventing the spread of harmful content.

Disinformation and Its Impact on Social Media

The challenge is amplified by a surge in disinformation – unlike misinformation, this involves the deliberate creation and sharing of false or manipulated information with the intention to deceive or mislead. 

How Social Media Content Moderators Debunk False Information

For instance, The New York Times recently reported on a video purporting to show Israeli children held as Hamas hostages that was later debunked, having previously circulated in other contexts related to Afghanistan, Syria, and Yemen.

This instance is just a glimpse into the extensive information warfare campaign where graphic content is strategically used to incite fear, influence views, and engage in psychological manipulation.

Social Media’s Controversial Response 

Several social media companies have faced criticism for their inconsistent and sometimes opaque content moderation policies. In the context of the Israel-Hamas conflict, these platforms have been accused of bias, either through over-zealous removal of content or by allowing disinformation to spread.

Flood of Violent Content on Social Media

Since the onset of the conflict, platforms have been flooded with violent videos and graphic images. Images of dead Israeli civilians near Gaza and distressing audio recordings of Israeli kidnapping victims have all made their way onto these platforms, racking up countless views and shares. Disturbingly, much of this content has reportedly been systematically seeded by Hamas with the intent to terrorize civilians, capitalizing on inadequate content moderation on certain platforms.

Despite claims by one major platform about its special operations centre staffed with experts monitoring content, there’s a growing call for transparency in content moderation practices.

Shadow Banning and Allegations of Bias

Platforms are criticized not only for their lack of accurate monitoring but also for algorithmically curtailing the reach of certain posts, a practice known as shadow banning. According to Vox, shadow banning is an often ‘covert form of platform moderation that limits who sees a piece of content, rather than banning it altogether’.

User Allegations and Platform Responses

Numerous users on Instagram, Facebook, and TikTok allege that these platforms restrict the visibility of their content. However, these tech giants attribute such occurrences to technical glitches, denying any form of bias.  

According to Time Magazine, several factors make it difficult to ensure content is accurate. For starters, it is hard to disprove a general negative, and most platforms want to make sure important content is not censored. Users are motivated to share content that shows their side is in the ‘right’, and people want access to the latest information. Together, these elements mean content moderation policies can fail at a time of major conflict.

Such issues not only undermine the public’s trust in these platforms but also place undue pressure on content moderators who must navigate these murky waters. 

Implications for Social Media Users – Exposure and Information Deprivation

For end users, the implications are two-fold. On one hand, they might be exposed to distressing and traumatic content that is not picked up during the moderation process. On the other hand, they may also be deprived of critical information if it is incorrectly flagged and removed. 

  • For the average internet user, knowing what information to trust online has never been more challenging or more critical. 
  • The challenge is amplified when unverified news spreads faster than it can be checked, ending up in mainstream reporting and even statements from leaders.
  • This complexity was evident when, according to Vox, US President Joe Biden remarked on unverified claims about Hamas militants’ actions against children during the initial attack.

The Way Forward: Robust Moderation and Psychological Support 

The intensity of the recent conflict underscores the need for stronger content moderation on social media, delivered by teams that are well-equipped to handle such crises.

This means not only employing more moderators but also providing them with the necessary tools and training to apply platform policy, to work alongside artificial intelligence, and to distinguish genuine news from disinformation.

Impact of Trust and Safety Team Layoffs

In an attempt to balance community standards with the principles of free speech, many social media platforms have let go of many of their Trust and Safety team members, including content moderators.

In May 2023, CNBC reported that Meta, Amazon, and Twitter had all laid off members of their Trust and Safety teams. This means fewer people must do more work to ensure in-platform content is accurate.

Balancing Ethics and Platform Policies for Content Moderators 

For many moderators, it can be personal too. Moderators must balance their personal ethics with platform policy enforcement, which requires self-awareness and psychological distancing to manage the ethical misalignments or dilemmas they may face.

Dr. Michelle Teo, Health and Wellbeing Director at Zevo Health, shares her insights from working directly with moderators, stating that “one of the most unique issues facing moderators is how they process the emotional impact of actioning a ticket when they feel the action doesn’t align with their own values – but it follows platform policy regulations.

The Psychological Impact of Policy Enforcement

This can be as simple as not being able to revoke the account of a platform user who has been reported for scamming other users. 

The moderator may recognize the scam within the materials they are viewing but, without the evidence as per the policy, they have no option but to allow the user to remain on the platform. 

When a moderator is told that their role is to safeguard users and they feel they cannot do this, therein lies the dilemma. This can bring about feelings of guilt, shame, and anger – all emotions that can have a deep impact on someone’s sense of worth and their sense of meaning and purpose in their work. 

This impact becomes more profound when we’re talking about content like child and animal abuse, graphic violence, or revenge porn.”

Equipping the Moderation Ecosystem for Ethical Challenges

The entire moderation ecosystem, including policy developers, AI systems, people managers, and workforce management teams, needs to be equipped to respond to these crises.

For example, with increasing disinformation campaigns, companies may question whether their current AI models can accurately determine whether a video was taken years ago or in the moment.
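One common building block for this kind of check is perceptual hashing, where frames from an incoming video are fingerprinted and compared against an archive of footage that has already circulated. The sketch below is a minimal illustration of that idea only, not any platform’s actual pipeline; it assumes the third-party Pillow and imagehash Python libraries, and the archive entries, hash values, and file name are hypothetical placeholders.

```python
# Minimal sketch: flag frames that closely resemble previously archived footage.
# Assumes the third-party Pillow and imagehash packages; "known_archive" is a
# hypothetical store of perceptual hashes of older, already-verified clips.

from PIL import Image
import imagehash

# Hypothetical archive: perceptual hashes of frames from footage seen in earlier contexts.
known_archive = {
    "older_conflict_clip_a": imagehash.hex_to_hash("d1c4b2a09f8e7d6c"),
    "older_conflict_clip_b": imagehash.hex_to_hash("a3f0e1d2c3b4a596"),
}

def looks_recycled(frame_path: str, max_distance: int = 8) -> list[str]:
    """Return archive entries whose perceptual hash is within max_distance
    (Hamming distance) of the given frame - a hint of reuse, not proof."""
    frame_hash = imagehash.phash(Image.open(frame_path))
    return [
        label
        for label, archived_hash in known_archive.items()
        if frame_hash - archived_hash <= max_distance
    ]

# Example usage with a placeholder path to a frame extracted from a new upload.
matches = looks_recycled("suspect_frame.jpg")
if matches:
    print("Possible recycled footage; route to a human moderator:", matches)
else:
    print("No archive match; other checks (metadata, reverse search) still apply.")
```

Even under these assumptions, a near match is only a signal for human review, since legitimate reposts and news coverage can also reuse older footage.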

From the operational lens, workforce management teams need to forecast overtime or increased headcount for the moderation teams responsible for hate speech and graphic violence workflows as the influx of content rises exponentially.

Importance of Psychological Support for Content Moderators

Equally important is the psychological support that these moderators and their support teams require. Companies must recognize the mental toll that content moderation can take, not only on moderators but also on the people around them.

Particularly during ongoing crises such as the Hamas-Israel conflict, it is imperative that long-term impacts are considered and addressed. This may necessitate support avenues for moderation teams lasting weeks, months, or even years.

Investing in Mental Health Support Systems

In times of crisis, it is insufficient to simply rely on day-to-day support. Holistic approaches must consider all stakeholders in the ecosystem, the potential risks inherent to each role and how roles interact with one another, and timely and effective risk mitigation measures. 

Companies must invest in improving mental health support systems that ensure the psychological safety of their people, especially content moderators and their Trust and Safety teams.

Free Webinar | Tailoring Psychological Support to Different Roles in Trust and Safety
