The Mental Health Impacts of AI-Driven Content Moderation

October 10, 2024

The Rise of AI in Content Moderation 

As AI and automation revolutionize content moderation, social media companies are at the forefront of these changes. Platforms like Meta rely heavily on AI to filter vast amounts of content, making rapid decisions on what should be flagged, removed, or sent for human review. While AI promises efficiency, it also raises serious mental health concerns for both users and Content Moderators. 

This blog will explore how AI-driven content moderation impacts mental health, particularly in the social media sector, with insights drawn from the Oversight Board’s latest findings and Zevo Health’s expertise in supporting Content Moderators’ psychological wellbeing.  

The Psychological Toll of Automated Content Decisions 

The Oversight Board’s report highlights the growing reliance on automated systems to make moderation decisions, often without grasping the nuances of context. This has led to two recurring problems: over-enforcement, where AI removes benign content, and under-enforcement, where harmful content slips through. This inconsistency leaves users feeling alienated and powerless, especially when their content is wrongly flagged.
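
To make this trade-off concrete, here is a minimal, hypothetical sketch of a threshold-based removal decision. The posts, scores, and threshold are invented for illustration and are not how Meta’s systems actually work; the point is only that a single automated cut-off inevitably shifts errors between over- and under-enforcement.

```python
# Hypothetical illustration of the over-/under-enforcement trade-off.
# Scores and threshold are invented; real moderation models are far
# more complex and (ideally) context-aware.

posts = [
    {"id": 1, "harm_score": 0.92, "actually_harmful": True},   # correctly removed
    {"id": 2, "harm_score": 0.81, "actually_harmful": False},  # over-enforcement
    {"id": 3, "harm_score": 0.35, "actually_harmful": True},   # under-enforcement
    {"id": 4, "harm_score": 0.10, "actually_harmful": False},  # correctly kept
]

REMOVE_THRESHOLD = 0.75  # lowering this catches more harm but removes more benign posts

for post in posts:
    removed = post["harm_score"] >= REMOVE_THRESHOLD
    if removed and not post["actually_harmful"]:
        outcome = "over-enforcement (benign content removed)"
    elif not removed and post["actually_harmful"]:
        outcome = "under-enforcement (harmful content missed)"
    else:
        outcome = "correct decision"
    print(f"post {post['id']}: removed={removed} -> {outcome}")
```

However the threshold is tuned, some content is misclassified in both directions, which is why nuanced cases such as medical or educational imagery remain vulnerable to wrongful removal.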

From a psychological perspective, this over-enforcement creates frustration and a sense of injustice. Users, especially content creators, may feel disillusioned or unjustly punished, which can trigger anger. Zevo Health has seen similar impacts in industries where poorly implemented automated processes erode trust and contribute to workplace stress. In the context of social media, these effects are amplified by the global reach of content and the personal stakes involved in online engagement.

Example: Meta’s AI removed a breast cancer awareness post because it featured images of uncovered female nipples, despite the clear medical and educational context. This highlights how over-reliance on AI moderation can lead to harmful errors, causing distress to users whose well-meaning content is unfairly penalized. 

Rise of Deepfakes and AI-Generated Harm 

Generative AI, while a powerful tool for creativity, is also a source of harmful content, such as deepfake sexual imagery. This new wave of AI-manipulated media disproportionately affects women and can lead to severe mental health consequences, such as trauma, anxiety, and depression. The Oversight Board report emphasizes how non-consensual AI-generated content has devastating psychological effects on individuals, particularly young women targeted by deepfake harassment.

Research on identity theft and deepfakes shows that individuals targeted by these AI manipulations can experience a psychological phenomenon referred to as “doppelgänger-phobia,” where they feel threatened by seeing AI-generated versions of themselves. This fear can lead to profound emotional distress, including feelings of powerlessness, loss of control, and paranoia, as individuals struggle with the idea of their image being used without consent. 

Content Moderators tasked with reviewing such harmful media are also at risk of mental health issues. Regular exposure to violent or abusive deepfake content can result in conditions like PTSD and can skew moderators’ worldviews towards the negative, reinforcing the unconscious negativity bias already hard-wired into our brains. Moderators may then become more susceptible to confirmation bias, unconsciously seeking out, attending to, and favoring information that confirms their existing beliefs. Over time, these cognitive distortions can lead to enduring mental health difficulties such as Generalized Anxiety Disorder, Major Depressive Disorder, and further traumatization.

Example: In 2024, South Korean actress Shin Se-kyung became the victim of deepfake pornography that was widely circulated on social media platforms. The explicit videos were created without her consent and quickly spread across multiple networks, amplifying the emotional trauma she experienced. Shin described feelings of violation and helplessness, and the case sparked public outrage in South Korea, leading to calls for stricter laws. This incident led the South Korean government to introduce harsher penalties for the creation and distribution of AI-generated pornographic content.

The Mental Health Burden on Content Moderators 

AI-driven content moderation has transformed how platforms manage the overwhelming volume of harmful material online, but the human cost of this shift has become increasingly evident. Content Moderators, tasked with reviewing the most egregious content flagged by AI, are bearing the brunt of this technology’s limitations. While AI efficiently handles repetitive tasks at scale, it often struggles with context, pushing the most disturbing and complex cases to human reviewers. 

In AI-driven environments, Content Moderators are frequently exposed to harmful material, including violent, explicit, and abusive content. This is where the limitations of AI become particularly problematic: while machines can quickly filter vast amounts of data, they often lack the nuance to judge what should be flagged, so a disproportionate amount of distressing “grey area” content is passed on to human moderators.

AI-driven moderation systems exacerbate this issue by increasing the pace and volume of flagged content that needs human review. Moderators are often left with little time to emotionally process the disturbing content, leading to long-term psychological distress. For example, a moderator working for a major social media platform reported handling hundreds of violent videos daily, often feeling overwhelmed by the sheer volume of disturbing content flagged by AI. 
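
This routing dynamic can be illustrated with a short, hypothetical sketch: the model auto-actions only the cases it is confident about and queues everything ambiguous for a person, so human reviewers receive a concentrated stream of the hardest material at machine pace. The thresholds, names, and queue below are assumptions for illustration, not any platform’s actual pipeline.

```python
from collections import deque

# Illustrative thresholds; real systems tune these per policy and market.
AUTO_REMOVE = 0.95   # model is confident the content violates policy
AUTO_ALLOW = 0.05    # model is confident the content is benign

human_review_queue: deque = deque()

def triage(post_id: str, harm_score: float) -> str:
    """Route a post based on the model's harm score (hypothetical logic)."""
    if harm_score >= AUTO_REMOVE:
        return "auto-removed"
    if harm_score <= AUTO_ALLOW:
        return "auto-allowed"
    # Everything ambiguous -- the distressing "grey area" -- goes to a person.
    human_review_queue.append(post_id)
    return "sent to human review"

for pid, score in [("a", 0.99), ("b", 0.50), ("c", 0.62), ("d", 0.02)]:
    print(pid, triage(pid, score))

print(f"{len(human_review_queue)} of 4 posts landed on a human moderator's queue")
```

Because the clear-cut cases are handled automatically, the human queue is, by design, filled with the content the machine could not resolve, which is often the most ambiguous and distressing material.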

Inequities in AI Moderation: Global and Cultural Impacts 

One of the key concerns raised in the Oversight Board’s report is the uneven application of AI moderation across different languages and regions. AI systems are often trained predominantly on English-language content, leading to inconsistent and error-prone enforcement in non-Western regions and under-resourced languages. This inequity can have mental health implications for users, particularly in conservative societies where AI-generated content, or content wrongly flagged by AI, can result in social stigma, emotional distress, or even physical harm.

Cultural and linguistic biases in AI moderation exacerbate these challenges. In regions where language nuance is not well understood by AI, under-enforcement of harmful content can foster unsafe environments for vulnerable groups, while over-enforcement of benign content can lead to social or emotional fallout. 

Example: The Oversight Board noted how a deepfake image of a public figure in India was not treated with the same urgency as a similar image in the U.S. This discrepancy in AI enforcement highlights the risks posed to users in regions with less media coverage and fewer resources for moderation.

Best Practices for Managing AI-Driven Mental Health Risks 

While AI will continue to play a critical role in content moderation, there are steps companies can take to mitigate the mental health risks associated with these systems: 

  • Transparency and Empowerment: Companies must provide users with clear explanations when their content is flagged and offer pathways for appeal. This helps alleviate the frustration and helplessness that often accompany automated enforcement errors. 
  • Regular Audits and Bias Reduction: AI systems should be regularly audited to identify and reduce biases, ensuring fair and accurate enforcement across all regions and user demographics (a simplified sketch of what such an audit might look like follows this list). 
  • Mental Health Support for Content Moderators: Companies should implement mental health support tailored to the needs of Content Moderators, including resilience training, psychological assessments, and access to therapy. 
  • Human Oversight: While AI can manage high volumes of content, human oversight is critical in preventing over- and under-enforcement that can harm users. Social media companies, in particular, must ensure that their systems are balanced with sufficient human involvement to protect mental wellbeing. 
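
As a simplified, hypothetical illustration of the audit recommendation above, the sketch below compares how often removals are overturned on appeal across languages and flags large disparities relative to an English baseline. The data, field names, and disparity threshold are invented and do not reflect any platform’s real audit methodology.

```python
from collections import defaultdict

# Hypothetical appeal outcomes: (language, removal_overturned_on_appeal)
appeals = [
    ("en", False), ("en", True), ("en", False), ("en", False),
    ("hi", True),  ("hi", True), ("hi", False),
    ("ar", True),  ("ar", True), ("ar", True), ("ar", False),
]

totals = defaultdict(int)
overturned = defaultdict(int)
for language, was_overturned in appeals:
    totals[language] += 1
    overturned[language] += was_overturned

rates = {lang: overturned[lang] / totals[lang] for lang in totals}
baseline = rates["en"]  # assumes English is the best-supported language

DISPARITY_THRESHOLD = 1.5  # flag languages with >1.5x the baseline error rate
for lang, rate in sorted(rates.items()):
    flag = " <- review model/coverage" if rate > baseline * DISPARITY_THRESHOLD else ""
    print(f"{lang}: {rate:.0%} of removals overturned on appeal{flag}")
```

An audit along these lines would only surface disparities; acting on them still requires investment in local-language training data, policy expertise, and human review capacity.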

Conclusion 

As AI-driven content moderation becomes more prevalent, it is essential to recognize the mental health risks it poses to both users and employees, particularly Content Moderators. While AI offers efficiency and scalability, it lacks the ability to fully understand context and nuance, especially for languages and regions in the global majority world, leaving the most disturbing and ambiguous content to human reviewers and placing significant psychological strain on those tasked with handling it.

By implementing best practices—such as transparent moderation processes, regular audits to reduce bias, and robust mental health support for Content Moderators—companies can mitigate the psychological toll of AI-driven moderation. It is also vital to ensure that human oversight complements automated systems, providing balance and safeguarding the mental wellbeing of those most affected. 

Ultimately, as technology evolves, so too must our strategies for protecting the people behind the screens. The future of content moderation relies on a balanced approach that values mental health as much as operational efficiency. 

 
