By Michelle | Mental Health

Online Harassment – Mental Health Challenges for Content Moderators

January 28, 2025 (updated February 4, 2025)

What is Online Harassment?

The Digital Trust and Safety Partnership (2023) defines online harassment as “unsolicited repeat behavior against another person, with the intent to intimidate or cause emotional distress.” 

They highlight that it may occur over any medium, including social media, email, and online services, with the potential to result in real-world abuse or vice versa. 

Perpetrators of online harassment may target one specific individual or a group of individuals, and perpetrators themselves may act alone or in coordinated groups.

Prevalence of Online Harassment 

In 2024, the Anti-Defamation League (ADL) reported a rise in online harassment, with 22% of Americans experiencing severe harassment on social media, up from 18% in 2023. Physical threats also increased from 7% to 10%.

Individuals with disabilities faced higher rates of harassment, with 45% reporting any form of harassment, compared to 36% of non-disabled individuals. The rate of severe harassment for disabled people rose from 19% to 31%, and harassment based on disability spiked from 4% to 12%.

The report also highlights how LGBTQ+ individuals saw a significant rise in harassment, with physical threats increasing from 6% to 14%. Jewish adults also faced higher rates, with 34% reporting religious harassment and 41% altering their online behavior to avoid being identified.

Types of Online Harassment

Online harassment can take many forms and affects people differently based on their demographic characteristics. The prevalence of these behaviors indicates that platforms must make a concerted effort to reduce potential harm to users.

According to the Canadian Women’s Foundation (2024), there are 10 types of online hate, harassment, and abuse. These include the following:

  • Harassment – Hateful, threatening, or harmful messages often repeated or coordinated to target individuals.
  • Cyberstalking – Persistent, deliberate online harassment designed to intimidate or disrupt lives.
  • Image-Based Sexual Abuse – Non-consensual sharing of intimate images, including creepshots, sextortion, or deepfakes.
  • Digital Dating Violence – Use of technology to bully, stalk, or intimidate partners, often amplifying harm due to online anonymity and permanence.
  • Hacking – Unauthorized access to systems for theft, defamation, or control over victims’ devices.
  • Impersonation – Falsely portraying someone online to harm reputations, including spoofing messages.
  • Doxing – Publishing private or identifying information about someone without consent.
  • Flaming – Posting insults or personal attacks online to provoke or harm.
  • Gendered and Sexualized Disinformation – Spreading false narratives to target and silence women in public spaces.
  • Online Sexual Offenses Against Children – Luring minors or distributing, possessing, and accessing child pornography.

What are the Psychological Impacts of Online Harassment?

Extensive research has explored the psychological impacts of online harassment perpetrated against children and youth (primarily cyberbullying and child sexual abuse). Additionally, research involving adults has largely examined gender-based violence online and its facilitation of offline or “real-world” harm. 

For example, this publication by UN Women (2024) discusses a process to identify a set of research priority recommendations for addressing the global problem of Technology-Facilitated Gender-Based Violence (TFGBV).

However, comprehensive research investigating the impacts of multiple forms of online harassment across the lifespan is lacking. This gap makes it challenging to assess the impact of online harassment on the psychological health of human moderators.

Similarities Between Content Moderation and Mental Health Professions

Reviewing online harassment, which involves exposure to toxic content, content leakage, disturbing material, and information overload, can lead to mental health symptoms comparable to those experienced by mental health and allied health professionals.

This observation aligns with findings from our literature review of factors contributing to vicarious trauma.

Burnout and Fatigue in Mental Health Professions

Mental health professionals regularly hear accounts of traumatic experiences from their clients during counseling sessions, and the literature suggests that the amount of time spent counseling trauma victims is the best predictor of trauma scores.

Additionally, ethnicity and race were contributing factors in the development of compassion fatigue and burnout. Specifically, African American and Asian professionals were significantly more likely to report burnout than white professionals, and Hispanic professionals were significantly more likely to experience compassion fatigue.

To counteract these effects, applying principles from positive psychology can be an effective tool for managing the emotional wellbeing of moderators and promoting healthier coping mechanisms.

Parallels Between Counseling and Content Moderation

In considering the parallel to moderation work, we can take these learnings and make an educated assumption that regular secondary exposure to trauma can significantly affect those providing support. 

Similarly, Content Moderators regularly exposed to online harassment may develop symptoms of compassion fatigue, burnout, and other symptoms of vicarious trauma.

Common symptoms include:

  • Lingering feelings of anger or sadness about users’ victimization.
  • Bystander guilt and shame.
  • Hopelessness and pessimism, resulting in a negative worldview.
  • Preoccupation with users’ experiences beyond work hours.

Considerations of Collective Trauma 

The experience of collective trauma may be a potential impact of regular exposure to online harassment perpetrated against online platform users. 

According to the American Psychological Association (APA), collective trauma refers to “an event or series of events that impact not only one person but also a group of identified or targeted people.” 

It highlights how “healing the wounds of collective trauma is a challenge that requires supporting each other and fighting together to achieve social justice.” 

Example of Collective Trauma

We saw this occur during the COVID-19 pandemic, where the pandemic itself was the collective trauma and how society functioned was deeply altered.

Furthermore, societal norms changed significantly, including the adoption of face masks, hand sanitizing, and extended isolation for vulnerable groups.

Intersectional Identities and Psychological Harm

Similarly, Content Moderators may face a collective traumatic experience, especially given the diversity of these professionals. They may review online harassment that targets specific identities, which can amplify psychological harm.

For example, a Content Moderator who identifies as non-binary is likely to encounter bullying and trolling as part of their job, which may exacerbate the psychological impact due to their personal connection to the content.

Content Moderators’ intersectional identities must therefore be carefully considered to mitigate the risk of psychological harm when they review online harassment.

Platforms and Groups Most Affected by Hate Speech

The UNESCO-Ipsos Survey on the Impact of Online Disinformation and Hate Speech (2023) highlights that “67% of internet users have encountered hate speech online (including 74% of those under 35). They overwhelmingly believe that hate speech is most prevalent on Facebook (58%), followed by TikTok (30%), X (18%), and Instagram (15%).” 

The survey also found that, according to respondents, LGBT+ individuals (33%), ethnic or racial minorities (28%), and women (18%) are the primary targets of online hate speech in their countries. Keep in mind that these figures vary significantly between countries.

These statistics suggest that Content Moderators who belong to minority or historically marginalized groups are likely to encounter more online harassment that reflects their own identities.

The Mental Health Challenges of Content Moderators

There is no doubt that Content Moderators moderating online harassment as part of their roles may experience mental health challenges, including acute secondary stress and vicarious trauma. 

Exposure to harmful content, such as graphic and traumatic material, is a key risk factor, and collective trauma may affect moderators based on their intersectional identities.

Zevo Health takes a tailored approach to our content moderation wellbeing services, including psycho-educational training on the psychological impacts of online harassment. Our training focuses on teaching Content Moderators effective coping skills to manage their psychological health. 

Speak to our Trust & Safety Solutions Directors to learn more about our services and how we can collaborate with you to protect your Content Moderator workforce.
