
Preventative Tooling for Content Moderator Wellbeing

By Michelle | May 31, 2024 (updated August 26, 2024)

Existing Tools for Content Moderator Wellbeing 

The online sphere is rife with harms, ranging from relatively innocuous spam to violent and egregious content. The most harmful content includes child sexual abuse and exploitation imagery, suicide and self-injury content, and terrorism and violent extremism. Human moderators are tasked with removing such harmful content daily, reviewing hundreds of thousands of pieces of audio-visual, text-based, and even live-streamed content across platforms. Given the proliferation of harmful online content and the potential psychological risks posed to human moderators, tooling systems have been put in place to protect them from overexposure to egregious materials.

Typically, companies partner with tooling providers that specialize in developing automated modifications to content that reduce the risk of harm to moderators. For audio-visual content, this may include automated grey scaling or blurring of imagery, reducing image or video size, and automatically muting audio. Text-based tooling may include profanity filtering, or automated detection of harassment and abuse that requires less human intervention. Moderation of live-streamed content is more challenging; however, automation through artificial intelligence and machine learning as a first line of review has helped, with human moderators acting as a second review mechanism.
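As a concrete illustration, the snippet below is a minimal sketch of the kind of automated image modification described above, written in Python and assuming the Pillow imaging library. The function name, blur radius, and thumbnail size are illustrative assumptions, not a reference to any particular vendor's tooling.

```python
# A minimal sketch of automated image softening for moderator review,
# assuming the Pillow library; all parameters are illustrative.
from PIL import Image, ImageFilter

def soften_for_review(path: str, blur_radius: int = 8) -> Image.Image:
    """Return a greyscaled, blurred, downsized copy of an image for review."""
    img = Image.open(path)
    img = img.convert("L")                                    # grey scale to mute colour
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))   # soften graphic detail
    img.thumbnail((480, 480))                                 # shrink to limit visual impact
    return img
```

In practice, a moderator-facing console would expose a way to reverse each of these steps, which becomes relevant to the opt-out behaviour discussed later in this post.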

More advanced tooling for moderators may include risk or toxicity scoring. This is done through AI tools that can consider context and score harmful content in real-time, allowing teams to identify the most severe items for immediate action and to quickly detect previously unknown violations of a platform's community guidelines or terms of service.
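The sketch below shows one way such real-time scores might be used to triage a review queue. The `score_toxicity` function is a hypothetical stand-in for whatever classifier a platform uses, and the urgency threshold is an assumed value, not a recommendation.

```python
# A minimal sketch of risk-score-based triage; the classifier and threshold
# are hypothetical placeholders, not a specific platform's system.
import heapq

URGENT_THRESHOLD = 0.9  # assumed cut-off for immediate action

def score_toxicity(text: str) -> float:
    """Placeholder for a real-time model returning a risk score in [0, 1]."""
    raise NotImplementedError("Plug in the platform's own classifier here.")

def triage(items: list[str]) -> tuple[list[str], list[tuple[float, str]]]:
    """Split content into items needing immediate action and a severity-ordered queue."""
    urgent, queue = [], []
    for text in items:
        score = score_toxicity(text)
        if score >= URGENT_THRESHOLD:
            urgent.append(text)                    # route straight for immediate action
        else:
            heapq.heappush(queue, (-score, text))  # highest-risk items reviewed first
    return urgent, queue
```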

While all these tools and increased automation reduce the risk of harm to moderators, is there more that can be done? 

Why Tooling Falls Short 

While tooling and automation have indeed reduced the risk of harm to moderators over the years, there remain reasons why tooling falls short of meeting the needs of this population of employees. The industry hears time and time again that moderators are under extreme pressure to remove violating content as quickly and accurately as possible. This results in moderators opting out of protective tooling options like image blurring and audio muting because these options slow down their productivity. Although opting in has benefits, including reducing the shock of being suddenly exposed to egregious materials throughout the day, which can activate the fight, flight, freeze/fawn response, many moderators are likely to regularly opt out of these options to maintain their productivity.

Secondly, in our experience, moderators have cited that these tools are impractical because they hinder the ability to make accurate decisions. In many cases, moderators will switch off the tool so they can view the material in full, understand context and nuance, and ensure accuracy when content is violating. For example, if a moderator is reviewing a piece of content tagged as child sexual abuse and the tooling system has automatically blurred the visual, the moderator may still need to unblur it to verify the age of the child, particularly if the abuse is being perpetrated against an older adolescent.

Furthermore, moderators who are tenured and have been exposed to egregious content for long periods may be susceptible to desensitization. In psychology, desensitization is a process whereby individuals experience a diminished emotional response to aversive, negative, or even positive stimuli after repeated exposure. This may also be known as habituation. In these cases, moderators who experience a diminished emotional response to egregious content may opt out of protective tooling options because they feel they no longer need them. 

If these protective tooling options are under-utilized by moderators for reasons relating to productivity, accuracy, and desensitization, this poses the question of what types of tooling would better suit their needs to safeguard their psychological health and wellbeing. 

The Future of Preventative Tooling 

While it is imperative that protective tooling measures such as grey scale, audio muting, profanity filtering, and automation remain in place for moderators, the reasons these are under-utilized present a strong case for consideration of different tooling safeguards. 

We have already seen some companies tackling this through technology solutions such as integrating Tetris-like games into the wellness suite of moderators’ systems or sending out wellness break reminders throughout the day. However, these tools rely on moderators’ individual ability to notice when their sympathetic nervous systems are being activated, inducing an FFF(F) response. They also rely on the organizational structure to allow time for moderators to pause during work and engage in these activities, which can be challenging when productivity and quality metrics are key to keeping a platform safe. 
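For context, the sketch below shows roughly how a fixed-interval wellness-break reminder of the kind described above might work; the cadence and message are assumptions, and a real tool would integrate with the moderation console rather than print to a terminal. The critique above still applies: a timer like this leaves it to the moderator to notice and act on their own stress response.

```python
# A minimal sketch of a fixed-interval wellness-break reminder; the interval
# and message are illustrative assumptions.
import threading

REMINDER_INTERVAL_SECONDS = 45 * 60  # assumed cadence between break prompts

def send_break_reminder() -> None:
    """Nudge the moderator to pause, then schedule the next reminder."""
    print("Time for a short wellness break away from the review queue.")
    threading.Timer(REMINDER_INTERVAL_SECONDS, send_break_reminder).start()

# send_break_reminder()  # start the reminder loop at the beginning of a shift
```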

Rather than placing the onus on individual moderators, we advocate for a holistic approach that considers all factors of the moderation role and their impact on moderators’ psychological health. This includes not only exposure to egregious content but also systemic challenges such as team dynamics, leadership compassion, psychological safety in the workplace, workforce management, poor and ineffective tooling, and more.

The future of tooling should be preventative by nature, like existing tooling, but should focus heavily on ensuring moderators are given the agency and autonomy to utilize the tools effectively.

The Need for Cross-Industry Partnership 

We strongly believe in the need for cross-industry and professional partnership to enhance tooling for content moderation teams’ wellbeing. Key stakeholders need to be involved to ensure that these tools meet the needs of moderators, ensure their engagement with these tools, and address existing gaps. We believe that mental health professionals, researchers, product teams, technology experts, and Content Moderators should come together to tackle this issue. What might this look like? 

  1. Data science that establishes why moderators under-utilize existing tools and what their specific needs are for future tooling 
  2. Product development, led by a “safety by design” mindset, that deeply considers point 1 
  3. Exploration of existing technology solutions that can accurately measure acute stress in real-time without infringing upon data privacy 
  4. Development and implementation of new tooling features and a stepped-care approach to intervention (a minimal illustration follows this list). 
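To make point 4 more concrete, the sketch below shows how a stepped-care response might be keyed off a real-time acute-stress signal of the kind described in point 3. The `StressReading` structure, the thresholds, and the intervention names are all illustrative assumptions rather than a proposed standard.

```python
# A minimal sketch of a stepped-care response keyed off an assumed real-time
# stress signal; thresholds and interventions are illustrative only.
from dataclasses import dataclass

@dataclass
class StressReading:
    moderator_id: str
    level: float  # assumed normalized acute-stress signal in [0, 1]

def stepped_care_response(reading: StressReading) -> str:
    """Map a stress reading to an escalating, least-intrusive-first intervention."""
    if reading.level < 0.3:
        return "no action"                       # continue as normal
    if reading.level < 0.6:
        return "suggest short wellness break"    # low-intensity, self-directed step
    if reading.level < 0.85:
        return "rotate to lower-severity queue"  # workflow-level adjustment
    return "offer same-day session with wellbeing professional"  # highest step
```

The design choice worth noting is that the lower steps preserve moderator agency and autonomy, while the higher steps shift responsibility onto the workflow and the organization rather than the individual.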

Conclusion 

While existing tooling systems have made significant strides in protecting Content Moderators from harmful online materials, there are clear shortcomings that must be addressed to safeguard their psychological wellbeing effectively. Despite the implementation of features such as automated content modification and risk scoring, moderators often opt out of these protective measures due to concerns about productivity and accuracy, as well as the effects of desensitization.

Moving forward, it is crucial to adopt a more holistic approach to moderator wellbeing that not only addresses exposure to harmful content but also considers systemic challenges within the moderation environment. This includes fostering a culture of psychological safety, promoting leadership compassion, and providing moderators with the agency to effectively utilize protective tools. 

Furthermore, collaboration across industries and disciplines is essential to enhance existing tooling and develop new features that better meet the needs of content moderation teams. By involving mental health professionals, researchers, product teams, technology experts, and moderators themselves, we can ensure that future tooling solutions prioritize moderator wellbeing while effectively mitigating the risks associated with content moderation. 

Ultimately, the future of preventative tooling lies in a collaborative effort that prioritizes the psychological health and autonomy of Content Moderators, ensuring a safer and more sustainable online environment for all. 
