Recently, I had the privilege of attending the Trust and Safety Foundation (TSF) event, where four distinct sessions provided a platform for open, agenda-free discussions among content moderators, academics/researchers, and civil society experts. The primary aim was to foster an environment where participants spoke as individuals rather than as representatives of their organizations, generating valuable insights and surfacing the challenges faced in content moderation. In keeping with the Chatham House Rule, and as agreed during each session, this article does not identify any participants of the Research Collaborative, preserving confidentiality and anonymity.
As Clinical Director for Zevo Health, my role is to conduct research that informs the design, implementation, and evaluation of our proactive and reactive interventions for Content Moderators, Trust and Safety Teams, and organizations impacted by user-generated content. I was excited to hear from everyone and, unsurprisingly, learned a great deal from the experts at each session. Here are some of the highlights.
- The Moderators’ Perspective – The initial two sessions focused on moderators, both paid and unpaid. Moderators expressed a desire for in-depth research, specifically emphasizing the need for longitudinal studies on exposure to graphic content. They highlighted the importance of considering the impact of moderation on communities, content, and moderators themselves. Additionally, moderators called for research into the socio-demographics and lived experiences of those engaged in moderation work. Interventions and tools beyond the traditional removal, flagging, and blocking mechanisms were also explored, alongside concerns about access and the need for safe-practice guides, especially for unpaid moderators who often lack the training and support available to their paid counterparts.
- Academics/Researchers and Civil Society – In the third session, academics, researchers, and civil society participants delved into the challenges they face, emphasizing the need for increased collaboration across platforms and stakeholder groups. Key discussion points included overcoming obstacles such as limited opportunities for connection, non-disclosure agreements (NDAs), and budget constraints. Researchers expressed interest in exploring the relationship between human intelligence (HI) and artificial intelligence (AI), studying long-term exposure to egregious content, and investigating the impact of cognitive load on moderators. The varied nature of moderation across platforms, shaped by factors such as content type, platform maturity, and budget, sparked conversations about the need for comprehensive research in these areas.
- Bringing Everyone Together – The final session brought all parties together, offering moderators an opportunity to voice concerns about power dynamics with researchers. Moderators highlighted how these power dynamics influence their willingness to engage in research. For both sides to get what they need from research, there must be clear communication about a study’s goals, tangible outcomes, and projected timelines. Participants also stressed the need for transparent language, both technical and cultural, making it easier for moderators to understand and take part in ethical, validated research studies.
- Challenges in AI – The use of AI was a concern for everyone, but for different reasons. Some highlighted the limitations of AI models, particularly in languages with insufficient datasets or cultural nuance; others voiced concerns about biases within AI datasets. Moderators stressed that the relational nature of moderation, which relies on human communication about violations, cannot be replaced entirely by AI.
Conclusion: Building Bridges for Inclusive Research
The TSF’s four Research Collaborative sessions provided a unique space for diverse perspectives, shedding light on crucial research needs and challenges within the content moderation landscape. Bridging the gap between moderators, academics, and civil society, these sessions emphasized the importance of transparent communication, ethical research practices, and collaborative efforts to navigate the complex terrain of content moderation. As we move forward, the insights gained from these sessions should serve as a foundation for fostering positive change and advancing the conversation surrounding content moderation research.
We are planning to undertake new research in 2024 with a third-level university. I look forward to sharing more about that soon, including how you can get involved.
My team is also always seeking to enhance our research partnerships, so if you are a researcher, academic, or civil society expert interested in collaborating with Zevo Health on any of the topics below, we would love to start a conversation!
Topics of interest:
- How everything other than content impacts moderator psychological wellbeing (e.g., policy, productivity targets, tooling)
- Diversity, equity, and inclusion among Content Moderators: interventions that promote engagement while considering cultural factors, gender identities, religious affiliations, etc.
- The role of organizational culture in moderator psychological health and safety
- Benchmarking wellbeing in moderation teams through meaningful measurement