Information
Join us for an insightful discussion as we delve into the intricate relationship between policy, content moderation, and the overall wellbeing of content moderators. Zevo Health's own Health and Wellbeing Director, Dr. Michelle Teo, is joined by Suzannah Fischer, a distinguished expert in policy development within trust and safety.
Suzannah brings her extensive experience in trust and safety, having been deeply involved since 2017, managing global policy organizations for leading industry vendors in the content moderation sphere. Her expertise spans various domains, including fraud prevention, child safety, and e-commerce policies. She offers a unique firsthand perspective on how policy decisions impact the mental and emotional health of content moderators.
Summary points of discussion:
- Understanding the Challenge: Explore the unique stressors faced by content moderators when implementing evolving policies.
- Coping with Cognitive Overload: Discover how moderators manage the constant influx of policy changes and its impact on their mental well-being.
- Policy Teams’ Role: Learn about proactive steps that policy teams can take to prioritize the wellbeing of content moderators.
- Scheduling and Frequency of Updates: Find out how scheduling adjustments and the frequency of policy updates can make a difference in reducing moderator overwhelm.
- Empowering Moderators: Understand the importance of providing moderators with the necessary support and feedback mechanisms.
Speaker 1
So welcome everyone to our webinar today, where we will be exploring how policy impacts content moderator wellbeing. I'm Dr. Michelle Teo, the Health and Wellbeing Director for Zevo Health, and I'm thrilled to be joined today by Suzannah Fischer, who is an expert in policy development in trust and safety.
So Suzannah has been working in trust and safety since 2017, running global policy orgs for multiple industry-leading vendors in the content moderation space. She has worked on policies related to fraud, child safety, e-commerce and more, and she's experienced firsthand how the policy function can impact content moderator wellbeing.
Speaker 2
Thanks so much for the introduction. I’m really excited to get to join you to talk about policy, my favorite thing.
Speaker 1
Wonderful. So if we could maybe just start by sharing even from a very high level overview, what we mean when we say policy development in the trust and safety industry.
Speaker 2
Sure, absolutely. So it seems like it would be this extremely complicated process. It sounds kind of scary if you’re not familiar with it. But ultimately, from trust and safety, when we’re talking about policy development, we’re talking about setting the rules of engagement that are specific to an individual space. We see this happen in the real world all the time.
We just don’t think about it as like a policy development. So a perfect example is we all know how we’re supposed to behave when we go into a library, right? We know that it’s a shared public space and that it’s supposed to be quiet. Trust and safety policy development is doing that same thing individualized for specific spaces.
So for example, if you work in e-commerce, part of your rules of engagement on your platform might be that you are not allowed to sell products that are illegal. If you work in a social space, part of your rules of engagement and your policy development may be putting forth a rule about how we cannot harass other members of a community.
So that’s really what policy development is, is setting those rules. It’s really interesting because it kind of involves two different sets of rulemaking. The first is that we’re outlining the idealistic behavior within our space. In a perfect world, what kind of behavior would we see from our community? How would they interact? How would an ideal social space work, or a commerce space, or a gaming space?
On the other hand, we know that it’s not an ideal world and people do not always behave in the ways that we wish they would. So the second part of the policy development is responding to platform abuse and crafting policies around the different ways that we see people maybe being sort of like bad actors and bringing things into our communities that we don’t want to see.
This looks different for every platform because every platform is different. It creates this balance of trade-offs. So some platforms may have a really high value on freedom of speech where anything goes because that’s their primary value. On the other end of the spectrum, the value may be placed a little bit higher on community safety or social responsibility.
So you see fewer things allowed on the platform. That changes from space to space. That’s why you see some places with rules that don’t apply in other places. And because it’s this living and breathing group of users who are in the community, this policy development is an iterative process that happens over and over and over again. It’s almost never one and done.
There’s always something new to be thinking about of what’s going on in the platforms.
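As a rough illustration of what those "rules of engagement specific to an individual space" might look like once written down, here is a minimal sketch in Python. The platform types, rule categories, and labels are hypothetical, invented for this example, and are not any vendor's actual policy schema.

```python
# Hypothetical sketch: each platform encodes its own rules of engagement,
# reflecting its own trade-offs between openness and community safety.
ECOMMERCE_RULES = {
    "prohibited_listings": {"illegal_goods", "counterfeit_items"},
}

SOCIAL_RULES = {
    "prohibited_behaviour": {"harassment", "hate_speech"},
}


def violates_policy(content_labels: set, rules: dict) -> bool:
    """Return True if any label on a piece of content falls into a prohibited category."""
    prohibited = set()
    for categories in rules.values():
        prohibited |= categories
    return bool(content_labels & prohibited)


# The same labels can be a violation on one platform and irrelevant on another.
print(violates_policy({"counterfeit_items"}, ECOMMERCE_RULES))  # True
print(violates_policy({"counterfeit_items"}, SOCIAL_RULES))     # False
```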
Speaker 1
Yeah, and look, I think that’s, that’s exactly it. You know, part of the conversation that we’re having today is just really helping people understand how that iterative process impacts content moderators who are the people responsible for enforcing those policies. So I guess in that vein, let’s dive into our first discussion point.
As a sort of well being service provider to content moderators, we often hear in conversations that policy impacts the well being of the content moderators in various different ways. When we were sort of prepping for this webinar, we discussed a number of areas where policy development might have that impact on content moderators.
So I wonder if you could share your thoughts with our audience based on your experience, what are maybe some of the key stressors facing content moderators in relation to enforcing that policy.
Speaker 2
Absolutely. So I’ll tell you up front that the answer might be a little bit surprising, because what I’m not going to talk about is the nature of the content that they’re working on. I have to mention it, or else people will obviously think like, well, obviously there’s content involved. And that is true. But that is the most common thing that people think about.
So for the purposes of this discussion, like I want to talk about things that people might not think about as commonly. So we acknowledge and we do not diminish that content can have an impact on people. But let’s talk about some of the alternative factors that don’t come up in discussion quite as frequently.
So one of those things, I touched on it in our overview of policy development, that it’s an iterative process. And because that’s the case, that means that there are updates that get made. The policy lives, breathes, changes. And one of the things that we don’t think about quite as often, but that does have an impact on reviewers, is frequency of updates.
We know that things happen and we need to push updates out quickly. But when they’re too close together, it creates a sense of instability. So people don’t have the opportunity to really absorb and understand changes that are being made before they’re hit with a new set of changes.
So it kind of makes it hard to overcome the learning curve that is automatic with any sort of new information that a person takes in. When they’re too far apart, it seems like those updates to policies kind of like stack up on top of each other. And when we wait too long, we then see these huge updates that create a cognitive overload for the information that people are trying to learn.
So there really is a sweet spot. I can’t tell you exactly what that is. It changes for different organizations, just like policies change. But it is definitely something that I’ve seen impact moderators where they feel unsure, they feel uncertain.
And if there’s too much change too frequently or too far apart, they start to doubt their own abilities to actually apply the policies, which is what they do every day. So we want to make sure that we can think about that when we’re looking at this because we don’t want that extra level of stress on them.
Speaker 1
Yeah.
Speaker 2
Another thing that we don’t talk about all the time about how it affects the moderators, because we do talk about how it exists, are gray areas within the policies. We have tons of conversations about how hard it is to solve for certain problems, and how that creates some areas where maybe there’s some ambiguity or multiple interpretations.
And when we talk about it, we think about how it applies to the content. But we don’t often think about how that affects the moderators. So there’s stress that’s involved with a lack of confidence in being able to take a decision. And that stress gets compounded because we think about how are moderators measured? How do we know if they’re doing a good job?
It’s whether or not they take the right action. If you enter a situation where there’s more than one right action, or there’s no right action, people understand that this is something that will show up in their KPIs and how they are measured. And it creates this job performance stress, because there’s an inherent risk that reviewers won’t agree on what the correct answer is.
Speaker 1
Hm.
Speaker 2
So that’s something to think about too: actually constructing the policies has these downstream impacts. And the last thing I’ll talk about is ethical dilemmas.
Obviously, there’s so much thought that goes into policy development about the impact it may have on users. Teams of experts are working on solving these sometimes really sensitive problems about what is or is not permitted in a certain space. But there are still times when maybe your moderator doesn’t necessarily agree with the policy.
In my personal experience working with tons of moderators, they do a great job of actioning according to the policy, not their feelings, but the feelings don’t just stop because the policy exists. So you might see moderators who feel stress because they left something on the platform, according to the policy, that they personally feel is harmful.
It causes stress knowing that something is out there that they could have prevented that they feel may harm other people, or just the cognitive dissonance that affects them where I know I have to do one thing, but I feel something else very strongly.
That’s a feeling that I think we can all sympathize or empathize with in certain scenarios, but not at the scale that content moderators may experience it. Content moderators action thousands of pieces of content within a week’s time frame, so the opportunities for this kind of stress come up quite a lot. And also, there’s this pressure that I feel is put on content moderators.
It’s inadvertent and it’s absolutely well-meaning, and I am guilty of doing this myself: telling them just how important their job is.
Speaker 1
And.
Speaker 2
We say to them that they’re the superheroes of the internet out here, like Batman cleaning up Gotham City, except it’s cyberspace. And it’s meant to make people invested in their work, because a sense of purpose absolutely reduces burnout.
But when the actions you’re taking don’t line up with how your job has been messaged to you, and you feel like you’re not cleaning it up, or something is slipping through the cracks, that can be a really stressful experience.
Speaker 1
Yeah. And, you know, this reflects exactly what our wellbeing specialists often hear when they’re working with content moderators. They come to us saying, you know, it’s an action I took that the policy says was right, but personally I just feel a little bit uncertain about that action or that decision.
And I don’t really know how to cope with all the feelings that come up with that. And I think you’re absolutely right. The additional pressure of being regarded as the superheroes of the internet and keeping all of the users safe, you know, it’s a huge pressure to put on people. Absolutely.
So like you’re saying, you know, it’s not just the policy itself, in terms of making a decision, it’s that combination of all of these different factors, you know, all of the updates and the frequency of updates, how those are implemented, the cognitive overload that it takes to understand and absorb all of the changes, and then the performance pieces that go along with it.
And all of this is, you know, subject to human evaluation, human error, and sometimes that sort of really personal piece around the morals and the values. So in your expert opinion, what do you feel that policy development teams could be doing differently or perhaps better, knowing that these might be some of the impacts on content moderator well being?
Speaker 2
Sure, and so I want to preface this by saying when possible, right? Because all of these are practical tips that I think are really important, but we have to recognize that things move fast in the business that is policy development. There may be something that is a world event that happens in real time and there is no opportunity for lag or there is no opportunity to find certain examples.
It just has to be addressed in that moment. And we know that this happens all the time. There are things that are high priority and urgent. So please understand that all of these tips are when possible. But the first thing I think is really helpful is to just be extremely conscious of scheduling changes.
So for example, some things that might help is providing a roadmap, maybe not necessarily a super detailed plan with all of the individual details of each policy change, but a high-level view. Maybe it’s quarterly, or even by the half-year, but to say, you know, within this quarter or this half, we’re looking at making changes to the following policies within the core of our documentation.
We’re thinking about or we’re planning to change, for example, like bullying and hate speech. You know these changes are coming. It seems like it’s not a lot just to name the policies, but having any kind of heads up can be extremely helpful. It helps people prepare. We know that part of managing change well is like the mental preparation that comes before the change occurs.
So even having a little bit of heads up can alleviate some of the uncertainty of what’s going to happen. Now, if that’s not possible, if you can’t plan that far in advance or provide something like that, then thinking about the frequency of updates I think is really helpful too.
So for example, if you’re updating your policies like once a year, chances are it will be a little bit overwhelming because so much happens in a year. Policy teams work all year, and all those changes being dumped in at once can be overwhelming. That’s a lot of change to take in all at the same time.
On the other hand, if you’re updating your policy every week, then there’s really no opportunity for anyone to catch up and feel stabilized in their role. They’re constantly having to question something that they already knew and it just compounds. It just keeps adding up over time.
I cannot say what is exactly the right way to do it, but putting really conscious effort into when we’re changing things makes a difference. Also, lumping similar changes together.
So for example, if you’re working on a policy around illegal or regulated goods or services, if you know that you’re going to change four different things about this policy, it makes sense to try and launch all those changes at the same time so that you don’t have a change here, a change in a month, another change in another month. People learn well when they’re learning similar types of content.
So if you can categorize changes and make an attempt at least to group them together by theme, it creates a little less cognitive stress and a little smaller of a learning curve.
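To make the grouping idea a bit more concrete, here is a small, hypothetical sketch of batching pending policy changes by theme so related updates can launch together rather than trickling out one at a time. The change records and themes are invented for illustration and are not any real policy team's backlog.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class PolicyChange:
    policy_area: str  # e.g. "regulated_goods", "bullying", "hate_speech"
    summary: str


# Hypothetical backlog of pending changes a policy team has accumulated.
pending = [
    PolicyChange("regulated_goods", "Add a new restricted product category"),
    PolicyChange("bullying", "Clarify the repeated-contact threshold"),
    PolicyChange("regulated_goods", "Update the age-verification requirement"),
    PolicyChange("regulated_goods", "Expand regional restrictions"),
]


def group_by_theme(changes):
    """Group pending changes so each themed batch can launch as a single update."""
    batches = defaultdict(list)
    for change in changes:
        batches[change.policy_area].append(change)
    return batches


for theme, batch in group_by_theme(pending).items():
    print(f"{theme}: {len(batch)} change(s) to launch together")
```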
Speaker 2
So all of those things, again, if possible, sometimes it’s not. You may go through and look at how a platform does things and determine that that’s exactly right. Again, there’s no specific right or wrong, but it is important to think about why we do the things we do, other than just perhaps it’s the way we’ve always done it or it seems like a convenient workflow.
There are those downstream changes. I think aside from scheduling, another thing that I’ve heard a lot in my work with content moderators is this resounding need for support when these changes launch.
So an explanation of what caused this particular change, or detailed documentation that’s clear and easy to implement and avoids those gray areas, providing training materials whenever possible that have multiple examples, or even, a step further, practice content sets, things where people can go through and apply these changes
to new content without it being in a live situation where they go through and make sure they understand. And if a mistake is made, it’s not causing harm to the actual platform. Just like any other change or new thing that any of us learn, we like to have details, we like to have practice, we like to have examples. And then also a quick resolution workflow.
So there’s always going to be questions when things change, when things are new, people are going to have questions. And so having any kind of sort of feedback framework or workflow that allows people to ask those questions and get clarity quickly.
The sooner we can clarify a misunderstanding that a content moderator might have, the less opportunity there is for it to be mishandled in live content that’s affecting the platform.
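One possible shape for that quick-resolution workflow is sketched below: a tiny clarification queue where a moderator's question about a new policy is logged and answered. The fields and flow here are assumptions made for illustration, not a description of any real tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Clarification:
    moderator_id: str
    policy_area: str
    question: str
    asked_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    answer: Optional[str] = None


queue = []


def ask(moderator_id, policy_area, question):
    """Log a moderator's question so the policy team can resolve it quickly."""
    item = Clarification(moderator_id, policy_area, question)
    queue.append(item)
    return item


def resolve(item, answer):
    """Attach an answer; in practice the clarification would be shared with the whole team."""
    item.answer = answer


q = ask("mod-42", "hate_speech", "Does the new rule cover coded slurs in usernames?")
resolve(q, "Yes, usernames are in scope; see the updated examples section.")
print(q.answer)
```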
Speaker 1
Yeah, you know, I love all of those thoughts and ideas. And as you’re saying, we know, sometimes these changes have to come in very quickly.
But the more that we can plan ahead and give people transparency, ensure that the content moderators know what’s coming, that, like you said, that mental preparation will just aid in alleviating any of that stress or anxiety that comes with knowing policy changes are going to happen.
But at least they sort of have, you know, a vision in front of them, and they’re not just reacting in the moment. You know, I feel very much, in the conversations that we have had, that policy teams could be supporting content moderators more directly through some of those avenues. And there are naturally going to be policy teams that are more acutely aware of these challenges.
And they’ve probably already put steps in place to address them. But then there may be policy teams who are newer, or even platforms that are newer. And therefore, some of these issues and solutions may not necessarily be at the forefront of their minds. So I love that you’ve shared that.
And hopefully, people can take away some of those tips, particularly if they’re new to the trust and safety space. So maybe just to finish off, is there maybe one thing that you could leave with our audience, a final takeaway from this conversation?
Speaker 2
Absolutely. So if you remember one thing I’ve said, it’s the importance of anticipating the downstream impacts of policy decisions. A lot of times the main focus, the primary focus, is users. How does this affect users? How does this affect content or metrics? How does this affect a report rate or an action rate or things like that?
But a key stakeholder in content moderation is the moderator themselves. It’s the content, it’s the way it’s delivered, it’s how the decisions about policy changes are made: they all impact the moderator. So when we’re thinking about everyone and how they’re impacted, think about the moderators too. These decisions are super important. We know they’re important.
We know that policy is an absolutely crucial function of trust and safety, but we kind of forget who all is being impacted. So when thinking about them, think about the moderators as another one of your stakeholders.
Speaker 1
That’s brilliant. I think just echoing that thought, the more we think about how policy development impacts content moderators, the more that we can do to just safeguard them. This is a very specific workplace stressor in the trust and safety industry that you don’t necessarily see in other industries.
I think it requires a lot more of that open communication between policy teams and content moderators, not just necessarily providing them the training and the updates so that they can do their jobs, but also just general conversations like we’re having. How did a recent update impact how you did your work and how did it maybe impact your wellbeing?
Then also, I think even just as a service provider, companies like Zevo Health, for us to understand that there’s a wider ecosystem at play here, it’s not just the moderator and their wellbeing, it’s also the policy teams. How can we implement solutions that support all of those key stakeholders that are part of that ecosystem and that includes the policy teams?
Speaker 2
I’m sorry.
Speaker 1
Thank you so much for joining me today and for giving our audience such great insights into policy development and the sort of impacts that it can have on the content moderators downstream and then a couple of tips and tricks and maybe some practical solutions that they could implement.
I think in this short time we’ve probably covered a lot for our audience and given them great food for thought, ensuring that they have some tools that they can use as next steps to support their content moderator teams.
So if you are looking to hire in the trust and safety space, you can find Suzannah’s LinkedIn profile details in the description, and if you’d like to learn more about how Zevo Health can implement supports for your content moderation teams, you can find us on our socials or at our website. Thank you so much.