The Psychosocial Impact: Mental Health Considerations for Moderators

In this month’s insightful episode of Zevo Talks, Colm Lannon-Boran, Graduate Research Assistant at Zevo Health, joins our host, Dr. Michelle Teo, Clinical Director at Zevo Health, to discuss mental health considerations for Content Moderators.

Content Moderators are at the forefront of everything we see online, and there is no doubt that they can experience a strong psychosocial impact. This comes from many factors: working conditions, the demands of the job, and the content itself.

Takeaways:

  • Discover psychological impacts and trauma
  • Explore ethical considerations and societal impacts
  • Explore challenges in accessing support

You can also listen here:

Spotify

SoundCloud

YouTube

Speaker 1

Welcome, everyone, back to Zevo Talks. I’m Dr. Michelle Teo, and I am your host for this podcast as the Clinical Director at Zevo Health. And I’m thrilled today to have Colm Lannon-Boran joining us.

 

Speaker 1

He is our Graduate Research Assistant with Zevo Health. Colm is currently completing a PhD in cognitive psychology. He has a certificate in mindfulness-based stress reduction, as well as a diploma in cognitive behavioral therapy.

 

Speaker 1

And alongside that, he brings a wealth of experience in applied research methods, in teaching and lecturing psychology students, as well as clinical training in neuropsychological assessment. So you are very welcome, Colm.

 

Speaker 2

Thank you, Michelle. Delighted to be joining you today.

 

Speaker 1

Great! So the topic that we’re exploring today is the psychosocial impact on the mental health of content moderators, so I’m going to jump in with my very first question for you, Colm. What are some of the most common psychological symptoms that content moderators experience due to their work?

 

Speaker 2

Well, as you know, Michelle, I’ve been doing a lot of research in the last few weeks and months around this area, and the main impact, the most concerning one as well, seems to be vicarious trauma: the idea of being exposed to other people’s trauma and then experiencing symptoms of trauma and PTSD due to that exposure.

 

Speaker 2

So that’s really the biggest, the most significant impact that there seems to be. There’s also the recurring theme of compassion fatigue, as is seen in healthcare professionals and primary carers as well.

 

Speaker 2

It’s sort of just running out of compassion, and then it’s harder to share that with your friends and your family, and to have self-compassion as well. So that’s quite concerning.

 

Speaker 2

And burnout is the other big one too, probably related to the vicarious trauma and the long hours, the constant cognitive engagement with the material. It’s a very demanding role. There are a lot of concerns about burnout, and in some of the papers that I’ve been reading around content moderation, there aren’t too many, but there’s a great paper by Ruth Spence.

 

Speaker 2

Steiger has a great paper as well. Ruth Spence talks about how vicarious trauma and burnout especially are very cumulative and build up slowly over time. So it’s not an instant thing.

 

Speaker 2

A lot of the time it can creep up on people, and if there’s a lack of awareness around it, it can be really hard to catch. So those are the main impacts, and the symptoms that come out of them can present very differently from person to person: anxiety, depression, low mood, increased loneliness, sleep difficulties.

 

Speaker 2

The impact that is had depends on the individual. Some people cope quite well, but in the majority of cases, at some point in time, due to the accumulation, there are some concerning impacts.

 

Speaker 1

Yeah, absolutely. And, you know, it’s a lot of what we see when we’re engaging with content moderators, whether that’s in like one-to-one sessions, or, you know, through the kind of peer support groups that we offer, or even the training and education that we offer, right?

 

Speaker 1

It’s like you’re saying, those cumulative effects that people maybe don’t really realize are impacting them. And then all of a sudden, it’s like, oh, I’m absolutely exhausted. I have no motivation to go to work.

 

Speaker 1

You know, I am finding it really hard to get through my tickets during the day. I don’t really want to socialize with my colleagues. And those are all kind of like the key indicators that someone’s feeling burnt out, or experiencing compassion fatigue, or even approaching vicarious trauma.

 

Speaker 1

So like, I wonder from your perspective in the research that you’ve seen, like, how might some of these impacts be mitigated or prevented?

 

Speaker 2

Yeah, so there’s not too much. There isn’t really, as we know, any literature that’s directly on what we can do for content moderators. A lot of it is taken from related fields, related industries. And there are only two or three papers, maybe, in the last few years that have actually been studies analyzing data directly from content moderators.

 

Speaker 2

And it does seem, from those studies that have been done on content moderators and also from related industries, like journalists who may experience vicarious trauma, or emergency service workers, that there are a few levels of intervention.

 

Speaker 2

There’s primary intervention, which would be before you begin the work, you know, building resiliency skills and mental health awareness, even just being aware of the impact that can be had. Like we said, especially for content moderators, it’s cumulative.

 

Speaker 2

So if you’re not aware that you could have these impacts, these symptoms, you’re less likely to notice them in the first place and go seek the support that you need. That’s primary intervention, which Steiger’s 2021 paper highlights as the most important for mitigating risk and the most effective level of intervention.

 

Speaker 2

And then there’s secondary intervention, which would be after the onset of symptoms: probably building further resiliency, maybe some mindfulness skills or cognitive behavioral therapy skills to cope.

 

Speaker 2

That’s not going to be as effective as the primary intervention, but obviously will be needed in many cases. And then finally, tertiary intervention, which would be for severe cases, or where the secondary intervention doesn’t work and therapy or some sort of clinical intervention is needed.

 

Speaker 2

Those seem to be the three levels of intervention. Something else I’ve noticed that comes up again and again is the concept of self-care, and self-caring in the right way, where you’re caring for your mental health and your mental wellbeing.

 

Speaker 2

And it seems very simple, but some people might perceive a particular activity as self-care when it’s not really doing anything for their mental health, their cognitive health, their psychological health.

 

Speaker 2

It might just be pure distraction, which has its place, but a lot of the time you want some self-care ritual that is specifically for caring for your mental health, for your cognitive health, for your brain, you know, for your body.

 

Speaker 2

So that seems to be recurring quite a bit, and it seems to be quite an important element in mental health, especially for content moderators.

 

Speaker 1

Yeah. And I think the important thing that people maybe don’t realize is that, as much as the content itself can really impact a content moderator, when we’re talking about psychosocial risks, we’re also talking about everything that surrounds them in their work environment.

 

Speaker 1

So when you’re saying things like their physical health and their cognitive health, like, you know, we’ve spoken a lot about like that sense of cognitive dissonance sometimes in the work that they’re doing.

 

Speaker 1

So could you even speak a little bit about that just in terms of like the elements that aren’t necessarily the content itself that might be impacting people?

 

Speaker 2

Yeah, I think the cognitive dissonance is a sneaky one. You know, people sort of think about the vicarious trauma, they think about the burnout, and they are significant and maybe have more of an obvious impact on mental health.

 

Speaker 2

But cognitive dissonance is definitely a big issue for content moderators, because they have to look at this content. We see a lot of trauma and burnout coming from very egregious content like violence, abuse, assault.

 

Speaker 2

You might get more cognitive dissonance from things like disinformation, misinformation, political rhetoric, world news, free speech and ethical dilemmas, areas like this which challenge beliefs. That’s what cognitive dissonance is: the content moderator has to hold an idea or a value in mind that might clash with their own beliefs or values,

 

Speaker 2

and that can cause intrusive memories that linger in your mind. And, you know, it can be difficult to grapple with, and then you’re doing that multiple times a day, over and over again. So for that sort of thing, we’ve talked a lot as well about transition rituals after work, switching-off rituals to go from work to home life; that’s so, so important.

 

Speaker 2

And that’s for all content moderation, but especially if you’re suffering from cognitive dissonance, you need that reset for your thinking and for your brain.

 

Speaker 1

Yeah. And, you know, one of the big markers of vicarious trauma, I suppose, is that shift in your worldview, where it becomes quite negative. You kind of have that perception that everyone out there might be a bad actor, that there aren’t a lot of good people left in the world.

 

Speaker 1

And we used to say this to moderators all the time: they’re sitting in front of the screen and they’re seeing, like, 80 percent of all of the terrible things that are happening in the world. And it can really narrow your focus so that you have this kind of negativity bias, whereas there are so many more people outside of that online sphere who are actually, you know,

 

Speaker 1

just living their everyday lives, doing things as normal people, not doing anything to harm other people. But it’s really easy, I think, to get siloed and look only at the negative when you’re faced with it every single day.

 

Speaker 1

So, you know, I think it’s important for people to realize that as much as content moderators are aware that it’s not all black and white in the world and that, you know, not everything that they’re seeing is terrible, that it’s just the nature of the job when they’re constantly exposed to it, that there might be a shift in that kind of worldview that they have.

 

Speaker 1

And that experience of cognitive dissonance is exactly that: the discomfort of feeling like the things that I believe are true, or the things that are important to me, or the belief systems that I have, are no longer true based on what I’m seeing in front of me.

 

Speaker 2

Yeah, yeah, yeah. And they’re going to be, you know, exposed to the most extreme ideas I would wager as well, you know, so it’s definitely disproportionate to the ideas that are actually out there in society.

 

Speaker 2

And that is something that shows up again and again in the literature that we do have. In Ruth Spence’s paper, there are negative changes in worldview, I think, across the moderators that they collected data from. Here’s a line: more general changes in cognition were evident, with the exposure to disturbing content leading to the development of a more negative worldview.

 

Speaker 2

That’s directly from her paper. And, you know, you think about hypervigilance, hyperarousal, the fight or flight response being activated a lot. This can just transfer over to your everyday life, to being overly concerned about your loved ones’ safety, because you’re witnessing violent content all day and you’re afraid that they’re going to go out and get mugged or robbed,

 

Speaker 2

because you’ve seen that 100 times in the last week. Or, you know, if you’re engaging with child exploitation material: there are some excerpts as well from Ruth Spence’s research where a content moderator found it difficult to be around their niece or their nephew, because they were concerned for their safety. And maybe a lesser example from that paper as well, I think it was someone who had a pet dog,

 

Speaker 2

and they loved their dog, and they couldn’t deal with animal cruelty content. And then when they were out and about as well, it’s like you have this hypervigilance towards the safety of, you know, other humans or even animals.

 

Speaker 2

So that’s a negative change in worldview, and it’s not pleasant to be constantly on that sort of high alert. So I don’t know, what’s the answer to that? How can that be mitigated? Do you think psychoeducation, awareness, cognitive restructuring, refocusing on the fact that you’re doing good, that you’re protecting people?

 

Speaker 2

I’m not sure. What do you think?

 

Speaker 1

Yeah, I mean, I guess one of the biggest pieces that we’ve learned over the last six years working with moderators is that there’s a lot of confidentiality and secrecy around the job, right?

 

Speaker 1

And so, you know, they may not be able to talk to friends who don’t work in content moderation about the work that they’re doing, and they may not be able to talk to family members, and maybe they also don’t want to, because they don’t want to expose other people to the harms that they’re seeing, even vicariously.

 

Speaker 1

So, you know, having things like peer support groups where they can sit together as content moderators, even with their management teams, or with their HR teams, or anyone who’s supporting them in the business, and having it facilitated by a mental health professional, allows them that kind of group therapy space to process everything that they’re viewing and to tackle some of those beliefs in that group setting,

 

Speaker 1

and to really, I think, just let it out. Because we know this with all psychological processes, right: the more that you hold it in, the more that you try to suppress it, the more that you try to minimize the impacts, the more it’s going to actually impact you in the long term.

 

Speaker 1

And so giving them that space where they can really air it all out, where, like you’re saying, if it’s part of a transition ritual at the end of the day, at the end of a shift to have like a decompression session, or like a closeout group, where they can just end their day to let it all out before they go home, you know, that can be really beneficial in helping people manage that kind of negative worldview and kind of reset.

 

Speaker 1

And like you’re saying, you know, do a little bit of that cognitive restructuring before they get home for the end of the day and need to go, you know, spend time with their children, spend time with their families, spend time with their friends.

 

Speaker 2

Absolutely. I think that’s so important for a role like this. You want to try and go home feeling good, and if you can have your debrief and reorient or reinterpret any sort of negative feelings or triggers that you’ve had over the day, that’s imperative to enjoying the rest of your day and to being present if you have a family or whatever.

 

Speaker 1

Absolutely. With seven years of industry experience, Zevo Health brings a wealth of expertise to the table, ensuring top-tier care at every stage of your moderators’ careers. Our network of licensed mental health professionals ensures that your moderators are in capable hands, equipped to handle even the most challenging situations they may encounter.

 

Speaker 1

From recruitment to leaver intervention, Zevo Health provides comprehensive support, personalized therapy sessions, and immediate care for distressing content encounters. But our commitment doesn’t stop there.

 

Speaker 1

We conduct regular check-ins and surveys to track changes in well-being, refining our services based on feedback and data-driven insights. Compliance and care are paramount to us. We stay ahead of global regulations and regularly update our clinical protocols to ensure the best possible care for your team.

 

Speaker 1

By partnering with Zevo Health, you’re not just investing in your moderators’ well-being. You’re safeguarding your brand integrity and reputation. Our standards create a safe, respectful environment for moderators, reducing the risk of negative publicity and reflecting positively on your company.

 

Speaker 1

Our intuitive therapy management platform makes it easy for moderators to book and manage sessions with seamless integration for both online and in-person sessions. Real-time support is available anytime, anywhere, with access to mental health resources and flexible solutions that fit their demanding schedules.

 

Speaker 1

Zevo Health believes in continuous learning and engagement, offering a wealth of educational resources and fostering a supportive network of moderators empowered to thrive in their jobs. Join the growing number of companies who trust Zevo Health to support their content moderators and elevate their company culture of care.

 

Speaker 1

Contact us today to learn more about how we can tailor our services to meet your team’s unique needs. With Zevo Health, your moderators will feel valued, empowered, and ready to take on any challenge.

 

Speaker 1

ZevoHealth.com. So I suppose, you know, when we think about content moderators trying to access support, they often face significant barriers when seeking support from the platforms that employ them.

 

Speaker 1

So what are maybe some of the key challenges that they encounter and do you feel that there’s any policy changes that could help address these issues?

 

Speaker 2

Well, yeah, content moderation, as we know, is a fairly new role. It hasn’t been around for too long; maybe when Facebook first started up, or even MySpace or something, is when content moderation started.

 

Speaker 2

But the research on content moderators wouldn’t have started for quite a while after that. You know, companies’ policies can be even slower to follow any research that might come out. So I don’t think that current policies are specific enough for content moderators.

 

Speaker 2

They might be more general, for the company as a whole. I could be wrong; you have more knowledge than me around policy and content moderator policy, Michelle, a wealth more than me, I would say.

 

Speaker 2

But it does seem like they’re a little bit vague. They may be for the whole trust and safety area, or maybe for the whole organization, and they’re not specific to content moderators, who have a very unique role.

 

Speaker 2

So they need a policy and really clear guidelines for how they should seek support and what they should do in certain scenarios. And a lot of the feedback from the qualitative data that’s been collected indicates that those structures weren’t in place, at least when that data was collected.

 

Speaker 2

And, you know, moderators felt like they couldn’t seek support, though some of them did anyway, because it might reflect poorly on them: that the work is having these adverse effects, or that they’re not able to do their jobs. Which is not going to result in, you know, any sort of good mental health or mental well-being.

 

Speaker 2

And often, I think, non-disclosure agreements are signed. Maybe this prevents moderators from discussing or sharing any concerns that they have publicly, and limits transparency. And even if it’s not preventing moderators from seeking support, it’s going to put a question mark in their heads and maybe stop them from going on and doing that.

 

Speaker 2

Even if it doesn’t explicitly say whether they can or can’t.

 

Speaker 1

Yeah, I mean, I think even to the point you mentioned there: policies and processes, even on the HR side of a business, when you’re working with content moderators, do need to be very tailored to the work that they are doing.

 

Speaker 1

And, you know, you were talking about having primary, secondary, and tertiary levels of care. Does your mental health policy at work have a stepped care model as part of that for content moderators, where the wellbeing provider that you’re bringing in can maybe help manage the primary and secondary interventions?

 

Speaker 1

But when the tertiary intervention is needed, your wellbeing provider on site may not be able to provide that, so they need to find the right pathways to refer people externally. And I think we have seen a shift over the last couple of years where more mature organizations seem to have more robust policies in place, and they’re really working with us, particularly a lot of our customers.

 

Speaker 1

They really work hard with us to develop really robust mental health policies and stepped care pathways for their moderators, because they really care about their wellbeing. And what I’d like to see is the industry recognizing that as a whole: that every employer, no matter how big or small, whether you have two moderators or 1,200 moderators, has a policy in place and a stepped care model in place for their moderators.

 

Speaker 1

And that they’re told at the outset that, you know, this is what’s in place for you. This is how you access your primary interventions, your secondary interventions, your tertiary interventions, and that it’s not going to impact whether or not you keep this job.

 

Speaker 1

That if for some reason you are impacted by the work that you’re doing, we’re going to take care of you so that you can come back to this job and that you’re not worried about losing it.

 

Speaker 2

Yeah, yeah. And that’s definitely reflected in the literature that we have, you know; that’s how moderators feel. That’s what they want: a clear pathway. And, you know, I think something else that was recurring was how beneficial breaks can be, but also that moderators were sort of afraid to take breaks, because they often have very high accuracy quotas, and just high quotas in general for the amount of work they need to get done.

 

Speaker 2

So it sort of discourages breaks, and it can be a bit contradictory to say, you know, take a break if you need it, but if you take a break, you’re going to have lower productivity, and that’s not good. So even just being really transparent and clear, like: there will be no punishment or consequence for taking a break if you need it, if you felt triggered, if you have had, you know,

 

Speaker 2

an emotional response and just need a break.

 

Speaker 1

Yeah. And I think, you know, if we consider the wider ecosystem that surrounds moderators, right, that’s your HR function that is maybe helping to manage things when people are moving into a stepped care model.

 

Speaker 1

But then you’re also looking at management teams: how well are we training management teams to be that first line of defense for moderators, to notice when they are triggered, to know, even, this person has been working on child sexual abuse content for the last three weeks, do I need to rotate them out into a different workflow that’s going to be less harmful to their psychological well-being?

 

Speaker 1

Or are there people on my team who are just naturally a little bit more resilient, or who have learned to be a little bit more resilient, compared to others who might be newer to the organization? We see it all the time: the new hires coming in are always going to be impacted a little bit more than tenured agents, because at the end of the day tenured employees, tenured moderators, may actually be experiencing things like desensitization,

 

Speaker 1

which is a kind of longer term impact that hasn’t really been explored at all in any of the research.

 

Speaker 2

Yeah, yeah. Funnily enough, there is a line exactly on that in Ruth Spence’s paper, saying the adjustment of longer-term workers seems to be associated with the ability to suppress emotions, which may have longer-term consequences for social and mental wellbeing.

 

Speaker 2

Future research should investigate this. So it’s definitely needed; it’s been acknowledged by other researchers and certainly by us. Hopefully we can do something around that now and in the future.

 

Speaker 1

Absolutely. So I suppose we know that content moderation plays a very crucial role in shaping online discussions and experiences. We’ve talked about how it’s sometimes a more invisible kind of process and it’s perhaps even a little bit of an undervalued process.

 

Speaker 1

So when we’re thinking maybe about ethical considerations or societal impacts surrounding content moderation practices, particularly I guess like in relation to issues like bias, censorship, and the increasing use of AI, can you talk a little bit about that?

 

Speaker 2

Yeah, yeah. So there’s a little bit to unpack there. In terms of ethical considerations, I think different ethical considerations need to be made depending on geography, depending on culture; you can probably boil it down to individual differences if you want to keep going.

 

Speaker 2

But there definitely need to be different ethical considerations made; different material is going to have different effects on different cultures. That’s absolutely true.

 

Speaker 2

And you know, we’ve done a nice white paper on that, and I think it’s coming out soon, so hopefully that spreads awareness of it. And there’s also, of course, the need to discuss free speech when it comes to ethical considerations and societal impact.

 

Speaker 2

It’s a big topic of contention. We want people to be able to speak freely, but we also have to make sure that the internet is safe and isn’t going to harm people. So this is a huge task for content moderators.

 

Speaker 2

And yeah, you know, they’re going by policy, which might not be clear on whether content needs to be kept up or taken down, and they might be left to their own resources with an ethical dilemma like that.

 

Speaker 2

And, you know, they could be experiencing a great deal of cognitive dissonance in such a circumstance as well. And ethical dilemmas: I don’t know if there are workflows specifically for dealing with ethical dilemmas, but I would imagine if there are, then you run into problems like decision fatigue, because you’re just making these complex decisions over and over again, all day.

 

Speaker 2

And of course, whether you keep something up is itself an ethical dilemma; there’s going to be a moral implication whether you do or don’t keep up, you know, an idea, a political idea, or political rhetoric.

 

Speaker 2

And yeah, there are moral implications to taking it down, and there are moral implications to leaving it up. So.

 

Speaker 1

I mean, it even reminds me, I think the really obvious one, right, is around political rhetoric or anti-LGBTQ sentiments, things like that. And I suppose with the election cycles going on nationally at the moment, that’s a really big piece of it.

 

Speaker 1

But it was interesting because I once had a moderator who, and this was completely unexpected and I think people maybe don’t realize that this could present as an ethical dilemma for someone, but they were just working spam content.

 

Speaker 1

So every so often, like if you’re going on Instagram or if you’re maybe going on TikTok or whatever, you look in the comments and there’s like just people spamming the comments, right? And they were having a really hard time because there was one particular account that was continuously spamming people, but it wasn’t actually violating the company policy.

 

Speaker 1

And so they had no choice but to leave the comments up and they were really grappling with it because they were like, I know that this is not appropriate online behavior. I know it’s annoying for other users.

 

Speaker 1

I know it’s really frustrating, but I can’t do anything about it because it just doesn’t align with the policy. And it’s a very odd sort of like ethical dilemma to be faced with, like I want to take this down because I know it’s bad, but at the end of the day, I can’t because the policy says I can’t.

 

Speaker 1

And it’s a very sort of innocuous situation that you wouldn’t really think about as causing an ethical dilemma because when you think about political rhetoric, it’s really obvious that there are ethical dilemmas with that.

 

Speaker 1

But I think it’s really interesting to think about it from that perspective. It could come into any workflow, any abuse area, and you might not even realize it until you hear it from a moderator.

 

Speaker 2

Yeah, like, it’s really such a simple clash between the policy and the individual’s beliefs or values, you know, and there’s not great depth or complexity to it when you think about it, but at the same time, it is somewhat of an ethical dilemma, you know: do they keep that sort of thing up?

 

Speaker 2

Or do they take it down? And they’ve got no basis, I suppose, to take it down, although they know that it’s bothering people and feel it should come down. I hadn’t thought of such a situation before, you know.

 

Speaker 1

Yeah. But I know also, with things like beliefs, attitudes, and values that you’re talking about, you’ve mentioned the mere exposure effect before. So do you want to share a little bit about that?

 

Speaker 2

So the mere exposure effect is very simple. It’s just that the more you are exposed to something, the more it can influence or impact your beliefs or your attitudes or your underlying values. And I don’t know if dangerous is the right word, but it can certainly be impactful if you’re not aware of the mere exposure effect.

 

Speaker 2

I think for content moderators, hopefully a lot of the risk of their beliefs being impacted is mitigated by their awareness that this is the job: these are not my beliefs, this is clearly someone else’s.

 

Speaker 2

But there does need to be sort of reminders and psychoeducation around the mere exposure effect. I think that alone would reduce risk of any sort of major alterations of worldview or of underlying core beliefs.

 

Speaker 2

But I think the mere exposure effect is especially important in work like red teaming; we’ve been looking at AI red teaming recently. Red teaming is basically where you’re trying to trick the AI into thinking that you’re a bad actor on the internet, so that it produces egregious content.

 

Speaker 2

So that is certainly, I think, a more potent recipe for the mere exposure effect to actually have an impact than just reading or viewing content. While we’re on the mere exposure effect and how your belief systems can be altered, I think it’s important to touch on political issues again.

 

Speaker 2

There is some really good research out there that shows that political opinions and issues specifically cause an emotional reaction, an emotionally charged response at the level of the brain. Like we don’t have control over this, it’s automatic.

 

Speaker 2

So when people are presented with political ideas or issues, they have an emotionally charged response. And it’s gonna be linked to cognitive dissonance. It’s gonna be a larger response if it clashes more with the beliefs that they already have.

 

Speaker 2

There’s a great paper by, I think it’s pronounced Bakker, from 2021, around the hot cognition hypothesis. Maybe we can link it after this. It’s about the fact that political issues are emotionally charged and we don’t have control over this.

 

Speaker 2

So now you can imagine that if you are dealing with those issues over and over and over again and you’re having this emotionally charged response, however many times a day, like what impact is that gonna have?

 

Speaker 2

We don’t know, yeah. We really don’t know, but it could certainly have an impact on your worldview, and it could certainly cause burnout.

 

Speaker 1

Absolutely. Well, listen, very interesting points that we’ve talked about today. I think there’s a lot more that we could talk about as always when it comes to content moderation and the impacts on their psychological health.

 

Speaker 1

But I appreciate, you know, the amount of research that you’ve done, even just looking at the adjacent populations. As you said, there’s not a lot when it comes to research directly with content moderators, but there are a couple of researchers out there anyway who are putting things out there, which is always fantastic.

 

Speaker 1

Is there any sort of like last takeaway you might give to our listeners just in terms of the psychosocial impact or, you know, even about the research that you’ve looked at?

 

Speaker 2

Well, you know, I am coming from a research perspective here, so I don’t have real-life experience of content moderation. But from the research and the literature that we do have, I would strongly encourage content moderators to engage with services like ours, like Zevo.

 

Speaker 2

We know that there are ways to mitigate these risks. It’s about thinking of Zevo and similar supports and resources as a toolbox that you can open; you can decide which mental health support works for you.

 

Speaker 2

It’s not about forcing people to engage with mindfulness; if that’s not for you, it’s not for you. Maybe CBT will suit you better, or DBT, or ACT; there’s a whole range of options.

 

Speaker 2

We cater to all of them. We might suggest self-care practices or transition rituals that you think are rubbish, and that’s okay. There’s going to be something that you can do that’s tailored and personalized to you.

 

Speaker 2

It’s more about taking the information, the literature, the data that we have and adapting it to yourself, and it will go a long way to supporting your mental wellbeing.

 

Speaker 1

Yeah, well listen, I think that’s a great note to end on. So thank you very much for taking the time, Colm. I’m looking forward to seeing what else we can produce in terms of the research that we’re doing, but also to hearing more about the research with content moderators that continues to be put out there.

 

Speaker 1

Perhaps we’ll see, you know, 30, 40 papers over the next year, fingers crossed.

 

Speaker 2

I think so, I think there’ll be an influx.

 

Speaker 1

Lovely. Well, thank you very much everyone for listening to Zevo Talks. We’ll be back next month and we’ll leave you with that.