About this Webinar

The rise of generative AI necessitates a renewed focus on moderator wellbeing. As generative AI increases both the volume and the complexity of the content to be moderated, wellbeing efforts for Content Moderators must be enhanced so they can deal with these new challenges.

Join Dr. Michelle Teo, Clinical Director for Zevo Health, and Abhijnan Dasgupta, Practice Director, Trust & Safety, Everest Group as they explore the impact of generative AI on moderator wellbeing, including the role it will play in the wellbeing lifecycle, and discuss the future of wellbeing in Trust and Safety.

Key Takeaways:

  • How does genAI exemplify the need for T&S wellbeing services?
  • In which areas can AI help with moderator wellbeing?
  • What is the future of moderator wellbeing?

Who Should Attend? 

  • Chief wellbeing officers
  • T&S / content moderation leads of service providers and enterprises
  • T&S solution/practice leads of service providers and enterprises
  • Global sourcing managers
  • Outsourcing and procurement managers
  • Senior marketing executives
  • IT/BPO strategy leaders

Meet Abhijnan Dasgupta

Abhijnan Dasgupta is a Practice Director on the Business Process Services team and manages Everest Group’s Trust and Safety (T&S) outsourcing offerings. Leveraging his significant consulting and research experience, he has advised many global service providers and enterprises on digital transformation, cost transformation, product strategy, and go-to-market strategy. He also has significant experience in advising clients on M&A and sell-off deals and post-deal engagements.

He has authored industry reports on content moderation and white papers on 5G technologies, and drives thought leadership pieces across the T&S space for the firm. Prior to joining Everest Group, Abhijnan worked as a Senior Consultant with EY-Parthenon. He holds an MBA in Finance and Operations from Management Development Institute, Gurgaon.

Watch the recording below

Unknown speaker

Hi everybody and welcome to our fabulous webinar today with Everest. I’m delighted to welcome you all here today. We’re going to give it one more minute. In the meantime, can I ask everybody to let us know where you’re dialing in from today and I will be sitting here reading those out.

Unknown speaker

And if you’re interested, we’re in Dublin today where it is a horrible wet day, would you believe? So that’s where I am. So if everybody else can let us know where you’re dialing in from, that would be great.

Unknown speaker

Okay, great. All right, we’re going to get started. As you can tell, I am not Dr. Michelle Teo. Dr. Michelle Teo is sick at home at the moment, watching us on this live stream, I’m sure. I’m Tara Sullivan.

Unknown speaker

I’m the Chief Marketing Officer at Zevo and I’m delighted to welcome you here today. This webinar is one I’m really looking forward to because it involves two things I’m really passionate about. One is Gen AI and the other is Content Moderators.

Unknown speaker

And so I’m delighted to welcome Abhi Dasgupta, who’s the Practice Director in Everest. Abhi, welcome to the webinar. And you’re going to be talking to us today about a whole range of topics around moderator wellbeing in the age of AI.

Unknown speaker

So I am delighted to welcome you here today. Yeah, thanks. Thanks, Tara. Lovely to see you. And, you know, welcome, everyone. Very, very excited to talk about a topic which, yeah, is very close to my heart, and also very, very pertinent, right?

Unknown speaker

I mean, I remember two or three years back, right, when we first started on this journey of providing research on trust and safety, we quickly realized that, you know, we can talk about trust and safety, but none of that would be very meaningful if we don’t talk about moderator well-being, right?

Unknown speaker

And, you know, it’s very heartening to see, actually, that moderator well-being has kind of progressed from being one of those things that, you know, you kind of have to do if you are in trust and safety to something that, you know, you must absolutely do if you want to be in the business of trust and safety, right?

Unknown speaker

Absolutely. And I think we’re seeing a huge change there, right, Abhi, where we’re seeing people, rather than treating it as kind of a requirement and a margin line item, recognizing how important it is to the core.

Unknown speaker

Now, obviously, moderator well-being has changed exponentially in the last couple of months since we had the onset of gen AI. So I am going to hand it over to you for a little while and then I’m going to ask you some questions along the way so we keep it interactive.

Unknown speaker

Speaking of questions, we really want people to ask questions in the comments section. Sarah will remind you, Sarah’s in the background. She’s gonna remind everybody towards the end of the session to put questions in there.

Unknown speaker

So if you have any questions, please let us know. Abhi, over to you. Right, thank you so much Tara. Yeah, so actually let’s first start with why do we even need moderator well-being, right? I mean, what is it that kind of drives moderator well-being?

Unknown speaker

And, you know, obviously this is not something new but it’s kind of important to remind ourselves every single day what moderators go through, right? And whether it’s in terms of filtering harmful content such as harassment or being exposed to graphic violence, CSAM, hate speech, you take your pick, right?

Unknown speaker

Every single day there are, you know, a number of moderators all over the world being exposed to such offensive content, right? And obviously it takes a toll, right? So, I mean, if you look at it in terms of the toll, right?

Unknown speaker

I mean, whether it’s vicarious trauma or psychological damage, emotional distress, anxiety, burnout, right? There are multiple effects that this has, which obviously means that there is a very, very huge need to provide that intervention so that, you know, this kind of content doesn’t cause, I would say, long-term damage, right?

Unknown speaker

And the other thing to think about is the fact that, you know, it’s not a service which is optional, right? I mean, obviously you probably put your hands up and say, you know, salute to the people who are doing this, but also realize that if, today or tomorrow, companies suddenly decided that, oh no, we do not want to do content moderation anymore because it causes so much harm, right?

Unknown speaker

I mean, it’s just gonna fall apart, right? And the way we look at it at Everest Group: once someone asked us how we define trust and safety, right? And what I remember is that, during the course of the discussion, what came out was more than, you know, the bookish definition of what trust and safety entails and, you know, moderation and this and that.

Unknown speaker

I think the thing that stuck with me the most was the fact that, you know, essentially someone said, trust and safety is the service that is required, you know, because there are stupid people out there, right.

Unknown speaker

So yeah, so absolutely. So, given the fact that that stupidity is prevalent, and that it causes so much offensive content, obviously there is a tremendous impact that that content has on the moderators, right.

Unknown speaker

And we tend to think of it in terms of egregious content having a high degree of impact, which is true. But it’s not like non-egregious content, or content which is apparently non-offensive, doesn’t have any impact.

Unknown speaker

So I’ll give you an example. I think a year back, there was a picture of the Pope wearing a white parka jacket, right. And people assumed that the Pope had visited maybe, you know, somewhere in Greenland. Now, that was completely fake.

Unknown speaker

However, that actually spurred a very, very intense discussion. And for, you know, moderators to see that image, for some of them, it might have actually triggered something close to their heart.

Unknown speaker

And that has an impact on their mental well-being as well, right. So yeah, we need to remember that when we look at content and the impact that it has on moderator well-being, it’s not only offensive content; any piece of content might trigger something which we may not be aware of, or which we may not have thought of.

Unknown speaker

Which is why it’s very important to take into account, okay, what are the different kinds of impact that a piece of content may have on a moderator, right? So yeah, the one thing that I would probably call out over here, and this kind of strikes close to the business, is the fact that not only is it important to do, but what we are seeing across the industry is that the failure to provide robust moderation services is actually leading to serious legal implications,

Unknown speaker

right? So we’ve just put up some examples over here where there were both providers and enterprises who had been reprimanded by the courts, or who had been taken to court or sued, because of what the moderators, at least, thought were insufficient well-being practices, right?

Unknown speaker

And the other thing that I would talk about over here is the fact that, you know, the content that we know as of today, right, the obnoxious content, it has been there for some time. Yes, we all know that UGC has a huge percentage of obnoxious content.

Unknown speaker

But now, on top of that, we have something which we cannot escape, right? And that being very close to your heart, generative AI, obviously, is kind of proliferating some of this harm, right? So, for example, you know, it is leading to abuse at scale, right, across multiple languages.

Unknown speaker

I mean, even today, the internet obviously is dominated by the largest spoken languages on earth. But given the ease of use of generative AI, a lot of people are finding it easy to actually create abusive content in multiple languages.

Unknown speaker

It also has the ability to produce, I think, realistic content, like the example of the Pope and the parka that I gave you. And it’s very difficult to point out that this is a fake. Now, by the time moderators actually take it down, a lot of damage already gets done, because people have assumed stuff and then they start fighting, right?

Unknown speaker

And then the other thing which is coming up is that content, the way we know it, is itself changing. And that is proving to be singularly problematic for generative AI to detect or to understand. So for example, we have something now called algospeak, where essentially you might use a string of emojis to denote a sentence.

Unknown speaker

But for generative AI models which are trained on large amounts of text, for them, it’s probably either gibberish or something that they do not understand the context of. And that in itself is leading to a lot of problems.
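To make that concrete, here is a minimal, hypothetical Python sketch of the kind of pre-processing a moderation pipeline might add: normalizing known algospeak substitutions and emoji stand-ins into plain text before the content reaches a text-trained classifier. The mapping and names are illustrative assumptions, not any vendor’s actual tooling.

# Hypothetical sketch: normalize known algospeak before text classification.
# The mapping is illustrative; real systems maintain far larger, continuously
# updated tables learned from moderator feedback.
ALGOSPEAK_MAP = {
    "unalive": "kill",   # common evasion of suicide/violence terms
    "seggs": "sex",
    "🔫": "gun",         # emoji standing in for a word
    "🍃": "marijuana",
}

def normalize(text: str) -> str:
    # Replace coded tokens so a text-trained model sees plain language.
    for coded, plain in ALGOSPEAK_MAP.items():
        text = text.replace(coded, plain)
    return text

print(normalize("they tried to unalive him 🔫"))  # -> "they tried to kill him gun"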

Unknown speaker

And is that like people on TikTok saying, instead of saying the S word, they talk about being unalived, or whatever it is, so that you’re trying to trick the algorithm? Yeah, exactly, exactly. Yeah, because see, I think one fundamental principle which has kind of remained true is what we used to hear like 10 years back, that some of the smartest people in the world used to be hackers, right?

Unknown speaker

So cybersecurity in many different ways was almost like a reactive measure because you learn from the hackers every single day new and different ways to hack systems, right? So it’s the same thing over here.

Unknown speaker

I mean, every single day, companies provide moderation, and moderators are learning of different ways people can game the AI or the moderation systems, right? So yeah, absolutely. So people are getting smarter.

Unknown speaker

The bad actors are getting smarter. They realize that there are different ways to game the system, given the fact that the system is trained on a specific set of data, which is why they come up with innovative ways to kind of bypass those moderation channels, right?

Unknown speaker

So- And Abhi, sorry, one question real quick. Yeah, sure. Given the amount of generative AI content that’s coming at us, right? There’s some research that says what we have today is 1%, like 1% is gonna be human, and 99% of what’s gonna be on the internet is going to be generated by gen AI, or some kind of AI, right?

Unknown speaker

Even if there’s a human behind it. And we have had vivid discussions about this. You don’t see, though, the number of content moderators necessarily increasing to deal with this massive increase. Maybe talk about that a little bit.

Unknown speaker

Yeah, sure, absolutely. I think I would probably answer that question in two parts. So, yes, the content is increasing rapidly, and you’re absolutely right that in terms of the content mix, we expect that as early as 2026, probably at least one fourth of the total content out there would be AI-generated content, right?

Unknown speaker

So which means that, you know, content is increasing at a massive volume. But what is also happening simultaneously is the fact that generative AI and other AI engines as moderation engines are also becoming stronger.

Unknown speaker

Exactly. Which means that, you know, I mean, look at it, I mean, in an ideal world, you would want to automate this entire process, right? And not because of the fact that you can automate it, but because of the impact that obnoxious content has on moderators.

Unknown speaker

But the truth of the matter is that AI even as of today doesn’t understand context as well as humans, right? So which means that humans will always have a role to play. However, having said that, obviously, more and more enterprises and providers are trying to automate moderation as much as possible.

Unknown speaker

So which means that the number of human moderators is still going to grow, don’t get me wrong, because content is growing, but it’s probably not going to keep pace with the rate of growth in content, right?

Unknown speaker

Of course, yeah. So for example, and it’s not about the exact numbers, but just as an example, say, for example, if the content is growing 20% per year, right? The number of human moderators may grow by, you know, high single digits, right?

Unknown speaker

Sure, gosh, that makes no sense. Yeah, that’s one. The second thing is, as with any automation engine, the primary or the first target of the automation engines will be the low and medium complexity content, right?

Unknown speaker

Even today, the likes of a Google or a Meta, they moderate 99% of the low complexity content using automation engines, right? And that’s going to go up even further. The next target will obviously be the medium complexity content, which means that for a moderator, in terms of the content mix that that person is going to deal with every single day, the percentage of obnoxious content within that content mix will be even higher,

Unknown speaker

which means that the impact that that content queue is going to have on them is going to be higher than what it is today, because they will be exposed more and more to obnoxious content. And the percentage of, let’s say, time offs or a relaxed period where they’re seeing maybe more regular, non-harmful content, that time is going to reduce further and further.

Unknown speaker

right? So I think in terms of the impact, those are the two things that I would say: there is going to be an increase in the number of moderators, but not at the rate at which content is increasing, and the content mix that these moderators are going to work on on a daily basis is going to shift heavily in favor of the obnoxious content.
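As a rough back-of-the-envelope illustration of that queue-mix shift, here is a small Python sketch; every number in it (the toxic share, the automation rates) is an assumption for illustration, not Everest Group data:

# Sketch: as automation clears more of the benign, low-complexity items,
# the residual queue reaching human moderators skews more toxic.
total = 1000.0       # pieces of content per day (arbitrary index)
toxic_share = 0.10   # fraction of all content that is egregious (assumed)

for auto_benign in (0.95, 0.99, 0.999):   # share of benign items auto-cleared
    benign_to_humans = total * (1 - toxic_share) * (1 - auto_benign)
    toxic_to_humans = total * toxic_share  # humans still see the hard cases
    queue = benign_to_humans + toxic_to_humans
    print(f"automation clears {auto_benign:.1%} of benign items -> "
          f"{toxic_to_humans / queue:.0%} of the human queue is toxic")

Under these assumed numbers, the human queue goes from roughly 69% toxic to 99% toxic as automation coverage of benign content rises, which is exactly the exposure effect described above.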

Unknown speaker

Great, thank you. Right, and you know, the other thing that I wanted to also call out is the fact that, as far as generative AI is concerned, also do realize that it has the capacity, due to, obviously, the creativity of the people, to create new types of content which we are not even aware of today, right?

Unknown speaker

So the impact of those kinds of content is still unknown, but the point is that it might cause massive impact as well in terms of psychological impacts on mental well-being. Which means that we as trust and safety professionals, as trust and safety research professionals, there is a huge need to be very, very proactive about how we look at moderator well-being, and not essentially work the way that cybersecurity used to work in its early days, where we were being very, very reactive to the different kinds of harm that might come online, right?

Unknown speaker

So with that, I think it’s time to actually have a poll. Thank you. So, this poll is about which content type will grow the fastest with generative AI. And you can get to this at Slido.com with code 3681194.

Unknown speaker

And basically, we’re looking for which one do you think is going to grow the most. I’m sure you have opinions about this, Abhi, about what you think is going to happen, right? Yes. Right. Yeah. Folks, I think you probably need to scroll down a bit if you want to see the fourth option.

Unknown speaker

Oh, yes, that’s right. So we want you basically to put them in order of what you think is gonna be the most important. We also want to hear if anybody thinks that there’s not going to be an increase.

Unknown speaker

There’s a different one, yeah, exactly. Thank you. Okay, we’re getting there. There’s a lot of text over here. Maybe we give folks a bit more time. Yes, absolutely. Thank you. So, Sarah, can we look at what’s number one at the moment, please?

Unknown speaker

Misinformation and social engineering, followed very quickly by CSAM, with terrorism and violent extremism as number two. That’s interesting. Thank you. Okay. I think maybe we can close the poll in five seconds.

Unknown speaker

Okay. Right. Yeah. So let’s see what we have to say about it. So we actually conducted research on, you know, the different kinds of harmful content that are getting proliferated due to generative AI.

Unknown speaker

Obviously, it’s not an exhaustive list. There are at least 20 other different kinds of harmful content that you can think of proliferating due to generative AI, for example, financial frauds and so on.

Unknown speaker

But we wanted to look at some of these categories which we know for a fact have a huge impact on moderator well-being. And what we saw was that, at least per the forecast, misinformation and, you know, spam and fake reviews are probably going to grow the fastest, right?

Unknown speaker

Whether or not they become the single largest categories in terms of the total volume of content that is still up for debate, but they’re definitely going to grow the fastest, followed by CSAM, hate speech, cyberbullying, and adult nudity.

Unknown speaker

And violent extremism is probably going to follow them. Sorry about the misalignment in terms of the graph, but yeah, that’s there. The other thing about generative AI is that it’s not only proliferating the harm, it is also adding more and more complexity to that harm.

Unknown speaker

So for example, spam bots have been around for some time, but spam bots created using generative AI, they’ve taken the game to a whole other level. In fact, I can tell you that I think last year, there was an incident around a financial fraud case where a person was kind of tricked into attending a meeting with who he thought were the senior stakeholders of a client organization.

Unknown speaker

And they had a regular business discussion, and that resulted in a transaction of a significant amount of money, only for him to then realize that he had been tricked, right? So yeah, people are getting innovative at a very large scale, and because of the ease of use of generative AI, they’re able to do that.

Unknown speaker

So yeah, so in terms of the added complexity, what we also feel is that misinformation and CSAM are probably going to become extremely complex, right? In fact, the US elections have been billed as the first AI elections, right?

Unknown speaker

And, you know, everybody is on guard that there is probably going to be a huge usage of generative AI for creating misinformation and disinformation, right? So yeah, absolutely massive potential of generative AI to both increase the volume and complexity of online toxic content.

Unknown speaker

And it happened in the midterms, right? Where there was an audio message, allegedly from President Biden, saying: don’t bother voting in the midterms, let’s hold it for November. And it was his voice and everything, but it was AI.

Unknown speaker

And it was an AI recording, which is just incredible. Absolutely, absolutely, absolutely. I mean, the Russia-Ukraine war, obviously people have very strong views about it. There were images that were circulated.

Unknown speaker

I mean, there were so many fake images circulated, but some of them were so realistic. I mean, it was very, very difficult to figure out that it was a deepfake, including, you know, pictures of President Putin meeting with the president of China.

Unknown speaker

And yeah, the picture seemed pretty, how do I put this? It seemed pretty innocuous, right? Yes. And people didn’t realize that it was absolutely fake, but it created the impact that, okay, maybe Russia and China are coming together in this war.

Unknown speaker

So a lot of, you know, theories floating around, right? Absolutely, yeah. So yeah, it created a massive impact. So yeah, both volume and complexity are going to increase due to generative AI, right?

Unknown speaker

Second, okay. Before I get into this, just a quick question from my side; please feel free to put in your answers in the comments. When we think of the well-being lifecycle, right, so, like, the kind of interventions that companies put in for the well-being of employees, how many stages do you think are there in that lifecycle?

Unknown speaker

So, obviously, you know, when a person is employed at an organization, definitely well-being interventions are going to be there. That is very obvious. But other than that, how many other stages do you think are there?

Unknown speaker

Tara, you’ll have to help me with, you know, the answers, of course. I saw onboarding come in, which is one of them. Okay, anything else? Training. Training, okay.

Unknown speaker

Livia, thank you for that. Okay. Thank you. Any other ideas out there about where we would use it? Well, what might be more interesting to actually ask is: when we speak to organizations about well-being interventions, what do you think is a stage that... and this is actually for you, I mean, obviously it’s open for the audience as well, anyone can answer, but I must ask you this as well.

Unknown speaker

So, when we speak to organizations about well-being interventions, what stage do you think very rarely comes up in the discussions? People don’t even think about it. I would say Joe, who put in “after leaving”, has it.

Unknown speaker

Exactly. Is that right? Yes. Yeah. And, you know, it’s not surprising, but it’s concerning, right? Although I’m happy to say that that has changed a lot over the years, and people have realized that, you know, the journey doesn’t really end when an employee leaves the organization, because the mental well-being impact obviously stays on for much longer.

Unknown speaker

Right. But yeah, thank you to everyone who has answered. So yes, basically, obviously there might be a lot of sub-steps within this, but we see four major phases within this lifecycle: pre-hire, onboarding, on the job, and post-exit, right? So pre-hire is more about, you know, having very, very clear and realistic job descriptions.

Unknown speaker

In fact, we did a study of some of the job descriptions that we saw on LinkedIn and other job websites. And what we figured was that it was kind of a mixed bag. There were organizations who had extremely clear, well-documented, realistic job descriptions; in fact, some of them had potential risks also mentioned in there: that if you take up this job, it entails doing XYZ,

Unknown speaker

which can cause you maybe ABC harms. And then there were, you know, job descriptions which essentially did not do a very great job at it, right? But yeah, having a very realistic job description is one of the very early well-being interventions that you can do.

Unknown speaker

The second is in terms of resilience and behavioral assessment, right? So it’s less about, you know, how resilient a person can be, but it’s more about, you know, putting a person maybe through situations and seeing that, okay, what are the kind of interventions that that person might need to build up their resilience, right?

Unknown speaker

So yeah, that is the kind of behavioral assessment that is done, because conducting a test around how resilient a person can be, studies have shown, is kind of pointless: a person will obviously react differently to different situations, and resilience can actually be built through proper coaching and proper interventions.

Unknown speaker

Exactly, yeah. The other thing that is done is, you know, bias assessment: how quickly can people detect bias, and also, are you a person who has very, very biased opinions?

Unknown speaker

Now, do realize that we as human beings all have our own biases, right? But if the biases tend to be extremely strong one way or the other, then the way organizations try to deal with it is by having people not moderate the kind of content which relates to their bias. Because otherwise, what happens is, if you’re seeing content which probably goes against your view, and you feel very strongly about your view, then yeah, it can cause you a lot of mental distress, right?
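As a tiny illustrative sketch of that routing idea (the data structures and topic labels here are hypothetical, not any provider’s actual system), assignment might simply exclude moderators who declared a strong bias on an item’s topic:

# Hypothetical sketch: route items away from moderators who self-reported
# strong bias on a topic during assessment. Labels are illustrative only.
MODERATOR_BIAS = {
    "mod_a": {"politics"},   # declared strong views on politics
    "mod_b": set(),          # no strong declared biases
}

def eligible_moderators(topic: str) -> list:
    # Only moderators without a declared strong bias on this topic.
    return [m for m, biases in MODERATOR_BIAS.items() if topic not in biases]

print(eligible_moderators("politics"))  # -> ['mod_b']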

Unknown speaker

So yeah, and then there are the other things like decision-making and language assessments, health checks. Similarly, on the onboarding side, you have process training, so yeah, thank you to whoever mentioned training; you have a lot of resilience training, well-being onboarding, and role-based training.

Unknown speaker

Role-based training actually is very important, because moderators might play multiple roles even throughout the day, let alone over their time within the organization. So at one point in time they might be looking at, you know, obnoxious content queues; at another point in time they might be looking at policies; at another point in time they might be managing a team of moderators and liaising with the client.

Unknown speaker

So role-based training becomes very important. Phased exposure to sensitive content: it’s kind of obvious that you do not suddenly bombard a person with obnoxious content from day one. And tools, yes; tools, actually, both in the onboarding and the on-the-job portions, are very, very important.

Unknown speaker

We’re going to talk about tech-led interventions in more detail. Psychoeducation and, yes, structured coping workshops. Yeah, these are also very important. We are seeing a lot of organizations adopt this.

Unknown speaker

And Abhi, where are you seeing most people spending their time? Are people really thinking about it across the board? I know you said post-exit isn’t a big one, but where are people tending to put most of it in? Because we hear a lot about onboarding, and less on the job, and then we’re starting to see a lot more people doing post-exit.

Unknown speaker

Where are you seeing most of the people you talk to spending their time? Yeah, I think I would say, I don’t think there’s a clear winner per se, but if I had to pick, I would say onboarding and on the job are probably the two phases where most of the interventions lie, even as of today, and that’s kind of across the industry, right?

Unknown speaker

And that’s probably also logical, because as you’re getting exposed to that harm, you need a lot of interventions immediately, or in certain cases, if that exposure to the harm can be prevented, you would want to do that, right?

Unknown speaker

So yeah, on the job, whether it’s psychological help, or help through tooling like grayscaling and all, or surveys, sentiment analysis, or physical activities; and then the kinds of trainings that we see more in the onboarding stages, in terms of process, resilience, well-being, and role: those are the most common well-being interventions we see, right?

Unknown speaker

Having said that, companies who actually have invested in a post-exit well-being strategy, especially service providers, I can tell you this for a fact: they are the preferred providers of enterprises when it comes to trust and safety services, and this is without exception. Because do realize that the trauma that people suffer when they go through that kind of obnoxious content on a regular basis stays for a long time, right?

Unknown speaker

So providing that kind of post-exit support also gives the employee more faith in the organization: that, hey, it’s not like out of sight, out of mind, right?

Unknown speaker

So yeah, we’ve seen multiple companies now conduct exit interviews and continue to give employees access to EAPs. There might be alternate placements; we’ve seen that for a lot of moderators, it’s like, you know, they work in content moderation for a couple of years, then they move to some kind of alternate role, and then maybe they come back later, right?

Unknown speaker

So, you know, there is a variety. There is this intent to show that you are invested in the career development of your moderators. And yeah, it also makes for a welcome break, right? So yeah, I think post-exit, people don’t realize, is a very, very important part of moderator well-being strategy, right?

Unknown speaker

Yeah. Right. So now, I spoke about a lot of these interventions across the moderator lifecycle. But having said that, I actually want to call out, you know, a lot of best practices that we are seeing in the industry as of today.

Unknown speaker

Because realize that, you know, both enterprises and providers have been doing this for a long time now. Whether intuitively or as a reactive measure, there have been multiple interventions which are now considered best practices, and we are happy to see that a lot of organizations are implementing them. Whether it’s in terms of, like I mentioned, a clear job description, or access to both individual and group counseling, or access to EAPs which have actual trauma-trained therapists: well-being interventions like this are kind of best practices.

The underlying factor is that when you talk about well-being best practices, there are two fundamental truths. One, it has to include investment in all three of people, process, and technology; it cannot be just people, or just process, or just technology. Second, well-being support has to be proactive: your intent has to be to try to prevent mental ill health at the workplace as early as possible. Has it been perfected? No. Will it ever be perfected? Maybe, maybe not. But if you have the right intent, that okay, we’ll try to prevent as much of the mental impact as possible, you will always stumble, whether by accident or proactively, upon interventions which can help the moderators.

So for example, in several companies there is a well-being-first culture. A well-being-first culture obviously points to the fact that those organizations are thinking proactively about their moderator well-being, which is why they’re trying to build that culture across the organization, and which would mean access to resources which probably would not otherwise be accessible. Learn to trust your people, learn to lend them a helping hand, so employees also feel trusted and valued. The way we see it, it’s very kind of intuitive,

Unknown speaker

but you know, it has taken a long time to actually get there, right? So yeah, and like I mentioned about post-exit support, it’s now a best practice. And obviously, this is based on the learnings of multiple years of moderation work, as well as on trying to forecast the different kinds of harms that might come up in the future.

Unknown speaker

So yeah, this is definitely not an exhaustive list, but we hope that most of the companies in the trust and safety space are doing most of this, if not all of this, right? And one of the reasons why we find group support is so important is because a lot of the content moderators and their teammates have signed NDAs whereby they’re not able to talk about what they see, right, Abhi?

Unknown speaker

And so we help them build a network across the organization of people who get it, whom they can speak to if they need to about that. And I think you’re right about the proactive piece. I think it’s so important, because utilization has always been the kind of report a lot of clients have always wanted.

Unknown speaker

Whereas now they want to understand how the proactive piece has enabled them to help people before they ever would have needed to get to a therapist, which I think is really the next phase of what we’re certainly seeing in the marketplace at the moment.

Unknown speaker

One thing that I would definitely, you know, call out over here, which is kind of important because people might misconstrue this. When we talk about individual and group counseling, the best practice is always to have both.

Unknown speaker

Absolutely. Because we have seen the pitfalls of having either of those two and not both. So yeah, it’s definitely important to have both. And, you know, obviously people were skeptical about group counseling, because they felt like people may not open up if they’re in a group, but multiple studies have shown that it’s not the case.

Unknown speaker

And we actually have a lot of successful examples from the real world which may not map onto moderation, but still show how group counseling can be effective. One such example is Alcoholics Anonymous, right?

Unknown speaker

Absolutely. So yeah, so it definitely helps. But yeah, group therapy in isolation, again, is not going to help if people do not have access to individual counseling, and vice versa.

Unknown speaker

Right. Okay. So yeah, we’ve spoken about the best practices. I just want to bring in some examples, to probably make it more real for you. There’s a leading CX provider who provides 90 minutes of micro-breaks per week, right?

Unknown speaker

Realize these micro-breaks are not only important in terms of recharging and refreshing, but also very important because they give you a break from looking at or hearing obnoxious content all throughout the day, right?

Unknown speaker

There is a leading well-being provider who provides onboarding and regular support, but also recommends extra support for critical incidents, and offboarding support as well. This extra support for critical incidents is becoming more and more critical, because, like I mentioned, the obnoxious content is going to form a larger and larger part of the content mix for human moderators as we go forward.

Unknown speaker

So for this extra support that we require for critical incidents, there might actually come a time where it becomes in itself a regular practice, right? For example, people who are dealing with CSAM queues, right?

Unknown speaker

I mean, it’s extremely, extremely disturbing content. Now, in addition to the trauma that they face on a regular basis, what we’re seeing in multiple companies is that for CSAM specifically, there is this extra support being provided because of the nature of that content, right?

Unknown speaker

Also realize that CSAM is not necessarily always graphic in nature. A lot of times it’s very subtle, right? But it can be extremely disturbing, because a moderator who understands the context behind it obviously realizes the extremely disturbing nature of that content.

Unknown speaker

So yeah, it’s very important to provide that extra support. Then there is a global trust and safety provider who provides counseling and access to a platform of resources, right? So it shares, for example, personalized reminders to take breaks, and has customized trainings; very excellent stuff.

Unknown speaker

And then there is a niche DNS provider who offers EAPs and workshops for mental health and stress management, along with an anonymous reporting system where people can report: okay, for example, say I’m a moderator and I want to report anonymously that I maybe cannot take this anymore, I need a break for two days, right?

Unknown speaker

And I’m suffering from this severe mental trauma, right? But I do not want others to know about it. Do realize that in a lot of cultures, to speak about mental wellbeing is a taboo, right?

Unknown speaker

So in such cultures, obviously having this anonymous reporting system is a huge plus, right? So yeah, I just wanted to share some examples with you. Obviously, there are a multitude of such examples, both on the provider and the enterprise side.

Unknown speaker

Let me share one more with you on the enterprise side. There are enterprises who insist on on-ground psychologists being trained in very, very specific certifications. Yeah, now that might seem to some people to be a bit too much, but do realize that that kind of a

Unknown speaker

strong principle helps set the standard for a very, very robust wellbeing policy, right? So that is just one instance, but yeah, it shows how deeply they are thinking about taking care of the wellbeing of the moderators, right?

Unknown speaker

Right, we said that we’ll talk about technology. So definitely, in terms of the preventive measures in wellbeing, technology obviously plays a big role, right? Whether it’s in terms of tooling for, say, image blurring or grayscaling, or chatbots, or content triaging, where essentially there is an AI-based content triaging done based on prior data to rate each piece of content, right?

Unknown speaker

For urgency and toxicity. So let’s say, based on previous years of training data, you can probably predict whether a piece of content is likely to be toxic or not, right? Gamification and, obviously, simulation-based adaptive learning have been there for some time now.
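As a sketch of what that AI-based triage step might look like, here is a minimal Python example; the thresholds, labels, and scores are hypothetical stand-ins for a model trained on prior moderation decisions:

# Hypothetical sketch of content triage: score each item for toxicity and
# urgency, then route it. In a real system the scores come from a trained
# model; here they are supplied directly for illustration.
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    toxicity: float   # 0..1, assumed output of a trained classifier
    urgency: float    # 0..1, e.g. virality or imminent-harm signals

def route(item: Item) -> str:
    # Illustrative thresholds, not production values.
    if item.toxicity < 0.10:
        return "auto-approve"            # low-complexity, cleared by automation
    if item.toxicity > 0.90 and item.urgency > 0.80:
        return "priority human queue"    # likely egregious and spreading fast
    return "standard human queue"        # ambiguous: humans judge context

for it in (Item("nice dog pic", 0.02, 0.10),
           Item("borderline meme", 0.55, 0.30),
           Item("graphic violence clip", 0.97, 0.90)):
    print(it.text, "->", route(it))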

Unknown speaker

Wearable devices, I think, are a very, very disputed one, because on paper it looks excellent; we would love to have wearables, AI-based wearables, which can act as interventions.

Unknown speaker

But there is a huge issue with data privacy, which is why wearable devices are actually banned from being used in content moderation in several countries. So, I’ll give you an example.

Unknown speaker

It’s not exactly a wearable device, but I’ll give you an example of how extreme some of these data privacy laws can be. For example, in Italy, facial recognition software is kind of banned, right? So, yeah.

Unknown speaker

And then there are AR and VR interventions as well, which, yeah, I think they have a huge capacity to kind of completely alter well-being interventions. And, you know, they have a massive potential to act as early interventions.

Unknown speaker

And they’re fun, right? So, for example, a couple of days back, I was using one of these VR headsets to play virtual cricket. Yeah, I did end up, you know, banging one of the doors in my house, but yeah, it was fun, right?

Unknown speaker

So, yes, you know, massive potential. It’s incredible. And there’s actually a company in Dublin called Vstream who work with, they worked with BPO and content moderators, and they did a really simple VR headset around the end of the day.

Unknown speaker

So let’s say you finish your shift at five: you go in and put on the VR headset; like 20 minutes is what they need. And it stops the images you’ve seen all day, or the sounds you’ve heard all day, from going into core memory.

Unknown speaker

It’s incredible. And there’s like a 70% reduction in stress. People felt they were walking away from the office or the desktop feeling free, rather than still thinking about it. It’s incredible what they’re doing with VR and AR at the moment.

Unknown speaker

It’s incredible. Yeah, absolutely. There’s one other example that I would give, and I think this is something that we frequently come across as a question: chatbots, in a way, are kind of old news, right?

Unknown speaker

Chatbots have been around for a long time. What people are looking at, and this is not specific to content moderators per se, but more around the mental health industry as a whole, is the use of personalized AI assistants for mental well-being, right?

Unknown speaker

So think of it maybe like your personal Siri for your own mental health, right? So yeah, those are the kinds of technology interventions that are coming up. Can they replace human psychologists?

Unknown speaker

The answer is no, right? Should they replace human psychologists? The answer is no. So, in fact, if you look at a lot of the content moderation, sorry, the moderator well-being policies of the enterprises when they partner up with providers, one of the things that they mention within the contract is: irrespective of whatever well-being interventions you have in terms of tools and processes and all,

Unknown speaker

you absolutely must have trained psychologists on the ground, right? And yeah, they don’t do it for fun. They do it for the fact that, you know, that human touch is kind of invaluable. But that doesn’t mean that, you know, you cannot use technology.

Unknown speaker

In fact, I can give you this example: my wife is a psychiatrist with the NHS, and they have started using AI; at least they’re taking baby steps, right? But she’s very, very excited about the limitless possibilities of AI in acting as her assistant in providing mental well-being to other folks, right?

Unknown speaker

So, yeah, just wanted to call that out because I know as soon as we talk about technology, the first question that comes up, just like with moderation, are they going to replace people? Exactly. Yeah, yeah, absolutely.

Unknown speaker

Right. Yeah, the future of well-being. So, we talked a lot about what the current scenario in well-being looks like, the different kinds of interventions that companies are doing, some of the best practices, where content is headed, and how companies are thinking about the impact that these different kinds of new content, or content with added complexity, will have on the well-being of moderators.

Unknown speaker

But it’s time now to look at the future, right? Because it’s important: like I’ve mentioned, being reactive is good, but it’s not the best policy, right? So being proactive, thinking about what the future of well-being holds, how can we make it better?

Unknown speaker

Absolutely. It’s a very important discussion. And I think it’s time for another poll. Okay, Sarah. Thank you. So, we’re looking at which of the following is the most important for the future of well-being: preventative well-being intervention, training for emerging threats, organisational culture shift, or investment in specialised tools. And that’s at Slido, code 155-3743.

Unknown speaker

Thanks everybody for taking part, we’ll give you a few minutes. Thank you. Okay, we’re neck and neck. It’s really interesting, the preventive piece and organizational culture shift, that’s really good.

Unknown speaker

It seems like people are really skeptical about specialized tools. Absolutely, yeah. And actually, if Michelle was here, she talks to content moderators a lot. And one of the things she says is that the tools that blur images, reduce the audio, or try to gray things out can be difficult for them; it slows them down sometimes.

Unknown speaker

So it can be difficult to get the content moderators to use them sometimes. Okay, I think we’re at preventative well-being intervention at 67%, and organizational culture shift at 33%. Interesting. Oh, 71 and 29 now, very interesting.

Unknown speaker

Nobody thinks training is as important as the other two. Same is the case with tools, which I’m not very surprised to see. We all understand it’s in a way healthy to be skeptical about tools, because then that puts you in a position where you will think of all the pros and cons before you implement something.

Unknown speaker

Right. So yeah, but the thing that I’m surprised to see is that nobody thinks training for emerging threats is important. I’m guessing it’s probably because of the fact that it’s very difficult to predict emerging threats.

Unknown speaker

Exactly. Yeah, how would you train for October 7th? How would you have trained people for that? It’s interesting, incredible. Right. Okay. I think we can close the polls. Thanks, Sarah. Right. So, well, there are no right or wrong answers in terms of what you chose, because honestly, if we had an option of all of the above, that would be the right answer.

Unknown speaker

So if you look at it, you know, in terms of the future of moderator well-being, it requires all of this. So it requires preventive intervention. It requires research-based support. It requires hybrid, you know, well-being intervention.

Unknown speaker

It requires training for emerging threats. Learning from other high-risk areas is actually a very interesting example that I’ll give you in a minute. Investments in specialized tools. Yes. Enhancing transparency and accountability.

Unknown speaker

Very important. And yes, overall an organizational cultural shift. So I think I mentioned before that it’s not just about, OK, we have to provide well-being to moderators because they are being exposed to harmful content.

Unknown speaker

But if you flip it and say that we need to have a positive well-being culture in our organization, and to have that culture, to make sure that it is a culture that is all-pervasive through the organization, we will do whatever we need to do.

Unknown speaker

So that kind of plays into that. And yeah, so just talking a bit about hybrid well-being, I think it’s about the fact that it complements human-based well-being with technology interventions.

Unknown speaker

So that’s to your point around some of these grayscaling and blurring tools kind of reducing productivity. Yes, we have heard that from a lot of organizations. So I think what is important over here, and what organizations are doing, is to sit down and understand, rather than blindly implementing tools.

Unknown speaker

Sit down and understand what exactly is going to help the moderators. And if it so happens that blurring and grayscaling tools are an absolute must for the greater well-being of the moderators, then you need to kind of bake in that slowdown in efficiency.
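For readers curious what such exposure-reducing tooling boils down to, here is a minimal sketch using the Pillow imaging library; the file paths are placeholders, and production tools would integrate this into the review UI so moderators can toggle it per item:

# Minimal sketch of exposure-reducing preprocessing with Pillow.
from PIL import Image, ImageFilter

def soften(path_in: str, path_out: str, blur_radius: int = 8) -> None:
    # Grayscale and blur an image before a moderator sees it.
    img = Image.open(path_in)
    img = img.convert("L")                                   # grayscale mutes shock colors
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))  # blur blunts graphic detail
    img.save(path_out)

# soften("queue_item.jpg", "queue_item_softened.jpg")  # paths are placeholders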

Unknown speaker

Exactly. That’s exactly the point. Yeah, you cannot have it both ways, right? No, exactly. You cannot have the really high productivity if you want to keep people safe, absolutely. Yeah, exactly. The other thing is that I spoke about enhancing transparency and accountability.

Unknown speaker

So that transparency and accountability, again, has to happen at an aggregate level regarding interventions and effectiveness. There’s really no point in hiding if something went wrong, and what the consequences were, because realize that as you share, you also learn.

Unknown speaker

Well-being is one of those areas where there is really no single winner. You know, in the industry, the more you share with others, the more everybody learns, and it helps elevate everyone in the industry, right? It is one of those areas where it can be either a win-win or a lose-lose; there is no zero-sum game like we see in some of those game-theory setups. So yeah, sorry, you were saying something? No,

Unknown speaker

I was just gonna ask you: if you were to think about the ones that you see coming out above anything else, if you were thinking about the people who are watching, the two or three things they really should be focusing on this year,

Unknown speaker

well, which ones would they be? I think, well, again, I’d probably answer your question in two parts. So one thing which people should always focus on, and it’s not going to happen in a year, but should be a focus every single year, is the organizational cultural shift, and the preventive well-being interventions kind of link to that: you have to have this positive well-being culture at your company,

Unknown speaker

right? If you don’t have that, then you’ll always run into a situation where well-being is just another cost center for you, right, and you would want to drive as many efficiencies out of it as you can, which doesn’t really result in, you know, higher business for you, right?

Unknown speaker

It’s so true. It’s such an important element. And people think that they’re just talking about the content moderators. Some people don’t even think beyond the trust and safety team, but it’s actually the overall organization that has to be on board.

Unknown speaker

Otherwise, like you say, they’re living in that organization every day; they’re not going to feel like it’s part of what they’re being offered. It’s so true. True. The one thing that I would call out, like you asked, that I would want organizations to focus on this year, is to learn from other higher-risk areas, right?

Unknown speaker

Now I’ll give you an example. There is a provider who commissioned a study on how other organizations who own large factories deal with the wellbeing of folks who work on the factory floors. Now, yes, that may not be exactly a mental wellbeing challenge, but do also realize that where there is a high risk of physical harm, mental wellbeing challenges kind of follow, right?

Unknown speaker

Absolutely. So, you know, it was a very interesting study and unfortunately that study is not public yet because of obvious reasons, but yeah, they took their learnings from that, right? And they kind of applied it to their own wellbeing program.

Unknown speaker

So yeah, we should definitely try to learn from other high-risk areas; they can have a lot of learnings for us. The one thing which I would want to call out over here is that having this kind of an approach is not optional. It’s kind of mandatory. And I’ll give you a very simple reason for that:

Unknown speaker

It’s kind of mandatory. And I like you a very simple reason for that. our own study shows that service providers with the most robust wellbeing practices have on an average gained more than 10 clients every single year, right?

Unknown speaker

Wow. So that’s a direct business impact, right? So yeah, it’s not a cost center anymore. It’s at the center of having a very, very robust trust and safety practice; that’s one. Second, and this is something we are very happy to see, and it has been the case for quite some time, so it’s not very new, but still worth calling out: for a lot of enterprises, when they’re partnering up with service providers,

Unknown speaker

moderator wellbeing is not optional. It’s a hygiene factor. So if you don’t have a robust wellbeing practice, the client is not even going to talk to you. Okay, interesting. Right? So yeah, which means that you should definitely have a very, very robust wellbeing practice.

Unknown speaker

And yes, of course, generative AI, how can we escape that? We have spoken about the potential of generative AI to cause a lot of harm. So yeah, it would be remiss of us if we do not talk about the positive potential of generative AI.

Unknown speaker

It actually represents a significant opportunity to enhance moderator well-being. So, you know, whether it’s in terms of proactive monitoring, sentiment analysis, predictive analysis, personalized well-being, take your pick.
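As a loose illustration of the proactive-monitoring idea (the word list, threshold, and data flow here are assumptions; a real deployment would use a trained sentiment model over consented, privacy-protected check-ins):

# Illustrative sketch: flag weekly check-in notes for voluntary outreach
# when distress language accumulates. All terms/thresholds are assumed.
DISTRESS_TERMS = ("exhausted", "numb", "can't sleep", "dreading", "hopeless")

def needs_outreach(checkin: str, threshold: int = 2) -> bool:
    text = checkin.lower()
    hits = sum(term in text for term in DISTRESS_TERMS)
    return hits >= threshold   # offer (never force) a counseling session

print(needs_outreach("Felt exhausted all week and I'm dreading the CSAM queue."))  # True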

Unknown speaker

Personalized well-being more than anything else actually is proving to be a huge hit, because the idea with personalized well-being is that in certain cases, or in certain cultures, you may not want to be seen seeking mental well-being support.

Unknown speaker

Absolutely. So if you have a personal, you know, friend, right, who helps you with your well-being, why not? In fact, there is this app, I think called Wysa. I may be getting the name wrong, but I think it’s Wysa.

Unknown speaker

It’s a mobile-based AI-friend kind of app which helps you with support when you’re facing mental distress. Obviously, by regulation, it cannot offer medication or prescriptions, and cannot be prescriptive with psychological interventions; all of that stuff is still in the realm of medical professionals. But it does offer a level of support, right?

Unknown speaker

So it might tell you that, okay, probably you can look at taking a break from whatever you’re doing for 50 minutes. I know it sounds pretty obvious, but do realize that for a lot of us, the millennials and the Gen Zs, who find more solace in technology than in actual humans, it will be a very, very powerful tool, right?

Unknown speaker

And they’re more comfortable dealing with it than they are dealing with a human being sometimes, you know; extraordinary. Abhi, this was fantastic. We have about three minutes left, and I could talk to you for another hour.

Unknown speaker

We’ve got a couple of questions in, which I’m going to go through really quickly, if that’s okay. One of the questions is: obviously you talk to direct clients, the Googles, the Metas, the TikToks, as well as the BPOs who are working with them.

Unknown speaker

Who determines the wellbeing requirements in the business? Is it that the BPO decides for each of those areas, or are the large direct clients, who are the bigger users of content moderators, really determining what the level of wellbeing needs to be?

Unknown speaker

Yeah, absolutely. No, that’s a great question. I think from what we have seen, it’s kind of a mixed bag. So there are enterprises who have obviously been looking at trust and safety for a very, very long time due to the nature of their business.

Unknown speaker

And they themselves have conducted a lot of research on, you know, moderator well being. So what that has resulted in is they have set up almost, you know, like a document, if you may, of if I may, of a certain set of principles or minimum standards that need to be followed in well being, right.

Unknown speaker

So for example, like I mentioned, one of these enterprises has said that you have to have an on-ground trained psychologist irrespective of what your wellbeing interventions are, right? So there is this set of minimum standards.

Unknown speaker

And what we have typically seen is that providers, they build on this set of minimum standards, and then they, you know, implement interventions and stricter standards on top of that. And then they try to make it an organization standard in the sense that, you know, if say, for example, a company tells them that you have to do this, this is the minimum standard.

Unknown speaker

And then, as a service provider, you implement on top of that. Now, when you go to company B, and company B says, okay, I don’t have any standards, do whatever you must, it’s not like you’re going to revert back to some kind of minimum standard; you want to continue with whatever you have, right, as a best practice.

Unknown speaker

So that’s one set of enterprises. The other set of enterprises, which are typically some of the smaller or upcoming ones, sometimes don’t have a defined wellbeing strategy. And it’s not because they are complacent about it and don’t really care.

Unknown speaker

It’s because of the fact that they’re very early in their trust and safety journey, right? Yes. So as they scale up, as the content grows, they realize that they need trust and safety. So in such cases, they rely on the service providers to come in with, you know, robust well being interventions.

Unknown speaker

And then, through the course of that partnership, if they feel that there is something which needs to be worked upon or improved upon, obviously they’ll pass on their suggestions, and the service providers, if they feel that those are relevant and can be worked upon,

Unknown speaker

yeah, they will implement them. Fantastic. Thank you, Abhi. That’s all we’ve got time for today. Thanks to everybody who dialed in. It was great to see you here. Abhi, thank you so much for joining us today.

Unknown speaker

It was a phenomenal session with so much information. We will make a recording available, and we’ll get in touch with anybody who wasn’t able to make it. But thanks, Abhi. Thanks, Sarah, for all your work in the background on the production.

Unknown speaker

Yeah, thank you so much. Thank you. It was lovely to be here. Thank you so much. Everybody have a nice day.