Information


Join us for the third insightful installment of our webinar series on Content Moderation in the Gaming Industry. This session is dedicated to unraveling the complex role of Artificial Intelligence in gaming content moderation. We are proud to present a panel of esteemed experts, including Dr Michelle Teo, Clinical Director at Zevo Health, and Sharon Fisher, Head of Trust & Safety at Keywords Studios. They will examine the critical balance between AI and human intervention in moderation processes, and explore how Keywords Studios envisions AI as a tool for enhancing the psychological wellbeing of moderators. This is an unmissable opportunity for those seeking a deeper understanding of AI’s impact on the gaming industry.

Delve into the nuances of managing ‘grey areas’ in platform policies through a synergistic approach of AI and human intelligence (HI). This includes discussions on policy considerations, addressing bias, understanding context, and incorporating multicultural knowledge. Gain insights valuable for gaming platforms that lack comprehensive AI support, focusing on moderation strategies and global regulatory awareness.

Key takeaways:

  • AI’s role in gaming content moderation
  • Understanding the trade-offs between AI and human intervention
  • How AI can benefit moderator wellbeing
  • Collaborative strategies of AI and HI in addressing ‘grey areas’


Watch Below

Speaker 1

So everybody, welcome back to our second webinar with Keywords Studios. I have with me again today Sharon Fisher, who is the Head of Trust and Safety at Keywords Studios. She has spent 15 years in the gaming industry, starting at Club Penguin as a moderator and then moving to startups creating software to keep the internet safe, while also working in sales.

Mid-pandemic, she left her role and started a consultancy in trust and safety, and she has been at Keywords for almost two years. So welcome, Sharon.

Speaker 2

Hi, thank you again for having me, Michelle. I always enjoy our chats.

Speaker 1

So today, Sharon and I are going to be talking about a very interesting topic: AI and human intervention, and how we can bring the two together to safeguard players and superheroes. So Sharon, for all of those watching today who may not fully understand how AI functions in the gaming world, could you give a brief explanation?

Speaker 2

Yes, and I always like to start with how AI came to be what it is today, and the acceptance of it. Our acceptance of it, rather. Back in the day, when you refer to Club Penguin, we started by doing kind of an allow and deny list, which is very 1990s, or rather 2000s, where it was just based on words, right? In Club Penguin you throw snowballs, so it should be able to say "snowballs".

But if you take that last word and pair it with pretty much any verb, it’s going to become sexual, right? So the context of things was important for us to capture. So Disney started creating their own contextual filter: "throw snowballs" is okay, "touch balls" is not okay. So how do we differentiate between the two?
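To make the distinction concrete, here is a minimal sketch of a word-list check versus a contextual (word-pair) check, in the spirit of the snowball example. The lists and rules below are invented purely for illustration; they are not Club Penguin’s or Disney’s actual filter logic.

```python
# Hypothetical illustration of word-list vs. contextual filtering.
# All terms and rules here are invented for this sketch.

DENY_WORDS = {"touch"}  # a plain deny list acts on single words

# A contextual filter looks at word *pairs*, so the same noun can be
# fine after one verb and blocked after another.
BLOCKED_BIGRAMS = {("touch", "snowballs")}

def word_list_check(message: str) -> bool:
    """1990s/2000s-style check: any denied word blocks the message."""
    return not (set(message.lower().split()) & DENY_WORDS)

def contextual_check(message: str) -> bool:
    """Blocks only when a suspicious verb+noun pair appears together."""
    words = message.lower().split()
    return not any(pair in BLOCKED_BIGRAMS for pair in zip(words, words[1:]))

print(word_list_check("do not touch the lava"))  # False: over-blocks innocent use
print(contextual_check("do not touch the lava")) # True:  context says it's fine
print(contextual_check("throw snowballs"))       # True:  allowed pair
print(contextual_check("touch snowballs"))       # False: blocked pair
```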

And then with that advancement, people were just like: okay, machine learning has it covered, you do not need any humans anymore. But very quickly, people realized that when it comes to language, with all the nuance of gaming, the different IP, the specifics of each game, the audience, all of it, you really still need the human side of things.

So I think that today we’re finally in an era where, rather than thinking that AI or humans alone are the answer, we’re finally allowing them to work together. And to your original question, AI is currently helping us with, number one, the volumes: the volumes of UGC that we’re seeing today are nothing like what we had seen before; during and after the pandemic, they were crazy.

So with that, the responsibility of finding and catching pieces that are literally time-sensitive, like threats to hurt others, becomes really, really important. So within this haystack, AI really helps us focus on what is important, what is actionable, and what has to be handled in a timely, careful way.

And the other piece, too, is trends. I see it almost like a network that allows us to make sure that every one of our customers is protected against whatever trend we see. And it could be something beneficial, something funny that is happening on the internet, and you just adapt your dictionaries to it.

But it could also be something like a threat, or a trend of hurting yourself, or something like that. So AI really helps to inform us early and to extend the efforts of all of our humans.
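As a hypothetical illustration of the "haystack" point: severity-based triage can be as simple as a priority queue that surfaces time-sensitive categories, such as threats of harm, ahead of everything else. The category names and weights below are invented for this sketch, not Keywords’ actual tooling.

```python
import heapq
from dataclasses import dataclass, field

# Invented severity weights; real systems tune these per game,
# per audience, and per regulatory requirement.
SEVERITY = {"threat_of_harm": 0, "self_harm": 0, "harassment": 1,
            "profanity": 2, "spam": 3}

@dataclass(order=True)
class Report:
    priority: int                       # lower number = seen sooner
    text: str = field(compare=False)
    category: str = field(compare=False)

def triage(reports):
    """Yield reports in the order a human should see them:
    time-sensitive categories first, regardless of arrival order."""
    queue = [Report(SEVERITY.get(r["category"], 3), r["text"], r["category"])
             for r in reports]
    heapq.heapify(queue)
    while queue:
        yield heapq.heappop(queue)

incoming = [
    {"text": "buy cheap coins!!!", "category": "spam"},
    {"text": "i'm going to hurt someone tonight", "category": "threat_of_harm"},
]
for report in triage(incoming):
    print(report.category, "->", report.text)
```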

Speaker 1

Yeah. So, I mean, in everything that you’re saying, there’s so much that AI can do, but we know that human intervention is still necessary. Within the trust and safety community, we talk a lot about trade-offs. So, is there a trade-off when considering AI versus human intervention in content moderation?

Like you sort of alluded to previously, moderators used to be very concerned that AI would replace them and take away their jobs.

Speaker 2

Yeah, and I don’t think there is a trade-off. The way that we see it, it is a tool: one more tool that allows us to see more signals. The only trade-off that I can think of off the top of my head is actually a positive one, which is, again, time, and making sure that you are seeing the content that you need to see in front of you.

And the fact that it really helps you sift out everything that is, let’s say, the hellos and the rainbows and ponies, and that it keeps most of the worst of the worst from ever being seen by any human, neither players nor our superheroes. And then it leaves the gray areas for humans to make a call.

And those gray areas are the examples I was talking about: cultural changes, pop culture, anything that is new to the game, or something like a small bathing suit that is not pornography, right?

So it’s really enhancement rather than a trade-off. When I think about AI, there are going to be trade-offs, I’m pretty sure, but more on the negative side, in how people are going to try to utilize AI to create other ways of harassment, for example. But I mean, bring it on. That’s why we’ve been fighting for so many years.

So that is why we need to get more creative in the ways that we collaborate, right?
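A minimal sketch of the split described above: a classifier score routes the obviously fine and the obviously egregious away from humans, leaving only the gray area for review. The thresholds here are hypothetical; in practice they would be tuned per title, per policy, and per risk tolerance.

```python
# Hypothetical confidence thresholds, invented for illustration.
AUTO_APPROVE_BELOW = 0.05   # "hellos, rainbows and ponies"
AUTO_REMOVE_ABOVE = 0.98    # the worst of the worst

def route(harm_score: float) -> str:
    """Decide who (or what) handles a piece of content.

    harm_score: a model's estimated probability, in [0, 1], that the
    content violates policy.
    """
    if harm_score < AUTO_APPROVE_BELOW:
        return "auto_approve"   # never shown to a moderator
    if harm_score > AUTO_REMOVE_ABOVE:
        return "auto_remove"    # never shown to a moderator
    return "human_review"       # the gray area: humans make the call

for score in (0.01, 0.50, 0.99):
    print(score, "->", route(score))
```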

Speaker 1

Yeah, and I suppose it’s the reason trust and safety exists at all, right? Exactly. That’s why we’re here. Yeah. So I know you mentioned it just there very briefly: there’s a sense that AI can help moderate the very easy content that we know isn’t egregious and isn’t going to harm anyone, and that AI can address all of the content that’s very, very obviously egregious.

But are there ways that Keywords believes AI in content moderation can support the psychological health and wellbeing of moderators specifically?

Speaker 2

Yeah. So one of the pieces that we’re very proud of at Keywords Studios is the way that we work with our clients and partners. And, I think unusually for the gaming industry, one of the requirements that we have as Keywords Studios is for our clients to actually have some kind of technology, slash AI, in place before our moderators have to do their reviews.

So we call it a line of defense, and we also call it "it’s 2024": there is technology out there today that is able to help us. Again, think about the volume, think about the content.

And this is something where we have set a standard within the industry, making sure that whenever we’re engaging with a new client, they understand that this is not just us protecting our superheroes; it is for everybody’s benefit. I could probably put together a team of 300 people, 300 superheroes, to deal with all of their live content.

But number one, without any kind of technology it would be post-moderation, and I call that damage control, because by the time we act, it’s already posted, somebody has already seen it, and somebody has to take it down. So the damage is already done. And with live content, you will miss things. And, without being dramatic, souls will be crushed if you do it that way.

So it is now a requirement of ours. And that’s why we also have so many partners on the technology side of things. It’s not that we’re making lots of money by bringing in this one partner; we’re actually not part of those conversations.

All we do is propose different partners based on each use case, but it is a requirement for us to have that kind of technology in place before our moderators.

Speaker 1

Yeah, it makes sense, right? Because as much as we want to safeguard users and players in the gaming industry, we want to be able to protect the moderators in the same way, and having that first line of defense is really, really important. And if AI can help us do that, then that’s exactly what it’s there for.

Speaker 2

Exactly. And to be honest, thankfully, many, many years later, there’s not a lot of pushback on that. In the two years that I’ve been working at Keywords Studios, I would say maybe one client was not okay with implementing technology. I think the gaming industry has gotten to that point; they understand, again, that it’s for everybody’s benefit. It is not just a Keywords standard.

Everybody wins when there’s technology involved.

Speaker 1

Absolutely. And so I suppose moderators are now moving away from the content that is most obviously egregious, and they’re reviewing more of that kind of gray-area content that is being put out onto platforms.

So if we think of this from a psychological perspective, we understand that our brain functions by seeking to categorize things into nice, neat little boxes, minimizing cognitive overload.

But is there a way that AI and human intervention can work together to address these gray areas, helping to minimize that cognitive overload for moderators while also keeping players safe while they’re gaming?

Speaker 2

Yeah, so there are two answers to that. The first side of things is that, for the rainbows and ponies being posted, we still do quality control to make sure that the filters and the technology are continuing to catch what is really obvious; sometimes we’ll find something that has to be sent to the other side.
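As a hedged sketch of that quality-control step: one common pattern is to re-review a small, stable fraction of the filter’s automatic decisions with humans, so drift in the technology gets caught early. The sampling rate below is invented for illustration, not Keywords’ actual process.

```python
import zlib

# Invented QC rate; real programs tune this against the filter's
# observed error rate.
QC_SAMPLE_PERCENT = 2  # re-review ~2% of auto-handled items

def needs_qc_review(content_id: str) -> bool:
    """Select a stable ~2% slice of the filter's automatic decisions
    for a human quality-control check (CRC32 keeps the choice
    deterministic per item)."""
    return zlib.crc32(content_id.encode()) % 100 < QC_SAMPLE_PERCENT

auto_handled = [f"msg_{i}" for i in range(1000)]
qc_batch = [c for c in auto_handled if needs_qc_review(c)]
print(f"{len(qc_batch)} of {len(auto_handled)} items sent to human QC")
```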

And then on the gray areas, what we have found, and this is why we love to work so closely with partners on the technology side, is that this allows us to find those gray areas, but not just keep working on them. Rather, we find them and we pass them along, as a circle of feedback, to our technology partners.

And then it doesn’t benefit only the specific client whose content we found it on. We send it to the technology partner, and they are able to implement those kinds of changes, if applicable, for all their other clients, for everybody else.

So that is where AI plus HI becomes even more important, because we now understand, again, that we help each other. Humans can do the job of looking into the gray areas, where I, as a leader, am already conscious and at peace that the worst of the worst is not being seen, and this work is actually going to help make these models better and more accurate on the gray areas.

I will not dare to say the gray areas get reduced, because there’s always going to be this nuance, but at least we won’t be doing repetitive work when it comes to them, because the models start learning from our actions and feedback.
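A minimal sketch of the feedback loop described here, under the assumption that each gray-area decision is logged with the model’s original score and then batched back to the technology partner as labeled training data. All file and field names are invented for this example.

```python
import csv
from datetime import datetime, timezone

# Hypothetical log of human decisions on gray-area content; the file
# name and fields are invented for this sketch.
FEEDBACK_LOG = "gray_area_feedback.csv"

def log_decision(content_id: str, model_score: float, human_label: str):
    """Record a human moderator's call so it can be batched back to
    the technology partner as labeled training data."""
    with open(FEEDBACK_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            content_id,
            model_score,
            human_label,                        # e.g. "allow" or "remove"
            datetime.now(timezone.utc).isoformat(),
        ])

# Each gray-area call becomes one more labeled example; once the
# partner retrains on the batch, the same mistake stops recurring
# for every client, not just the one that surfaced it.
log_decision("msg_12345", 0.47, "allow")
```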

Speaker 1

Yeah, and I think you bring up a really important point there. It is very much about that feedback loop: it’s not just that you notice something, tell your partner, and they implement it only for you. They’re also able to implement it for any of the other clients they’re working with, and that helps the entire trust and safety community as a whole.

And that continuous feedback with your technology partners is what really drives that and makes things better across the globe.

Speaker 2

Exactly. And the way that we work is very unique to Keywords, because in my previous lives, the way that it worked, and continues to work, is the client looks for vendors, right? So there’s a vendor that does the human side, and vendors that do the technology.

And in my past life, what I have seen is just finger-pointing, with the client in the middle trying to understand what the humans even mean by "gray area" so they can pass the message to the technology side. It is just very complicated for the client.

We found that having us at the center of it all, understanding the client’s perspective and their goals for the community, and then working side by side with partners, is far more impactful. And it’s easier, too: it’s easier for me to tell a technology partner "we’re missing this" than having to explain all of it to the client.

Otherwise, the client makes their best effort to explain what we mean, the technology partner asks a billion questions, and the time to resolve an issue triples, right?

So I think having that perspective when it comes to trust and safety, and that’s a lot of what we do, we’re connectors and translators at the same time, is very key to making sure that everything flows organically. It’s more of a team effort rather than two different, very capable vendors going in blind and not talking to each other.

Speaker 1

Yeah, and that’s, I think, one of the things I’ve noticed within the community across trust and safety over the past year or two: there’s a lot more conversation happening cross-industry and cross-functionally, and it ensures that feedback is being shared in a way that’s really beneficial and impactful.

Speaker 2

We’re living our best life in trust and safety, for sure. It’s the best year ever.

Speaker 1

So I suppose, after this conversation, is there any kind of final takeaway you would have for other gaming platforms that maybe don’t yet have such comprehensive AI support as part of their moderation teams?

Speaker 2

Yeah, I will always say talk to your closest trust and safety peer. But the other piece, too, is to look for a technology that is suited to the size you are today.

What I have seen happen many times is that, obviously, when you’re starting a new platform, a new game, you want the best of the best, but it might be too early to have the Mercedes, so to call it. You need to start small. There are many technologies out there that will give you the very basics, especially when you’re an indie or a startup. You want to make sure that you are not putting your horses before the cart.

Sorry, my English; that’s not a thing, I should not make that reference. But you’ve got to make sure that everything is proportional to what you are experiencing. These kinds of technologies usually go by volume, and that’s the cost you’re going to be paying. But again, having some kind of tool before you start is not impossible.

If you’re looking at the Mercedes, again, and you’re like, well, I cannot afford it today, that’s fine. There are public tools out there that are pretty much out of the box, not customizable, but you are still able to put those in place. And always think of what you are building like a personal life: you’re building your reputation as you’re growing your game, your platform.

So it doesn’t matter if it’s three people or 300 or 3,000 that are chatting. At the end of the day, you want to make sure that you’re providing that safety blanket so your community starts thriving from the get-go, because these are the early adopters who are going to set the pace and the voice of what the community is going to look like when you have 3 million users, right?

So there are always ways to implement, and to plan for, what we call moderation by design. So do not think that it’s not for you yet; there are a lot of pieces that you can implement prior to going live.

Speaker 1

That’s really great advice, I think. There’s no reason to shy away from it or to hide from it and feel like it’s not for you yet. There’s always some element of AI that could be implemented, and it has that dual purpose of safeguarding your moderators while also safeguarding your users, and, like you’re saying, it just helps build that community and helps your company thrive. Exactly.

Speaker 2

And reach out if you have any questions. I’m always very open. I’m on LinkedIn; that’s my social network. Pretty much only LinkedIn, but that’s where I live.

Speaker 1

Yeah, so listen, I really appreciate you taking the time, Sharon, and I think that the information you’ve shared and the advice you’ve given is going to be really useful to people.

So we’re really happy to be finishing this webinar series with Keywords Studios, and all of the links where you can find Sharon, Zevo Health, and Keywords Studios are going to be included in this description. So feel free, as Sharon has said, to reach out to us; we’d love to chat more.

Speaker 2

Thank you so much for having me, Michelle. This has been amazing. I know the team has been having a great time, too. So thank you for having us.

Speaker 1

Thanks so much, Sharon.