Industry Voices: AI is an essential tool in tackling online hate in gaming

by Arnaud Chemin, head of gaming at Bodyguard.ai

The gaming sector is booming, with the live-streaming audience for video games expected to reach 1.41 billion by 2025. Growth has been accelerated by heightened interest during lockdown, rising smartphone penetration and the emergence of mobile streamers commanding huge audiences.

However, online toxicity continues to be a major issue within the gaming community, with competitive, gladiatorial instincts spilling over into harassment with alarming frequency. Analysis from Unity shows that 72% of gamers experience harassment while playing online. The problem is magnified in online gaming by the immediacy of interactions.

Alongside live streams there are in-game chat, forums and other social channels all active at the same time. With the lifespan of an average tweet lasting just 16 minutes, and a trained human moderator taking ten seconds to process a single comment or post (six comments a minute at best), it’s apparent just how impossible (and thankless) a task it is to attempt to moderate the hundreds of thousands of comments posted every second during a live stream.
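
To put those figures in perspective, here is a back-of-the-envelope calculation. The 100,000 comments-per-second rate is an assumed stand-in for the “hundreds of thousands” cited above; the ten-second figure is the article’s own.

```python
# Back-of-the-envelope check of the article's figures: how many human
# moderators would a peak live stream need?
SECONDS_PER_COMMENT = 10        # article's figure for a trained moderator
COMMENTS_PER_SECOND = 100_000   # assumed example of "hundreds of thousands"

throughput_per_moderator = 1 / SECONDS_PER_COMMENT   # 0.1 comments/second
moderators_needed = COMMENTS_PER_SECOND / throughput_per_moderator
print(f"{moderators_needed:,.0f} concurrent moderators needed")  # 1,000,000
```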

Tackling toxicity in gaming matters because the industry is built on player experience and engagement. If that experience is threatened because gamers feel unsafe or unsupported, they may leave. And if the community disappears, is there really a game anymore?

AI has had a bad rap in the media in recent times, but automated moderation working in tandem with human moderators is essential if the online gaming industry is to thrive. Some moderation solutions (such as Bodyguard.ai) can filter significant volumes of comments in seconds across Twitch, YouTube, Discord, chat rooms, in-game chat, forums, fan communities and other social platforms. Combined with a human team focused on building community through experience and engagement, this is a very powerful approach.
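
As a rough illustration of how such a hybrid setup fits together, here is a minimal Python sketch of a human-in-the-loop moderation pipeline. Every name in it (Message, ReviewQueue, score_toxicity, the thresholds) is hypothetical and does not reflect Bodyguard.ai’s actual API; it simply shows the pattern of auto-actioning high-confidence violations while deferring borderline cases to humans.

```python
# Minimal sketch of a human-in-the-loop moderation pipeline.
# All names and thresholds are illustrative, not any vendor's real API.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Message:
    author: str
    text: str

@dataclass
class ReviewQueue:
    """Borderline messages parked here for a human moderator."""
    pending: deque = field(default_factory=deque)

    def push(self, msg: Message, score: float) -> None:
        self.pending.append((msg, score))

def score_toxicity(msg: Message) -> float:
    """Placeholder for a trained classifier: 0.0 (benign) to 1.0 (toxic)."""
    banned = {"slur1", "slur2"}  # toy stand-in for a real model
    hits = sum(word in banned for word in msg.text.lower().split())
    return min(1.0, hits / max(len(msg.text.split()), 1) * 5)

def moderate(msg: Message, queue: ReviewQueue,
             block_at: float = 0.9, review_at: float = 0.5) -> str:
    """Auto-remove clear violations; defer uncertain cases to humans."""
    score = score_toxicity(msg)
    if score >= block_at:
        return "removed"        # high confidence: act automatically
    if score >= review_at:
        queue.push(msg, score)  # uncertain: a human moderator decides
        return "queued"
    return "published"          # low risk: let it through
```

The two thresholds are the key design choice: lowering block_at removes more toxicity automatically but risks silencing passionate fans, which is exactly the trade-off discussed below.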

Users get a single, easy-to-use solution capable of providing a detailed analysis of any gaming community, spanning all player behaviour and interactions. This allows organisations to reward positive behaviour, through comprehensive insights and the ability to filter on specific categories, and to take automatic action against those sharing toxic content. Importantly, it frees up time for community managers, games developers, players and gaming leagues to focus on building and playing the games they love.

Crucially, AI can be trained to recognise culture and context, distinguishing an enthusiastic but NSFW response to a win or loss from a hateful comment directed at an individual and designed to inflict harm. That distinction matters: being passionate and vocal is part of the gaming experience and is not the same as discrimination or hatred, so it’s vital to leave space for the former while removing the latter.
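
To make the distinction concrete, here is a toy sketch of context-aware classification. The word lists and the @-mention heuristic are illustrative placeholders for a trained model, not Bodyguard.ai’s implementation; the point is only that the same heated language is judged differently depending on whether it targets another player.

```python
# Toy sketch: the same profanity is treated differently depending on
# whether it targets another player. Lexicons and the @-mention check
# are stand-ins for a trained, context-aware model.
import re

PROFANITY = {"damn", "hell"}           # heated but not hateful
HATE_MARKERS = {"trash", "garbage"}    # toy markers of directed abuse

def classify(text: str, participants: set[str]) -> str:
    lowered = text.lower()
    words = set(re.findall(r"[a-z']+", lowered))
    targets_player = any(f"@{p.lower()}" in lowered for p in participants)
    if words & HATE_MARKERS and targets_player:
        return "targeted-abuse"   # aimed at an individual: remove
    if words & PROFANITY:
        return "heated-but-ok"    # passionate reaction to the game: allow
    return "clean"

classify("DAMN what a clutch win!!", {"Ravi"})           # 'heated-but-ok'
classify("@Ravi you absolute garbage player", {"Ravi"})  # 'targeted-abuse'
```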

Online games live and die with their communities, and these powerful communities are what attract new fans. Without good moderation, the online gaming ecosystem is at risk. With automation and strong community managers, it can be a place where communities flourish and grow.
