by Mike Pappas, CEO of Modulate
So, you manage an online platform and are trying to ensure it’s safe and inclusive. You’ve identified a bad actor – not merely someone who misspoke or had a bad day, but one of the rare (yet sadly non-zero) folks who truly enjoy spending their time making others feel miserable, ashamed, or offended.
What do you do, now that you’ve found this person? The simple answer seems to be “ban them.” But this is more difficult than it seems, both for gated (paid) platforms and for free-to-play (ungated) ones. What makes this a complicated problem? And what should platforms do instead? Let’s answer those questions by breaking down the approach by platform type.
UNGATED (FREE-TO-PLAY) PLATFORMS
When an ungated platform bans a user, it needs to consider the risk of that user simply creating a brand-new account. After all, it costs them nothing to do so, and it lets them get right back onto the platform with essentially no repercussions! Some platforms attempt to solve this technically – using IP addresses and other signals to guess whether a new account actually belongs to a previously banned user. These approaches are tricky at best, though, and the existence of VPNs and similar tools means they are far from foolproof. Inevitably, some of your banned users will get back in.
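To make that concrete, here is a minimal sketch of how such heuristic matching tends to work – and why it only yields a guess, never proof. The signals and thresholds here are illustrative assumptions, not any particular platform’s detection logic:

```python
# A sketch of ban-evasion scoring, assuming hypothetical signals
# (IP address, device fingerprint) collected at account signup.
from dataclasses import dataclass

@dataclass
class SignupSignals:
    ip_address: str          # trivially rotated with a VPN
    device_fingerprint: str  # harder to change, but still spoofable

def evasion_risk(new: SignupSignals, banned: list[SignupSignals]) -> float:
    """Score how likely a new account belongs to a banned user.

    Each signal is weak on its own, so the output is a probability
    to flag for human review, never proof that justifies an auto-ban.
    """
    score = 0.0
    for old in banned:
        if new.ip_address == old.ip_address:
            score = max(score, 0.5)   # shared IP: suggestive at best
        if new.device_fingerprint == old.device_fingerprint:
            score = max(score, 0.8)   # same device: stronger, not certain
    return score
```

Even a high score from a heuristic like this warrants review rather than automatic action, for exactly the reasons above.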
This is especially problematic for online games that rely on skill-based matchmaking. Generally speaking, individuals who end up banned tend to have spent a decent amount of time in the game in question (if they hadn’t, the odds of them getting caught in time to be punished would have been low in the first place). But after being banned, they’ll create a fresh account, which will be matched with other new players.
Those new players will then play their first game with a teammate who is much, much better than them, and who also communicates in a highly toxic way. Some new players will simply be driven away; others will be impressed enough by the banned player’s skill that they’ll subconsciously begin to model themselves after them – including their toxic communication habits. This is an insidious loop: not only did you fail to truly get rid of your toxic user, you ensured your newest, most vulnerable users would be exposed to them – increasing your churn and increasing the number of new players who adopt similarly toxic styles. Despite your best efforts, everything got worse!
How do you avoid this? One option is to ensure your players are more invested in their original account, so that it’s not so cheap for them to just give up on it and create a new one. Platforms that rely heavily on in-app purchases or time-based unlocks can do this reasonably well – nobody wants to pay or wait again for something they already earned. But (as we’ll cover in the next section), this approach is still far from perfect, and in some cases is even legally tricky.
Another, more successful approach is to shift from full-on bans to more precise punishments. For extreme offenders, this could be a temporary ban (most players will endure bans of up to a week before they decide to create a new account); more often, platforms choose to forcibly mute the individual, allowing them to keep participating in the game while preventing them from communicating in ways that harm other players.
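As a sketch, a graduated-penalty ladder might look something like the following. The specific steps, durations, and thresholds are assumptions for illustration, not any platform’s actual policy:

```python
# An illustrative escalation ladder: mutes before bans, temporary
# before permanent. All thresholds here are assumptions.
from enum import Enum

class Penalty(Enum):
    WARNING = "warning"
    MUTE_24H = "mute_24h"
    MUTE_7D = "mute_7d"
    TEMP_BAN_7D = "temp_ban_7d"     # ~a week is roughly the longest ban
                                    # most players will sit out before
                                    # abandoning the account entirely
    PERMANENT_BAN = "permanent_ban"

def next_penalty(prior_offenses: int, extreme: bool) -> Penalty:
    """Escalate gradually; reserve full bans for extreme repeat cases."""
    if extreme:
        return Penalty.PERMANENT_BAN if prior_offenses >= 3 else Penalty.TEMP_BAN_7D
    ladder = [Penalty.WARNING, Penalty.MUTE_24H, Penalty.MUTE_7D, Penalty.TEMP_BAN_7D]
    return ladder[min(prior_offenses, len(ladder) - 1)]
```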
The final technique, which works best when coupled with punishments like those above, is to ensure new players have sources to learn from other than your toxic, banned users. The best way to do this is to match new players with trusted old-timers, who have proven through a reputation engine or similar that they are invested in the community and able to provide a good, safe reference point for newcomers. Another good approach is to be loud about your Code of Conduct, especially with new folks: explain it in plain language, and don’t be afraid to visibly enforce it even among new players, to show everyone else that the CoC is serious.
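For the matchmaking piece, here is a sketch of what preferring trusted veterans might look like. The reputation and tenure fields are hypothetical stand-ins for whatever a real reputation engine exposes, and the cutoffs are illustrative:

```python
# A sketch of seeding a new player's lobby with trusted veterans.
# "reputation" and "tenure_days" are hypothetical fields; thresholds
# are assumptions for the example.
def pick_teammates(candidates: list[dict], team_size: int = 3) -> list[dict]:
    trusted = [c for c in candidates
               if c["reputation"] >= 0.8 and c["tenure_days"] >= 90]
    # Fall back to ordinary matchmaking if too few veterans are online.
    pool = trusted if len(trusted) >= team_size else candidates
    return sorted(pool, key=lambda c: c["reputation"], reverse=True)[:team_size]
```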
GATED (PAID) PLATFORMS
Gated platforms would seem to have an easier task. If they ban the offending user, then that user would need to pay a significant amount of money to get back into the game. This will keep most folks out, and those who do come back are likely to be more conscientious out of a desire to avoid having to pay a third time if they get banned again. All in all, there would seem to be no problem here.
The challenge here arises on the legal end. When the user originally paid $60, were they buying something they now own? Or were they paying an entrance fee to a theme park that only honors tickets for a limited time? This is a complex question to answer, and it bears significantly on your right to ban a user. If they merely purchased a ticket which was always going to expire, your obligation is minimal; but if they thought they were buying something they owned, and now you are taking their stuff away, things tend to get more contentious.
I am not a lawyer, and I advise anyone worried about this to talk to an actual attorney – none of this is legal advice. With that caveat, though, there are a few practices many platforms adopt that seem to minimize their risk.
First, have a clear code of conduct built into the terms of service, one which plainly explains what is prohibited and states that the platform may ban users who violate it. Additionally, preserving evidence of the violation and offering an appeals system are invaluable for demonstrating that due diligence was observed before taking this extreme action. Finally, platforms wishing to be especially careful could consider partially refunding the price paid for the title, though this step appears quite uncommon.
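To illustrate the record-keeping side, here is a minimal sketch of what a defensible ban record might track. The field names are assumptions for illustration, not a prescribed schema:

```python
# A sketch of a ban record that ties the action to a specific Code of
# Conduct clause, preserves the evidence, and tracks any appeal.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BanRecord:
    user_id: str
    coc_clause: str              # which Code of Conduct rule was violated
    evidence_refs: list[str]     # e.g. chat logs, audio clips, report IDs
    issued_at: datetime
    appeal_status: str = "open"  # "open", "upheld", or "overturned"
    refund_issued: bool = False  # partial refunds are rare but possible

    def is_defensible(self) -> bool:
        """A defensible ban cites a specific rule and keeps its evidence."""
        return bool(self.coc_clause and self.evidence_refs)
```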
Even for gated platforms that have protected themselves from legal liability, though, bans should be an action of last resort. As was briefly alluded to at the beginning of this article, most bad behavior comes from players who are misinformed, frustrated, or otherwise behaving anomalously. Apex Legends found that 85% of recidivism went away when they started explaining to users why they had been banned; studies by studios like Riot and Activision Blizzard, along with our internal data at Modulate, similarly find that only a small percentage of players willfully misbehave. Whether a platform is gated or ungated, being safe and inclusive means not taking the easy way out, but instead investing in making the platform work for everyone who is willing to reciprocate by putting in the effort to behave appropriately.
—
For more insights from Mike and Modulate’s team of voice moderation experts, you can sign up for their Trust & Safety Lately newsletter. To learn more about ToxMod, visit toxmod.com