In any game with a chat function, you’ll inevitably come across offensive messages. Google and FACEIT are attempting to tackle this issue with their new Minerva AI, which uses machine learning to issue warnings to, and even ban, toxic players. If early tests are any indication, the approach appears to be working.
Announcing the feature on its blog, FACEIT, a gaming client with a focus on competitive gaming, showed off Minerva and all its capabilities and achievements so far. Since its rollout in late August, Minerva has analysed over 200,000,000 chat messages, with the program identifying over 7,000,000 as toxic.
The AI program has issued over 90,000 warnings and banned over 20,000 players for toxic behaviour. This has reduced toxic messages by over 20%, with fewer unique players sending offensive messages. A full explanation of how the AI works, written by Jigsaw (a subsidiary of Alphabet), can be read here.
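To make the warn-then-ban flow described above concrete, here is a minimal sketch of what such a moderation pipeline could look like. This is purely illustrative and not Minerva's actual design: the real system uses a machine-learning classifier built with Jigsaw, whereas the `toxicity_score` stub below is a crude keyword heuristic standing in for it, and the threshold and strike-count values are invented for the example.

```python
WARN_THRESHOLD = 0.7   # hypothetical score above which a warning fires
BAN_STRIKES = 3        # hypothetical number of warnings before a ban

def toxicity_score(message: str) -> float:
    """Stand-in for the ML classifier: a crude keyword heuristic.

    The real Minerva model is a trained classifier; this stub only
    exists so the pipeline below can run end to end.
    """
    toxic_words = {"idiot", "trash", "uninstall"}
    words = message.lower().split()
    hits = sum(w.strip(".,!?") in toxic_words for w in words)
    return min(1.0, hits / max(1, len(words)) * 3)

class Moderator:
    """Tracks warnings per player and escalates repeat offenders to a ban."""

    def __init__(self):
        self.strikes = {}  # player name -> warning count

    def handle(self, player: str, message: str) -> str:
        if toxicity_score(message) < WARN_THRESHOLD:
            return "ok"
        self.strikes[player] = self.strikes.get(player, 0) + 1
        if self.strikes[player] >= BAN_STRIKES:
            return "ban"
        return "warn"
```

A message below the threshold passes through untouched; anything above it earns the sender a strike, and accumulating enough strikes triggers a ban, mirroring the warnings-then-bans escalation FACEIT describes.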
Despite Minerva’s already clear success, FACEIT claims this is just the “first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI”. FACEIT’s end goal is to be able to “detect and address all kinds of abusive behaviors in real-time”.
Toxic behaviour in the gaming community has long seemed like an impossible problem to solve. With artificial intelligence like Minerva, however, it seems we might finally be able to remove the problematic aspects of gaming, allowing everyone to focus on having fun playing video games.
KitGuru says: What do you think about this recent development by FACEIT? Would you like to see Minerva expanded and adopted by all major gaming services, or are you concerned about false positives while the technology matures? Let us know down below.