Moderation | KitGuru
https://www.kitguru.net

Call of Duty introduces real-time voice chat moderation using AI
https://www.kitguru.net/tech-news/mustafa-mahmoud/call-of-duty-introduces-real-time-voice-chat-moderation-using-ai/
Thu, 31 Aug 2023 08:00:27 +0000

The video game industry has – particularly in the past – been notorious for its toxicity when it comes to online gaming and voice chats. As one of the biggest franchises in gaming, Call of Duty has always struggled to get some of its players to rein in such behaviour. Activision's latest initiative sees its anti-toxicity team implement a new real-time voice chat moderation feature that uses AI to detect negative behaviour.

Making the announcement on the official blog, the Call of Duty staff said: “Call of Duty’s new voice chat moderation system utilizes ToxMod, the AI-Powered voice chat moderation technology from Modulate, to identify in real-time and enforce against toxic speech—including hate speech, discriminatory language, harassment and more.”

They continued, “This new development will bolster the ongoing moderation systems led by the Call of Duty anti-toxicity team, which includes text-based filtering across 14 languages for in-game text (chat and usernames) as well as a robust in-game player reporting system.”
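
Neither Activision nor Modulate has published implementation details, so the following is only a rough, illustrative sketch of how a real-time voice moderation pipeline of this kind could be wired together: transcribe short chunks of voice audio, score the transcript against toxicity categories, and queue anything above a confidence threshold for enforcement review. Every function name, category and threshold below is an assumption, not Activision's actual system.

```python
# Hypothetical sketch of a real-time voice moderation pipeline, loosely based on
# the behaviour described above. All names, categories and thresholds are invented.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Flag:
    player_id: str
    transcript: str
    category: str
    score: float


# Toy keyword lookup standing in for a real toxicity classifier such as ToxMod.
TOXIC_TERMS = {"slur_a": "hate_speech", "threat_b": "harassment"}


def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text step; a real system would call an ASR model."""
    return audio_chunk.decode("utf-8", errors="ignore")


def classify(transcript: str) -> Tuple[str, float]:
    """Return a (category, confidence) pair for the transcript."""
    for word in transcript.lower().split():
        if word in TOXIC_TERMS:
            return TOXIC_TERMS[word], 0.95
    return "clean", 0.0


def moderate_chunk(player_id: str, audio_chunk: bytes, threshold: float = 0.8) -> Optional[Flag]:
    """Flag a voice chunk for enforcement review if it scores above the threshold."""
    transcript = transcribe(audio_chunk)
    category, score = classify(transcript)
    if category != "clean" and score >= threshold:
        return Flag(player_id, transcript, category, score)
    return None


if __name__ == "__main__":
    print(moderate_chunk("player123", b"that was a threat_b mate"))
```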

While the global release of this new initiative will go live with the launch of Call of Duty: Modern Warfare III on 10th November, a beta of the new system is already being tested in the US within Modern Warfare II and Warzone.

While using AI to detect toxicity could do a lot of good for the online space, there is a risk that it flags up plenty of false positives. It'll be interesting to see how this beta period goes, and whether the new tool ends up doing more good than harm. We will have to wait and see.

Discuss on our Facebook page HERE.

KitGuru says: What do you think of this new AI-powered moderator? Do you use voice chat when gaming online? Have you ever been suspended/banned? Let us know down below.

Investigation finds Facebook moderators trained to overlook certain kinds of abusive content
https://www.kitguru.net/channel/generaltech/ryan-burgess/investigation-finds-facebook-moderators-trained-to-overlook-certain-kinds-of-abusive-content/
Mon, 23 Jul 2018 11:28:49 +0000

Facebook has been at the centre of a few major controversies recently. The social network has been embroiled in everything from privacy scandals to politics. Another long-criticised aspect of Facebook is its inconsistent moderation of offensive material. Much like the undercover Cambridge Analytica investigation, an undercover reporter has now turned their attention to Facebook, with a documentary set to shed some light on the network's inconsistent moderation.

This new undercover documentary is titled “Inside Facebook: Secrets of the Social Network” and it will air on Channel 4. The investigation involved sending an undercover reporter to work at CPL Resources, a content moderation contractor based in Dublin; Facebook outsources some of its moderation to third-party companies like CPL Resources.

Once inside CPL, the undercover reporter found that content moderators were poorly managed and given inconsistent instructions, leading to uneven enforcement against certain types of content. As Business Insider reports, the investigation also discovered that trainees were told to ignore racist memes, despite such content breaching Facebook's terms of service. In some cases, moderators would also overlook child abuse content.

One example of this involves a video of a man beating a young child. This video was reported to Facebook back in 2012, but it was only taken down recently, one week after Channel 4 brought it to the company's attention.

KitGuru Says: Facebook really needs to stamp down on the disgusting material that is allowed to propagate on its platform. What do you make of Facebook’s moderation policies?

Update: Twitch AutoMod is already having some success
https://www.kitguru.net/gaming/jon-martindale/twitch-automod-will-try-to-stamp-out-chat-abuse-2/
Thu, 15 Dec 2016 08:22:53 +0000

The automated moderation algorithm introduced by game streaming site Twitch earlier this week is already paying dividends. According to a number of streamers who traditionally have to deal with abuse or fighting in their attached chat, the amount of abusive chatter has fallen dramatically.

Traditionally, viewing any popular Twitch stream's chat means you may find at least a few people using the platform to be abusive or hateful. Hey, it's the internet, right? With sometimes tens of thousands of viewers in an individual chat, though, moderating manually was nigh-on impossible, but Twitch's algorithmic approach seems to be working.

AutoMod works by flagging up content for human moderators to decide whether or not it constitutes a bannable offence, and it has already helped clean up streams for the likes of Alex Teixeira. Kotaku quotes him as saying that it works well and has made it far easier for him to clear out racist and homophobic messages, even when the text is deliberately misspelled to try and dodge the filter.
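
Twitch has never detailed how AutoMod copes with deliberate misspellings, but as a purely illustrative sketch, a filter along these lines might normalise common character substitutions before matching, then hold hits in a queue for a human moderator rather than acting automatically. The blocklist and substitution map below are invented for the example.

```python
import re
from collections import deque

# Hypothetical blocklist and substitution map; Twitch's real word lists and
# normalisation rules are not public.
BLOCKED = {"hate", "racistword"}
SUBSTITUTIONS = str.maketrans({"4": "a", "3": "e", "1": "i", "0": "o", "$": "s", "@": "a"})

review_queue = deque()  # (user, original message) pairs awaiting a human decision


def normalise(message: str) -> str:
    """Lower-case, undo common character substitutions, collapse repeated letters."""
    text = message.lower().translate(SUBSTITUTIONS)
    return re.sub(r"(.)\1{2,}", r"\1", text)  # "haaaate" -> "hate"


def check_message(user: str, message: str) -> bool:
    """Return True if the message is held for human review rather than posted."""
    if set(normalise(message).split()) & BLOCKED:
        review_queue.append((user, message))
        return True
    return False


print(check_message("troll42", "h4te h4te h4te"))  # True: held for a moderator to judge
print(check_message("viewer1", "great play!"))     # False: posted as normal
```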

The important part, he says, is that it denies the people behind such messages the attention they want for causing problems. Better yet, “nobody gets the idea to jump on the bandwagon” and join in with the messages.

Another streamer, Little Siha, has had similar success with AutoMod, we're told. Not only has it made her chat a cleaner, nicer place for non-abusive viewers, it has also made her moderators' job far easier: they can now find most of the troublemakers in one place, rather than having to spot them in the rapidly scrolling chat.

Importantly, AutoMod will allow streamers to set their own thresholds. They'll be able to select which words or types of content are allowed through the filter, thereby setting the tone, or at least the social bar, for what is and isn't acceptable in-stream.
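
Twitch hasn't documented its rule format, but conceptually a per-channel configuration could map content categories to a tolerance level, with messages held for review once their severity crosses that channel's bar. The category names, levels and scoring in this sketch are assumptions for illustration only, not Twitch's real AutoMod settings.

```python
# Conceptual sketch of per-channel filter thresholds; categories and levels are invented.
# 0 = never hold this category, 3 = hold even mild instances (severity 1-3).
CHANNEL_SETTINGS = {
    "family_friendly_stream": {"profanity": 3, "hostility": 3, "identity_attacks": 3},
    "late_night_stream": {"profanity": 0, "hostility": 2, "identity_attacks": 3},
}


def should_hold(channel: str, category: str, severity: int) -> bool:
    """Hold a message for review when its severity meets the channel's tolerance bar."""
    level = CHANNEL_SETTINGS.get(channel, {}).get(category, 0)
    return level > 0 and severity >= (4 - level)


print(should_hold("family_friendly_stream", "profanity", 1))    # True: even mild profanity is held
print(should_hold("late_night_stream", "profanity", 3))         # False: profanity filtering is off
print(should_hold("late_night_stream", "identity_attacks", 1))  # True: identity attacks always held
```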

[Image: Twitch chat. Source: KingDenino/YouTube]

Twitch claims that AutoMod will learn over time too, becoming better at detecting clever ways to circumvent its filters.
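
Twitch hasn't said how that learning works in practice, so the snippet below is only a crude, hypothetical illustration of the feedback loop: when a human moderator confirms that a held message really was abusive, its normalised tokens are folded back into the filter so future variants are caught. A real system would update a statistical model rather than grow a word list.

```python
# Crude, hypothetical illustration of moderator feedback improving a filter over
# time. Twitch's actual training loop is undisclosed.
BLOCKED = {"hate"}
SUBS = str.maketrans("4310$", "aeios")  # undo common character substitutions


def normalise(token: str) -> str:
    return token.lower().translate(SUBS)


def learn_from_verdict(message: str, confirmed_abusive: bool) -> None:
    """Fold tokens from confirmed-abusive messages back into the blocklist.

    A real system would be far more selective and would retrain a classifier
    rather than add every word it sees.
    """
    if confirmed_abusive:
        BLOCKED.update(normalise(word) for word in message.split())


learn_from_verdict("l0ser", True)
print("loser" in BLOCKED)  # True: the next "l0ser" or "loser" gets caught
```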

Currently available in English – though there are beta tools available for Arabic, French, German, Russian and more – AutoMod is likely to ruffle a few feathers during its first few weeks of use. But it's already proving effective in some circles and in time it could well become much more capable than any human moderation team could ever hope to be.

“We equip streamers with a robust set of tools and allow them to appoint trusted moderators straight from their communities to protect the integrity of their channels,” Twitch moderation lead Ryan Kennedy said (via VentureBeat). “This allows creators to focus more on producing great content and managing their communities. By combining the power of humans and machine learning, AutoMod takes that a step further. For the first time ever, we’re empowering all of our creators to establish a reliable baseline for acceptable language and around the clock chat moderation.”

Discuss on our Facebook page, HERE.

KitGuru Says: Usually I'm not a fan of filtering or moderation, but I can understand the difficulties Amazon faces here. With some streams having tens or even hundreds of thousands of simultaneous viewers, all of whom can contribute to chat at once, that's simply impossible to moderate with human hands alone. Something like AutoMod may be the only way to instil some base level of civility in a chat that size.

Twitch AutoMod will try to stamp out chat abuse
https://www.kitguru.net/gaming/uncategorized/jon-martindale/twitch-automod-will-try-to-stamp-out-chat-abuse/
Tue, 13 Dec 2016 08:46:53 +0000

Game streaming site Twitch has announced the introduction of new automated moderation tools, which will try to analyse the intent of messages and block those deemed hateful or abusive. Designed to do what no team of human moderators could achieve, Twitch claims AutoMod will become more effective over time thanks to machine learning.

Viewing any popular Twitch stream's chat is a near-impossible task – the messages simply fly by too fast to keep up. However, if you do manage to break through the maelstrom and see what's beyond, you may find at least a few people using the platform to be abusive or hateful. Hey, it's the internet, right? But that's something Twitch hopes to put a stop to with AutoMod.

This isn't simply a profanity filter that users can dodge by spacing out their words, though; AutoMod should leave harmless swearing alone. It's when the tone of a message is aggressive towards another person, be they a chat user or the streamer, that AutoMod should step in. Understandably, Twitch's parent firm Amazon hasn't explained how it does this, as that would allow people to circumvent the filtering. But we are told it will get better over time, as it learns to better analyse conversations.
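
Twitch's actual model is undisclosed, but the distinction described above, leaving harmless swearing alone while holding abuse aimed at a person, can be illustrated with a deliberately crude heuristic: only hold a message when an insult co-occurs with a sign it is directed at someone. The word lists here are invented for the example.

```python
import re

# Purely illustrative word lists; Twitch's actual model and signals are undisclosed.
INSULTS = {"idiot", "trash"}
TARGETING = {"you", "your", "u"}  # crude signal that a message is aimed at someone


def is_directed_abuse(message: str) -> bool:
    """Hold a message only when an insult co-occurs with a sign it targets a person."""
    words = set(re.findall(r"[a-z']+", message.lower()))
    return bool(words & INSULTS) and bool(words & TARGETING)


print(is_directed_abuse("well that run was damn unlucky"))  # False: harmless swearing, left alone
print(is_directed_abuse("you absolute idiot, uninstall"))   # True: aggressive and targeted, held
```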

[Image: Twitch chat. Source: KingDenino/YouTube]

It will certainly have a bevy of content to work with and learn from.

Currently available in English – though there are beta tools available for Arabic, French, German, Russian and more – AutoMod is likely to ruffle a few feathers during its first few weeks of use, as it will likely either be too heavy-handed or too lax while it learns the ropes. But in time it could well become much more capable than any human moderation team could ever hope to be.

“We equip streamers with a robust set of tools and allow them to appoint trusted moderators straight from their communities to protect the integrity of their channels,” Twitch moderation lead Ryan Kennedy said (via VentureBeat). “This allows creators to focus more on producing great content and managing their communities. By combining the power of humans and machine learning, AutoMod takes that a step further. For the first time ever, we’re empowering all of our creators to establish a reliable baseline for acceptable language and around the clock chat moderation.”

Discuss on our Facebook page, HERE.

KitGuru Says: Usually I'm not a fan of filtering or moderation, but I can understand the difficulties Amazon faces here. With some streams having tens or even hundreds of thousands of simultaneous viewers, all of whom can contribute to chat at once, that's simply impossible to moderate with human hands alone. Something like AutoMod may be the only way to instil some base level of civility in a chat that size.
