Moderation accuracy significantly improved
Moderation has been an issue at Chatroulette for 10 years, and despite huge investment in people, research, and technology, it seems moderation is simply a hard problem – made even harder at scale.
While we don’t expect to make huge gains quickly, we have made some small improvements that are immediately apparent in both the statistics and in the experience of using the site. The basic objective of moderation is to decide whether a user is ‘playing by the rules’ as quickly as possible so as to limit their potential impact on the rest of the community.
Human moderators are the best at this sort of thing, but they’re relatively slow and of course need regular breaks. Computer models are fast, but even the state of the art is not that accurate, and because we’re hyper-sensitive to false positives (i.e. users who are incorrectly banned), models are not a viable solution on their own.
Our new technique blends the results of human and machine efforts to maximise the efficacy of moderation as a whole. We use machines when they’re extremely confident and we fall back to humans for the corner cases.
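The blending described above can be sketched as a simple confidence-threshold router. This is a hypothetical illustration, not our actual implementation: the threshold values and the function and label names are assumptions made for the sake of the example.

```python
# Hypothetical sketch of blending machine and human moderation:
# act automatically only at the extremes of model confidence,
# and queue everything in between for a human moderator.

AUTO_BAN_THRESHOLD = 0.99   # assumed value: machine must be extremely confident
AUTO_PASS_THRESHOLD = 0.01  # assumed value: machine is confident content is fine


def route(model_score: float) -> str:
    """Decide who handles a stream, given the model's estimated
    probability that it contains rule-breaking content."""
    if model_score >= AUTO_BAN_THRESHOLD:
        return "auto_ban"      # machine is extremely confident: act immediately
    if model_score <= AUTO_PASS_THRESHOLD:
        return "auto_pass"     # machine is confident the user is playing by the rules
    return "human_review"      # corner case: fall back to a human moderator


# Only the extremes are handled automatically; the middle goes to humans.
print(route(0.999))  # auto_ban
print(route(0.5))    # human_review
print(route(0.001))  # auto_pass
```

The point of the wide middle band is exactly the false-positive sensitivity mentioned above: a borderline score never bans anyone automatically.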
This is hardly a new approach, but the upshot is a real reduction in explicit adult content (i.e. someone exposing their genitals when you first connect to them): we’ve reduced this kind of content from 23% of connections to 9%.
This is a great improvement, but it’s hard to get too excited when roughly 1 in 10 conversations is still compromised.
This is a story that will continue to unfold. It now seems likely that further improvements will require more severe interventions.
-- AD.