Let's say they are able to build an AI that detects toxic comments.
They can even make it fairly accurate, with only 1 error out of every 1000 detections.
At that scale, sooner or later they will have a false positive that gets a lot of bad publicity as censorship, or a false negative on somebody truly horrible.
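To put that error rate in perspective, here's a back-of-the-envelope sketch in Python (the daily comment volume is a made-up illustrative figure, not a real platform statistic):

    # Even a 99.9% accurate classifier makes a steady stream of mistakes
    # at platform scale. The volume figure below is hypothetical.
    error_rate = 1 / 1000           # 1 error per 1000 detections
    comments_per_day = 100_000_000  # hypothetical daily comment volume

    errors_per_day = comments_per_day * error_rate
    print(f"Expected misclassifications per day: {errors_per_day:,.0f}")
    # -> Expected misclassifications per day: 100,000

Every one of those misclassifications is a potential PR incident.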
Just cutting all comments on kids' videos seems like the only option that allows them to give the impression that they are taking the problem seriously.
It's just like how Google Photos could not afford to label dark-skinned people as gorillas even 1 time out of 100,000 photos. Better to just remove that label entirely.