They already do this and so does Microsoft. Many companies share image fingerprints (perceptual hashes that survive resizing) and have a small set of people granted permission by the Dept of Justice to verify and report such images. It's a small number of people, and at least one Microsoft employee filed a lawsuit over the PTSD he developed from having to verify and report illegal images.
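PhotoDNA itself is proprietary, but the underlying idea is easy to demonstrate with an open-source perceptual hash. Here's a minimal sketch using the Python imagehash library; the file names and the distance threshold are illustrative, not drawn from any real fingerprint database:

    # PhotoDNA is proprietary; the open-source imagehash library illustrates
    # the same idea: a perceptual hash that survives resizing, so
    # near-duplicates of known images can still be matched.
    import imagehash
    from PIL import Image

    def fingerprint(path):
        # phash is stable under resizing and minor recompression.
        return imagehash.phash(Image.open(path))

    known = fingerprint("known_image.jpg")       # hypothetical file names
    candidate = fingerprint("resized_copy.jpg")

    # Subtracting two hashes gives the Hamming distance; a small distance
    # means the images are almost certainly the same picture.
    if known - candidate <= 5:  # threshold is a tuning choice, not a standard
        print("probable match against the shared fingerprint set")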
As far as these YouTube videos go, Google does remove all content with clear abuse and reports it to the authorities in the US, and possibly to the country of origin. In this particular case, none of these videos have illegal content. Most of them are videos of teens filming themselves. It's others who go through and place them into a creepy context by grouping all of them together in playlists and via comment-chain links.
Creating any type of honeypot is most likely unethical and illegal. Just look at what happened to the FBI in the Playpen case.
> It's others who go through and place them into a creepy context by grouping all of them together in playlists and via comment-chain links.
Wanted to pull this out as (IMO) the most important part of your comment. There's nothing for the police to go after, because nothing illegal was happening here. The videos themselves were 100% benign, and while the comments were creepy and awful, they were also free speech.
So they cannot be censored, sure. But can they not be probable cause for a temporary tap on the commenter's internet connection? And if that reveals things like HTTPS connections to obscure sites whose content can't be viewed publicly, Tor traffic, or maybe torrent traffic, that might be suspicious enough to bust down the door while the connections are live.
I'm against dragnet surveillance and all, but clearly predatory comments are a red enough flag that even I start to think it's warranted to escalate step by step, continuing based on what each step turns up.
Of course, this entirely depends on whether the comments are "oh look at that sweetie, probably has a nice puss" or just "oh look at that sweetie" interpreted the wrong way. I'm on the second page of HN comments and have read three linked articles about it, and nobody has mentioned what it's actually about beyond "predatory comments on videos involving minors".
Edit: As expected, Dutch news coverage is more explicit than the prudish American reporting. One example comment is: "$time $camel_emoji toe... Then again at $time". Another: "Hi honey baby where r u from??". A last example is just "Love you" with a bunch of emojis like hearts and presents. Another website mentions the kids were called "godess" or "barbie".

This sounds like it would be extraordinarily easy to build a blacklist of words that send new comments into a moderation queue (processed either by YouTube itself or by the video owner)... so I'm not sure what the difficulty is, exactly. Detecting kids in videos seems harder than building a blacklist for these comments, except that the former can be automated while the latter is ongoing moderation (probably too simple for Google: "if you can't automate it, it can never scale, so it can never be a solution").
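To show how simple such a filter could be, here's a minimal sketch of a blacklist plus moderation queue in Python. The terms are lifted from the quoted comments above; the function name and everything else is made up for illustration:

    import re

    # Example terms from the quoted comments; a real list would be curated
    # and much longer. Everything here is illustrative.
    BLACKLIST = ["godess", "barbie", "honey baby"]

    # Word boundaries so innocent substrings don't trigger the filter.
    PATTERN = re.compile(
        r"\b(" + "|".join(re.escape(term) for term in BLACKLIST) + r")\b",
        re.IGNORECASE,
    )

    moderation_queue = []

    def handle_new_comment(text):
        # Returns True if the comment may be published immediately;
        # otherwise it is held for review by YouTube or the video owner.
        if PATTERN.search(text):
            moderation_queue.append(text)
            return False
        return True

    assert handle_new_comment("Great video!")
    assert not handle_new_comment("Hi honey baby where r u from??")

The hard part isn't the matching, it's maintaining the list as commenters adapt, which is exactly the ongoing moderation work that doesn't automate away.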