Chris Ulmer, who runs the popular YouTube channel Special Books for Special Kids, said that comments were removed from his channel despite no evidence of unacceptable comments, and YouTube told him that if he turned comments back on, he risked his channel being deleted. [EDIT: Turns out, this is not actually the case. See child comments.]
"Last night I realized all of the comments on SBSK's YouTube channel were disabled. I saw I could manually turn them back on so I did. Then I read a post by YT saying that by turning comments on I risk our channel being deleted. I love and respect YT but IDK what to do.
"The beauty of SBSK is the love and acceptance in the comment section. It shows families and individuals across the world that their [sic] are people who accept them. Many people I interview have been socially isolated. Comments can change their self perception."
So it sounds as if YouTube content creators are now in the unenviable position where they need to actively moderate the comments section for videos featuring children, and if they don't do so to YouTube's satisfaction, they could have their entire channel nuked. Even if you're pretty darn sure that your commenters will behave themselves, that doesn't sound like a good deal.
Seems like YouTube will need to come up with some sort of "trusted subscriber" designation, and allow content creators to permit comments only from those subscribers, so that any random bad actor can't swoop in and destroy a channel.
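Such a "trusted subscriber" gate could be sketched as a simple allowlist check at comment-posting time. This is a hypothetical illustration only; the `Channel` class, `trust`, and `post_comment` are invented names and do not correspond to any real YouTube API:

```python
# Hypothetical sketch of a "trusted subscriber" comment gate.
# All names here are made up for illustration; this is not YouTube's API.

class Channel:
    def __init__(self):
        self.trusted = set()   # user ids the creator has personally vetted
        self.comments = []     # accepted (user_id, text) pairs

    def trust(self, user_id):
        """Creator manually marks a subscriber as trusted."""
        self.trusted.add(user_id)

    def post_comment(self, user_id, text):
        """Accept comments only from trusted subscribers;
        everyone else is silently dropped (or held for review)."""
        if user_id not in self.trusted:
            return False
        self.comments.append((user_id, text))
        return True

channel = Channel()
channel.trust("longtime_fan")
assert channel.post_comment("longtime_fan", "Lovely video!") is True
assert channel.post_comment("random_account", "spam") is False
```

The point of the design is that a drive-by bad actor never reaches the comment section at all, so the creator's channel can't be put at risk by strangers.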
I might be in the minority here, but I enjoy the low-quality YouTube comments. Most people dismiss them as a cancer, but the reality is that a lot of people think in the patterns that drive these comments.
I would rather be in touch with and exposed to this than try to pretend it doesn't exist. It won't ever go away; it will just be hidden.
I think that there's an element of validation and indoctrination that is a serious concern here. Beyond the significant overall lowering of the discourse that comes from allowing garbage-quality trolls and other horrible comments, and beyond bullying, aggressive behavior, stalking, and other things that are broadly considered unpleasant, this kind of behavior breeds more of this behavior.
Consider someone growing up on the internet. The more they're exposed to this sort of content, the more it will be normalized in their mind. The more they're exposed to this sort of content, the more likely susceptible people are to be radicalized by it, and to grow into the same sort of troll. Hiding or suppressing low-quality content creates a herd immunity effect. It prevents a shift of the Overton Window to where "generally allowable discourse" suddenly includes timestamping the most "salacious" parts of a child's video, or telling people to actually kill themselves, or sharing their purely racist, hate-filled viewpoints.
While you may rather be exposed to it because you have the strength and capability to view it as a curiosity and a sociological study, the impressionable among us maybe deserve to not be bombarded with garbage, and we may have a moral duty to at least make some effort to minimize indoctrination and radicalization on these platforms.
Looks to me like you guys are talking past one another.
disillusioned is taking the position of The Internet. If we allow these things, things will just continue to get worse and worse. Standards exist for a reason, Gresham's Law, and the rest of it. (Including some wonderful pleas for civility and kindness that I completely agree with)
raxxorrax is taking the position of The Human Species. We have survived and evolved because we are a wild, aggressive, curious species. Whatever boundaries there are, we will push them as hard as we can. Life will find a way.
Both of you folks are correct. This is the crux of the problem with technologies like YouTube (or books, for that matter). If YouTube is a publisher, then it has an opinion, a political position, standards for what it thinks is civil, and so forth. That's great, but there's no freaking way in hell I'm going to agree to having a handful of companies decide that kind of thing for the entire planet. The idea alone is insane. I just read yesterday about people getting warnings from Twitter for things they posted 5 or 6 years ago that somebody in a dictatorship found offensive last week. It's ludicrous.
If, however, they're a public forum, then they should just shut up and stop looking at the things that appear on their site. The public forum -- books -- requires chaos, disorder, and evolutionary pressure. Can't fight Mother Nature. Life will evolve.
But they can't do that, can they? Because they're monetizing all of that content. So they have to keep close track of each little piece of everything.
Now they're stuck.
They want to have it both ways, and we feeble-minded folks watching this spectacle end up choosing one or the other. It's a sucker's game, and no matter which side we choose, it doesn't work. That's because the premise is broken. BigTech wants to be two things at the same time. Picking one of those things and arguing with folks who pick the other one just plays into their schtick and keeps the gravy train rolling along.
> I think there is no evidence of the mechanisms you describe.
Which part? If we are talking about this part:
>> I think that there's an element of validation and indoctrination that is a serious concern here.
Then I think there is evidence of that being true.
Take, for example, "incels". They gather in online forums where they construct a worldview built entirely on misogyny, entitlement, and hate. The ideas they spread among themselves are much worse than what most of those people would probably come up with on their own. In their echo chambers they validate these ideas to one another and make them seem acceptable.
Those horrible people still exist either way; they just don't say out loud what they think. You don't restrict people thinking that way, only people talking out loud about it. The no-platform approach was rather successful in Germany when it came to pushing the extreme right out of the public discourse and banning their parties. But that only works as long as they don't organize a platform of their own. Then you have a far-right party rising into parliament and entering as the second-largest party in some states.
People don't change; you just filter them from your view of reality. And it's no surprise that people who bothered to look knew rather well that they existed.
No, I am literally talking about the group of “incels” that are specifically engaging in hate speech.
The people I am talking about here are outright promoting rape, violence and child abuse. Not indirectly. Not between the lines. Just straight up stating those kinds of things.
> don't we have decades of evidence from online communities showing that?
In my experience the community changes as a whole, but not the individual actors. Once a community tips, the member base changes: people displeased with the behavior leave, and people attracted to it join.
As you are clearly blurring the lines between this specific issue and a more general one, I'm going to reference the latter and exclusively the latter. There's a wide gulf between pedophilia and appeals against the ever more amorphous 'hate speech.' So on that note, it's interesting that if you look at support for eugenics in the early 20th century US, it was practically a who's who of academia: the Carnegie Institution, the Rockefeller Foundation, W.E.B. Du Bois, Harvard, Stanford, etc. [1] One might argue that such views were "purely racist, hate-filled viewpoints." The very reason freedom of speech was such a revolutionary concept is that authority figures, since time immemorial, have been able to propagate bad ideas and manipulate the general population by making false statements which could not be challenged.
In contemporary times a good example of this is Iraq. Our invasion was precipitated by fabricated evidence and appeals to authority of the sort 'x intelligence agencies have proven beyond any doubt that Iraq has or is pursuing nuclear weapons of mass destruction.' After the fact, when an individual [2] tasked with investigating whether Iraq was trying to purchase uranium found that they absolutely were not and reported as such (the government would go on to claim they were), a government official outed his wife as a covert CIA operative (which the Washington Post would go on to publish), not only potentially endangering her but terminating her career.
In the case of eugenics it wasn't so much a conspiracy as that authority and academia were simply collectively wrong, as has often occurred. Censorship benefits those in power and only those in power. Those in power choosing to inhibit censorship, as happened with the First Amendment, was quite the revolution! The point of free speech is to ensure not only that with the good comes the bad, but also that with the bad comes the good. When you begin to tolerate censorship by one side, you very much risk that the side doing the censoring is the bad one. And in the effort to obtain only the good, you end up with only the bad.
In today's world, free speech and corporations are becoming a major new issue. An ever larger percentage of all human communication is digital, and digital communication is extremely monopolized. This means, for instance, that the US government could effectively circumvent the First Amendment by simply pressuring a single company rather than passing a law against the dissemination of a given viewpoint. It also means that a very small handful of people could end up censoring or otherwise manipulating public discourse for billions. That is exactly what the First Amendment sought to prevent; only the founding fathers could never have imagined a corporation (under which two people have majority control) having more power over speech than any government in the world.
This is one of the reasons I think anonymity on the Internet is a bad thing. Not that I am saying it should be banned, but it should not be the norm. Much of the toxicity simply would not happen, could be prosecuted, or could be filtered if a "real person" attribute was widely available.
Toxic comments can be just as common in non-anonymous forums and venues. Facebook produces a lot of toxicity, for example. Consider further: the Internet is much less anonymous than twenty years ago, and that hasn't stopped the overall level of toxicity, imo. The type of person who spontaneously makes toxic comments will still make them when forced to use their real name (they'll just suffer more for it). The provocateur, who calculatedly elicits toxicity, is always going to be here too, no matter how many Russians Facebook filters.
One factor is that once one person begins attacking another, both using real names, both people have a hard time backing down, especially if they know each other in real life or are semi-public figures. For a lot of people, admitting that they are wrong is a huge hurdle - and these tend to be the people who engage in toxicity in the first place.
I think it's the other way round. Anonymity (or rather: the right to present different personas to different audiences and to give up a burned persona when you see fit) makes the internet a bearable place. It also allows people to make up their minds without standing in their own way, and to disregard hurtful comments as trolling (which they often are). Facebook painfully shows that real-name policies do not necessarily lead to more civil discussions; instead they facilitate ad hominems and expose vulnerable groups to hate everywhere, as aspects of their identity can no longer be selectively hidden. This even extends to their real lives, with their names being publicly known. Now speech needs to be controlled, because people have lost control of their personas and need to fight everywhere, instead of only when they choose to.
Facebook comments aren't much better than YouTube's, especially non-English ones that Facebook doesn't bother moderating at all. With about 97% social media market share in my country and a real-name policy, it should be a breeze to use. It's absolutely not.
As for the prosecution, only if your country decides to give a fuck. There's a case in which a soldier called for a journalist to be raped and killed in a public Facebook post using his real name. He didn't face any consequences despite the story being picked up and screenshots of his public Facebook status shared all over the news.
Death threats, hate crimes, separatist movements, fake news... Real name policy stops absolutely nothing.
You are not the first person to think this, and some even put it into practice [0] hoping it would frontload forum moderation, but couldn't follow through with it [1].
As another commenter pointed out, facebook (and facebook-based commenting) has plenty of trolls in public comment sections. Anonymity is not what makes people jerks. More real reasons why internet forums enable trolls include:
- the asynchronicity of negative feedback: our primitive brains don't get to closely associate the negative feedback with the moment we expressed something in a socially unacceptable manner
- the lower bandwidth of negative feedback: IRL involves awkward silences, dirty looks, snickering, people turning up their nose at you, telling their children not to be like you within earshot, etc. All of this is suppressed over the internet, at most summarised by a downvote (which some trolls feed off of as positive feedback, knowing they ticked someone off, but don't face immediate social consequence). The trolls don't get conditioned to avoid doing things that hurt other people or are socially unacceptable.
I intentionally didn't list that, because that's something else that's often cited as a reason why people aren't jerks in real life compared to the internet, but I don't think that's true in most civilized cultures.
If I'm a jerk in public, outside of racist/prejudiced violence or a mentally unstable antagonist, I'm not at serious risk of being physically attacked unless I start breaking laws or pose a notable threat to someone else. About 90% of the less edgy side of trolling (just being annoying or verbally mean, mostly) won't draw physical violence in meatspace; you'll just be treated like a jerk, or risk being politely asked to shut up or leave by an authority figure. Yes, there's a risk of physical violence, but it's overplayed as a reason why people aren't trolls in real life as much as they are online. I think it's significantly superseded by the things I mentioned, and probably other things too, which stop people from being jerks long before physical violence needs to be brought to the scene.
you need only look as far back as the covington incident to see that 'real people' are hardly inhibited from vile and abusive behavior just as long as they believe themselves to be on the right side of the mob. this has been the case since time immemorial. in my admittedly anecdotal experience, fully anon communities are far more respectful & have a better standard of decorum than communities with high standards of verification, even and perhaps especially in dealing with the sorts of topics that tend to make more highly regulated forums collapse into flame wars. real name communities are prone to witch hunts, purity tests and all sorts of hysteria full of very real life consequences. anons just call you a retarded faggot.
Not sure why this was flag killed. The Covington incident is a good example. A controversial event occurred and everyone from every 'side' was inflammatory and no one maintained civil standards.
EDIT
I guess flagging indicates part of the problem. Society is so busy screaming at ourselves that the only thing reasonable people think they can do is check out of controversial topics.
EDIT
Would any downvoters care to engage about why they disagree?
HN has many great qualities, but some of the most important conversations we need to have are too emotional and inherently divisive to be connected to any kind of social currency system. there are times when identity itself is simply too much of a barrier.
the heavily moderated upvote/downvote system of prioritizing content is great when dealing with technical topics that aren't too emotional and don't challenge cultural established norms too heavily. however there are many aspects of life that are too messy, too heated & too shameful to be effectively dealt with in this format. these tend to turn into shadow topics that are better out of sight & out of mind, only discussed behind closed doors where the risk of vulnerability is limited and even then they still carry potential repercussions. in 'real name only' networks there are far too many risks to ever be a real person at all so we cultivate a brand that is the best & most superficial version of ourselves. we hide our pain, our doubts, our regrets. this extends to pseudonymous networks as well and anywhere that a cumulative identity is present. very few people want to be recognized as assholes and no one wants to be a pariah, a victim or a failure even if they really are. in a sense the only time you're really talking to anyone is when you've both stripped your personhood away and have nothing to lose.
Toxicity is a non-issue compared to the huge disadvantage of losing anonymity. In fact, you're feeding the trolls by giving them hints of ethnicity, gender, etc.
A real name policy also wouldn't actually address the actual problem. Toxic behavior is possible because on the internet you can write angry comments to a person on the other side of the globe and then forget about that person tomorrow. If that person does come back tomorrow then it doesn't matter if they are called "hahaurmom" or "Jonathan Willis".
The word you want is "impunity" not "anonymity". Because awful behaviour persists in actively de-anonymized contexts (eg: Facebook) when there's a power imbalance that puts society (or a local subset of it) on the side of the troll and against the victim.
I get where you're coming from with the academic curiosity of seeing how people think.
That said, it seems you're assuming comments are simply a one-way output of different people's thought patterns, rather than a two-way process that has an effect on others. It's obvious it has a two-way effect - the point of reading and listening is to learn, and we can learn in ways that make us worse off.
The idea that we should have a super low-friction way to be exposed to the internal thought process of anybody who is motivated to post it is an idea worth challenging, and we're starting to wake up to just how negative its repercussions are. That level of reach without any filtering steps (such as community standards, social feedback loops, etc.) is fuel for extremism, harassment, and triggering/exacerbating mental illness. Let's not give everyone PTSD if we don't have to, because that's effectively what your argument boils down to.
It's kind of like deinstitutionalization. When deinstitutionalization happened, lots of chronically mentally ill people started wandering the streets behaving erratically. You run into these people in cities--I've encountered one who loudly narrates her paranoid delusions about the people around her. Sometimes late at night you see people committing vandalism or doing drugs.
I think it's a better idea to give people the tools to control where they direct their attention.
Good analogy; the internet is entirely deinstitutionalized, but even worse, because:
a) people are more likely to have gumption to be negative in the absence of social cues of another person in their presence (global effect, whether or not user is mentally ill)
b) those motivated and possessing time to post are more likely to be suffering from issues (where those not mentally ill may just not engage because they have other things going on in life)
c) unlike the physical space where filters exist, you're often exposed to these people when navigating to entirely innocuous content (e.g. a kids video).
d) way way easier to find people with same or adjacent point of view on internet, reinforcing beliefs and potentially driving person further to extremes
Problem needs to be addressed from multiple fronts - as you say, more control for end users to tweak their experience (top down), alongside better platform-level filtering (bottom up), along with all the helpful designs therein like good defaults.
The open and free exchange of ideas isn't the same as deinstitutionalizing criminally violent people and self-harming drug users. Not in the slightest. The people "harmed" by Internet comments (libel aside) simply have no recourse in U.S. law for their problem with it.
As a staunch supporter of free speech, being exposed to your internal thought process had some negative repercussions for me. None of my fellow pro-free-speech HN users were able to filter you, nor even were they able to socially pressure you, and as result I read an extremist opinion. I think it's important that communities control anti-free-speech comments, otherwise they will descend into extremism.
You haven't refuted my argument that super low-friction reach is valuable enough to preserve.
I have pretty decent karma on HN, and there are much better filtering mechanisms in this community to remove bad actors and posts than there are on YouTube. That's the context of my post, which, while hidden, is nonetheless present in this community. There is no community for many YouTube comments; it's drive-by comments that get free reach.
What's so bad about being exposed to the public id that you'd be willing to lose even a sliver of free speech to prevent it? You're already being exposed to the public id every moment, awake or asleep, because part of it is you.
Indeed, it's important for people to not think so academically about viewpoints all the time. There are ideals, and then there is the real world. It's fine to have ideals about things like absolute free speech in all domains, but it must be balanced against the constraints, rational and irrational, of society at large which never constrains its behaviours to anything resembling the ideal.
It's a classic question of the world not being black and white. Either people accept this greyness and work within it to produce the most desirable result possible, or they fruitlessly wonder why nobody else is able to see what is apparently so patently visible to their eyes alone.
And it is exactly the shifting of the Overton window into that Machiavellian realpolitik approach that has had the empire invading and performing coups since at least WW2, causing millions of lives lost and reverberating effects for generations to come. The same goes for any number of major constitutional issues, such as the justice system, regulatory capture, etc.
We need more idealism, not less. Tempered with pragmatism is one thing, but the problem is then one is encouraged to temper that idealism a bit more, and a bit more, and one more time, until the original ideal is more a memory than anything.
For me this mostly stems from a lack of knowledge or respect for history.
I've downvoted this because it's clearly taking the piss, recontextualising a comment for gainsaying. No part of the content reinforces or advances the main thrust of your argument that I can see.
I didn't say that ideas need to be 'approved'. I challenged the idea that the ability to reach a worldwide audience should be free and available to anybody, regardless of content therein.
Filtering mechanisms exist in all forums, online and offline. Some suck, like certain sub-communities in youtube. The discussion should be about what filters we employ to best balance the positive value of conversation against the negative.
> I didn't say that ideas need to be 'approved'. I challenged the idea that the ability to reach a worldwide audience should be free and available to anybody, regardless of content therein.
If that "reach" should not be available to "anybody, regardless of content therein", then the logical and unavoidable conclusion there is some person or organization that has to decide what people and content are allowed access. You can hardly dodge the logical implications of your own statement.
Do you have a specific point? Or just picking a nit?
There are vastly different ways in which the flow of communication between people or groups of people can be impacted, and the differences between these ways can be very important. Because of this, sweeping generalizations don't seem very helpful.
Reach is about broadcast. There are plenty of non-authoritarian ways to reduce reach. The most obvious one is to make people pay for it (like they do on Facebook). Community moderation is another, and trust metrics is yet another.
All have tradeoffs, but - since you are nitpicking here - it isn't a logical and unavoidable conclusion that this has to be a decision about allowed access.
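One way to make this concrete: a trust metric can reduce reach on a sliding scale rather than making a binary allow/deny decision about access. The sketch below is purely illustrative; the `visibility` function and its thresholds are invented, not any platform's actual mechanism:

```python
# Hypothetical sketch: reducing reach via a trust score instead of
# a binary allowed/denied decision. Thresholds invented for illustration.

def visibility(trust_score: float) -> str:
    """Map a commenter's trust score (0.0-1.0) to how widely their
    comment is surfaced, rather than whether it may exist at all."""
    if trust_score >= 0.8:
        return "shown by default"
    if trust_score >= 0.4:
        return "collapsed behind a click"
    return "visible only on the commenter's own profile"

assert visibility(0.9) == "shown by default"
assert visibility(0.5) == "collapsed behind a click"
assert visibility(0.1) == "visible only on the commenter's own profile"
```

Nobody here is "deciding what content is allowed"; low-trust speech still exists, it just isn't broadcast as widely.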
While we're at it, if you'd like to protest the White House, let's make it so that you're allowed to do so but your protest must fit within a 9' by 9' space that must be a minimum of 500 feet from 1600 Pennsylvania Avenue. No carried signs may extend above 8' from the floor and your total volume must remain below 90dB. Make sure you optimize your protest space to have the loudest (but not too loud), boldest (but not too bold) voices, otherwise it will diminish your potential impact.
Lest you go to an argument about public vs private property, in our society, social media _is_ the public forum... Just as when these laws were written, public spaces were.
To use reductio ad absurdum on your argument for a minute:
Everyone in the world has an earpiece which they must wear at all times. Everyone has a microphone which automatically broadcasts everything they say to everyone's earpiece. You can't take your earpiece off, as that would reduce someone else's ability to speak freely to you.
There are downsides to allowing everyone a global platform.
No, not forced, but then I did say it was an absurd example. It's slightly different qualitatively from my hypothetical: there are plenty of examples of unwanted speech appearing in places whose "owners", while not forced to actually read it, are unable to stop it. For example: https://www.theguardian.com/technology/2019/feb/27/facebook-...
You still can set your page to private, if I am not mistaken. And you are also not forced to use facebook in the first place. I understand that it's unpleasant and annoying in public, but the assumption that someone else's free speech in a public place affects you personally whether you want it or not doesn't seem necessarily true to me.
While not so well worded, I do agree with the broad rationale that, in order to restrict free speech in public, some sort of decision has to be made about what is allowed and what is not. Like it is with everything from laws to social norms. The methods for decision making are broad, from authoritarian to economic, technical, and consensus-based. But they are all methods of decision making. Asking who, or better how, that decision is made is a sensible concern.
There are two people needed for a conversation in a public place. You have the ability to leave; you are not forced to listen to anyone. In a private context this is another topic altogether. You can kick someone out of your facebook group, unfriend him, or block him. If they then show up at your front door, we are talking harassment, not free speech. Which is why the loophole of anti-abortion protestors being allowed in the streets in front of clinics is such a horrible situation.
I know it's annoying to be confronted with stuff you don't like when your private life takes place in a lot of public places, but that is your choice. You don't have to have a public facebook profile, you don't have to have a public youtube channel. And facebook and youtube as a whole don't have to be a public place if they decided otherwise.
> I know it's annoying to be confronted with stuff you don't like when your private life takes place in a lot of public places, but that is your choice. You don't have to have a public facebook profile, you don't have to have a public youtube channel. And facebook and youtube as a whole don't have to be a public place if they decided otherwise.
Thank you for understanding and better expressing my core argument.
In the context of that, I'm fine with youtube's rule, but we were already off on a tangent in the meta discussion and that is clearly not the context of this particular discussion.
Go up two levels if you don't believe me.
We actually already have laws here in the US about signing up people under the age of 13 for online services and have since the 90s. They're very lax and not enforced.
A very large measure that YouTube could take is to hide all comments if you aren't signed in and strictly enforce the over 13 rule in the US (or whatever other rules elsewhere). But they don't do that.
My point is that if the existing laws were followed, children shouldn't ever be seeing those comments unless their parents are insane and allowed them to. Don't give your young children a YouTube account. YouTube can and should add blocking _ALL_ comments to the parental controls.
If all of these services weren't being offered for completely free, you could require a credit card purchase to make an account and solve this problem in an afternoon over a cup of tea. That's how simple it is. The problem is that the economics of these services allows and encourages children into these spaces without restriction.
Part of being an adult is figuring out how to deal with a world where some of your neighbors are horrible monsters. You don't just drive them into dark corners where you pretend they don't exist.
If pedophiles want to self-identify, that's great, because it gives us the option of getting them the treatment that they need AND of keeping our children far away from them.
It's not the risk of children reading them; it's the promotion of pedophilia that is the problem.
> You don't just drive them into dark corners where you pretend they don't exist.
You don't pretend they don't exist, but you don't let them normalize their conduct either. And that is absolutely what is happening with the previous approach.
You don't let them normalize their conduct. You loudly tell them how intolerable they are to society. You watch them at all times and make sure they don't harm anyone.
As someone who was on the receiving end of attention from pedophiles as a child and who has spoken to many of their victims, you do not want these people isolated in their own communities. You want to know exactly who they are.
Letting them speak, which at least where I live is still their right, is NOT tolerance.
This isn't about shitposting on YouTube videos, this is about pedos time stamping videos of kids in sexualized positions. Mostly in response to this video [1] (with a warning that it's pretty uncomfortable and gross).
That truly was uncomfortable to watch; I did not finish it. It's amazing how the human mind works that such a system came about, but the algorithm built into YouTube itself needs to be questioned.
So what solution is there? Preventing any content with children under a certain age? Blocking any comment that features a link? Can YouTube detect use of a VPN? Once you leave certain countries, I am not sure it can be policed.
My best idea is containerized video platforms: a parent can pay for the service or host it themselves, and you control who is on it. Where this breaks down, though, is that not everyone has friends, and the world would no longer be the audience. As well, video hosting is expensive; I'm not sure it's affordable in most countries.
Are direct-to-DVD types of content dead and buried in developed countries?
Lots of middle-class families in developing countries still buy (pirated) DVDs of Barney & Friends and similar family friendly shows for playback on their home theatre systems, for those moments when kids announce they are bored.
Deplatforming is an effective strategy to reduce the spread of malicious content for several reasons; in the case of YouTube, because it reduces the incentive to monetize and spread harmful content.
To say that you'd rather be exposed to pedophilia in YouTube comments than remove it is a little bit like saying you'd like to stay in contact with the Marburg virus.
Hi dang, I’m given to understand that you’re one of the moderators here. I’ve tried to contact you three times in three weeks through email, and this is my third time trying to get your attention in the comments, so far without a response. Is there a better way to contact you? I don’t feel good about inserting my comment in an unrelated post like this, but I’m unsure what else to do.
There are logical stopping points on the way from "all comments should be allowed" to "no comments should be allowed."
One such stopping point could be "no comments should be allowed on media with certain features that overwhelmingly tend to attract morally reprehensible comments." Another such stopping point could be "all comments should be allowed, except those which the community has decided have no merit."
In fact, (almost?) all worthwhile online communities are somewhere between the two extremes you seem to advocate. They don't always call it "deplatforming," which may be where you're getting hung up, but comment moderation is everywhere, and it does work.
Now, if you want to say that Youtube could handle this particular type of comment moderation in a better way, I think there are plenty of arguments to be made. But the choice isn't "all comments" or "no comments."
You may see logical stopping points, but I see a slippery slope. And at each such stopping point there will be someone with good arguments for going down just a little bit further. Interestingly, starting down that path always seems to begin with something relating to pedophilia. It's unfortunate that one always has to defend scoundrels in the first stages of what will otherwise naturally snowball into something nobody at the starting line ever intended.
I think HN is actually a poster child of strong moderation on social media, because it remains relevant by being one of the most regulated platforms. There is a reason this place hasn’t devolved into complete and utter uselessness, as well as why you rarely see any form of political debate, and it’s exactly because this place isn’t libertarian in its approach to free speech.
Free speech comes with responsibility, and part of that responsibility is to make sure people aren’t legally monetising videos of children for pedophiles.
I know it’s not always a popular opinion, but I think social media has been hiding from its responsibility for far too long, and I’m personally happy the EU is stepping in to regulate it.
Big tech companies shouldn’t be able to get away with things no one else could simply by being big. Especially not when it’s used to undermine the foundation of our society. If someone uses your platform for a network of pedos, or to commit genocide then you are also responsible for enabling it in my opinion.
HN is self selecting a higher quality audience. The content is interesting to an educated audience, the website contains very little fluff/idle entertainment and the ui is ugly to the average user.
While true, the moderation system also tends to discourage the snarky or ad hominem attacks that at least some of us might otherwise be tempted to make from time to time.
I'd say the one down side to HN is the moderation and I don't think social media has any responsibility to moderate comments.
Readers of HN comments are free to ignore any content they wish to ignore.
I find the trend of individuals demanding that organizations and governments protect them from content they may find distasteful, or content they don't agree with, to be very disturbing.
> I would rather be in touch with and exposed to this rather than try to pretend it doesn’t exist.
YT comments do more than just represent ideas that are out there. Sometimes they can serve as a means of disseminating harmful ideas.
Just yesterday I was looking at some YT videos about the history of the Balkan peoples and languages, where the speakers were internationally recognized historians and linguists (and who are from outside the region and don’t have a dog in any of the ethnic fights). However, the comment sections had become a place where people could post lengthy crackpot claims about their people’s history in direct contradiction to the authoritative scholar speaking in the video. Furthermore, it appeared that many people, when they came to the video, went straight to the comments section to read the other YT commenters, and thus absorb their crackpot ideas. They didn’t actually watch the video and learn any of the information in it.
I couldn’t help but feel that two decades ago, people similarly believed things that were baseless, but they had less capability to disseminate those ideas to other people.
> I couldn’t help but feel that two decades ago, people similarly believed things that were baseless, but they had less capability to disseminate those ideas to other people.
Which people? Twenty years ago, people (by which I mean schoolteachers) baselessly believed that Columbus proved that the world was round rather than flat and disseminated that idea, and it was hard to disseminate the counter-idea of "no, actually, that's not true, the Ancient Greeks already knew the world was round, Columbus just thought the world was round and also much smaller, and he was wrong".
If you can stop people from disseminating "harmful" ideas, you have the power to decide which ideas are "harmful". You might think it's "harmful" to disseminate the idea that the American Civil War was motivated primarily by the issue of slavery, for instance. If you're just in charge of buying school textbooks for the state of Texas, you won't buy textbooks that disseminate this idea, and that's harmful enough. But if you're in charge of moderating user-generated content on a huge set of platforms like Google and Facebook are, it starts becoming more and more of a problem.
Two kinds of ideas I would consider harmful to the point of using the force of law to ban them: antivax materials, and the kinds of comments broadcast over media leading up to and during the Rwandan genocide that led to that genocide taking place.
I could not in good conscience say "the kids who die of preventable illness, their lives are worth it so antivax people can spread misinformation without consequence."
I could not in good conscience say "the Tutsi lives lost were worth protecting the Hutu rights to free speech."
The "force of law" was largely on the side of the Rwandan genocide, so it's a slightly nonsensical example, although some of the broadcasts do satisfy the tight bound of speech that calls for "imminent lawless action" (which is the most recent criterion set by the US Supreme Court).
Aside from that, both of those examples are examples where the harm comes from a specific action (or inaction), rather than the speech itself. If refusing to vaccinate children was treated as criminal neglect and unvaccinated children were forcibly removed from their families, people could talk and talk all they want and it wouldn't matter.
Do you not see that there is a line drawn from speech to action, like in the situation surrounding the Rwandan genocide or in the case of antivaxxers leading to a resurgence in preventable disease?
Like, are you trying to frame this in the sense of, "I can swing my fist toward your face all I want, but so long as I don't actually make contact it's okay"?
I'm trying to understand how you're trying to thread the needle here and am coming up empty-handed.
Speech can counter speech, and action can counter action. If you don't vaccinate your kids, CPS can take them away from you. If you want to write paranoid screeds about how vaccines and fluoridated water are a communist plot to block the pineal gland, that's up to you, buddy. Just know that if you have kids, we're keeping an eye on you, and if their shot cards aren't up to date or you have a doctor buddy forging them, we're gonna take away your kids and throw you in prison for a long, long time--just to make an example of you. This is what happens to tax protesters (like Wesley Snipes) and this is why tax protester conspiracy theories don't really get anywhere despite the absolute lack of any legal authority to stop people from disseminating them.
I mostly agree with you about Rwanda because openly calling for people to commit genocide does cross a threshold that justifies forceful reaction. My point there is that you're appealing to government, which is exactly who in Rwanda was organizing the genocide in the first place. But if you're imagining a UN peacekeeping force being deployed to Rwanda being the censors instead--sure, among many other things they should probably shut down the radio stations. I'm fine with that.
> Do you not see that there is a line drawn from speech to action
The problem is that this argument makes it too easy to misuse the instrument of censorship. You're calling for censorship that's both unnecessary and insufficient to solve the anti-vax problem, using a standard that could be just as easily abused to call for censorship that even you would disagree with.
That's the problem with videos. It's a lot quicker and easier to read a short text and respond to it than to watch a 10+ minute video and then respond. There's been a very large movement towards making videos instead of writing articles, even when a video isn't really necessary or helpful.
Hiding the insanity of YouTube comments won't change anything it will just hide it ever so slightly. Like strategies to _hide_ homelessness in cities.
But I strongly disagree when it comes to pedophiles sharing suggestive timestamps in comments and making vile sexual remarks targeted at children. That's where I nope out. That shit shouldn't be allowed in the comments section - even reddit doesn't allow this kind of commentary.
The problem has more to do with validation and echo chamber.
It wouldn't be a huge issue if these kind of comments would stay constrained in the digital realm.
But as people see that the tone and hardline position is shared by a lot of other people, they feel empowered and validated, so that that behavior starts rippling in the "real" world.
>but the reality is a lot of people think in patterns that drive these comments
and that's where you might be misled by machine-learning-optimized comment sorting, recommendations that drive more people who'll agree with any given video's point, and vocal minorities in general.
Actually, that's a very good point. I think along similar lines. That it's not the comments or platforms that are terrible, they simply shine a light on our own human nature. We're dark, ugly creatures.
The only reason I still have Yahoo as my homepage is the comments on the top stories. While most are low quality, there are some very clever comments, and I feel I get a better sense of what everyday people are thinking.
They're the solution to a hard engineering problem: how to generate stupid statements on a given topic? Not grammatically incoherent statements, but the products of true stupidity. Pretty hard to automate that kind of thing.
But with YouTube, it's easy. Just search for videos that match the topic, and pull random comments from them. A large fraction are guaranteed to be examples of genuine stupidity.
People's beliefs and behaviors are shaped by what they're exposed to and what they see to be publicly acceptable. Of course you can't completely stamp these things out, but you can absolutely affect how widespread and popular they are.
>That’s pretty obviously false. I think you mean “anarchic”
Democratic. The people ("demos") control the dialogue and have equal voice.
As in this dictionary definition:
(3): relating to, appealing to, or available to the broad masses of the people - democratic art - democratic education
(4): favoring social equality - not snobbish
I'm also referring to the original meaning of the word and practice in ancient Athens when it came to deciding, where every participating citizen could be heard [1].
What you describe (suppressing the minority, etc.) is about lawmaking and decisions, which is not part of the YouTube dialogue. Users don't decide what's to be done. Nobody is suppressed in YouTube comments, minority or not -- they just get downvotes and negative counter-comments, but they can still write and have their comments shown.
[1] The innovation was that it was not a king, tyrant, or group of rulers, but the whole citizen community themselves (even if it excluded slaves and women, which was the baseline at the time -- not to mention for another 2.5 millennia) openly debating and voting.
> YouTube content creators are now in the unenviable position where they need to actively moderate the comments section
Why is that so wrong?
I understand that it's often hard work. I understand that they might want to be making videos, or product-placing, or whatever it is any given creator wants to be doing, but they're the people at the hub of their own communities.
They should have some responsibility over the conduct displayed there. And if they don't want that, they can host it themselves somewhere else, or disable comments.
Honestly, I feel YouTube would be a lot nicer place if the standards required weren't just the channel owner's. Plenty of niches are way too happy to foster the next /b/.
Sure, but getting people to keep their bedrooms clean serves as an additional metric for YouTube. You can see who's putting in the time and effort to keep things clean, which gives another signal for rating the content they're uploading.
If you upload videos of kids and you're sitting back and letting paedos write this weird shit, you probably shouldn't be making, let alone uploading videos of children.
Just as if your community has an above-average number of people calling for the lynching of Muslims. Or quacks suggesting you can cure cancer with alkali diet pills. You let this stuff stick on your videos, it sticks to you and that attracts more.
Pushing this onto the uploaders forces them to think about what they're doing, and whether or not they really want to build that community.
I dunno... I see what you are getting at but it also lets Google off the hook here.
In my mind it's the Broken Window Theory [0].
It's Google's house. Google's windows. I think they need to take more responsibility for cleaning up.
It's not the creators' fault if pedos show up and comment on their stuff at a scale they can't possibly control.
There's another side to this coin!
There is a part of me that asks "Why are parents uploading pictures and videos of their children to the internet for the public to ogle over?".
The cynical part of me says "What did they expect to happen?"
It all comes back to money though... Google have had this problem for a while but their bottom line wasn't threatened until recently... let's face it: They don't really give a shit until money's involved.
And again, nobody is forcing you to enable comments. If you can't handle the workload, the arguments, the very worst of humanity, etc, just turn them off.
Video that include minors and are at risk of predatory comments may receive limited or no ads (yellow icon). If you think we made a mistake please appeal [link]. We will continue to refine our approach in the coming weeks and months.
Is this being mistaken for deletion? English is not my first language, but I have a hard time reading this as "we may delete your channel".
Removing monetization is as good as deletion for the content creators. Furthermore, their "appeals" process tends to be a black hole, as is basically everything at Google that would require human intervention.
The majority of content that is watched on Youtube is monetized and people do depend on it to some degree. If they didn't, then they wouldn't be on Youtube, because there are plenty of other services that offer you the ability to upload videos for free.
> because there are plenty of other services that offer you the ability to upload videos for free.
Even without monetization, the community/discoverability benefits of YT have to be appealing. As much distaste as I have for Google, the ability to find a plurality (at least) of the content related to my area of interest in one place is quite appealing.
So it sounds as if YouTube content creators are now in the unenviable position where they need to actively moderate the comments section for videos featuring children, and if they don't do so to YouTube's satisfaction, they could have their entire channel nuked.
Is that a problem? If I run a large forum and have moderators running subforums, they either moderate comments in their subforum to my satisfaction or I "nuke" their subforum. Given that the channels can just turn comments off, I can't see a terrible burden here.
Popular forum software (Discourse, phpBB) has good mechanisms for comment moderation. For example, a comment from every new user can require approval before it becomes public. YouTube doesn't seem to have such a mechanism, so the moderator would need to continuously monitor the comment section.
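The hold-new-users-for-approval mechanism described above can be sketched like this (a toy illustration in Python; the class and method names are my own invention, not the Discourse or phpBB APIs):

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    approved: bool = False

class ModerationQueue:
    """Hold comments from first-time commenters until a moderator approves them."""

    def __init__(self):
        self.known_users = set()  # authors with at least one approved comment
        self.pending = []         # comments awaiting moderator review
        self.public = []          # comments visible to everyone

    def submit(self, comment: Comment):
        # Trusted (previously approved) users post straight to public.
        if comment.author in self.known_users:
            comment.approved = True
            self.public.append(comment)
        else:
            self.pending.append(comment)

    def approve(self, comment: Comment):
        # A moderator action: publish the comment and trust the author
        # so their future comments skip the queue.
        comment.approved = True
        self.pending.remove(comment)
        self.public.append(comment)
        self.known_users.add(comment.author)
```

The point of the design is that moderation effort is spent once per user rather than once per comment, which is what makes it workable for a one-person channel.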
Yeah but also your moderators are actually just users who don't know anything about moderation, and you didn't tell them at any point that they have to handle moderation duties.
Does YouTube even provide tools to let people do effective moderation nowadays?
Just telling content creators that it's essentially their problem now can't be the whole solution. This reads like the typical Google response of leaving the people that fill their platform with quality without any useful feedback.
Yeah, and most of the channels I follow have 1-5 people producing them while sitting on millions of subscribers and tens of thousands of comments on most videos.
It will just kill small but successful channels because of the sheer amount of moderation to do.
laowhy86 had all his video comments removed by youtube because his toddler daughter was in a couple of his videos, despite not being the focus of or even in 99% of his videos, and despite the fact that he does curate comments. He received no notice of this from youtube. He found out when one of his twitter followers pointed it out.
What's worse for YouTube? That news articles continue to be written about how they're ignoring a pedophilia problem, or some channels getting caught up in the algorithm? It sucks to be negatively affected as a content creator, but YouTube is doing what everyone has been pressuring them to do.
It's simply absurd for YouTube to manually moderate comments at the scale they currently operate. If you force them to do that, it won't be profitable, and you'll end up with YouTube blowing channels away because they can't afford to host them anyway.
You are thinking at human scales, and that is understandable, but Google doesn't think at human scales and it's only "absurd" if you think that Google has the inalienable right to the smallest possible cost of goods sold, even if that means offloading their externalities onto everyone else.
It is probably obvious that I do not. You shouldn't, either.
At Google's scale, trained-but-unskilled workers are not expensive. They are not cheap, but they are not expensive. And Google makes a lot of money. This is a common throughline from large societally-threatening, socialize-our-externalities-but-never-our-profits companies from Facebook to Google: "doing something correctly, or even trying to, would just cost too much money, so we should continue our societal-termite ways!" Until these unwatched monsters--and that is, I stress, the default state of the corporation, it is only the threat of the society that grants them their charter taking it away that adds even a speck of decency to them--prove, prove, that they somehow just can't survive by reducing incomprehensible net revenues to merely gigantic, then I will continue to operate on the understanding that they don't want to. Which I tend to think is a much, much more realistic thing.
I don't care. They fix their product or YouTube delenda est. Either is preferable to the current situation.
They make money by not spending it when they can get the same outcome for free[1]. Also, Search and Adwords make money, YouTube is getting by[2] (relatively). Why should other divisions subsidize a loss-making YouTube? Some channels don't make enough money relative to number of comments to be financially viable (no matter how cheap the moderators are) - Google has simply outsourced this decision to individual channel owners.
1. Google users do a lot of things for free already, e.g. Map POIs
I understand that. I also understand that stuff like YouTube is effectively becoming the public square of the twenty-first century and if a company wants to own that, they can deal with not making all the money off of it that they could possibly, theoretically, make.
People matter more than corporations. Society matters more than corporations. I'm comfortable asserting that it would be better for Google to close YouTube down than to let an organ of growing central importance to society at large become what it's obviously starting to become; something less damaging than that neglectful caretakership can arise in its wake.
Are you seriously suggesting that disabling comments on a certain type of video content is more damaging to society than losing a global engine of content creation and community?
YouTube benefits society immensely by sustaining a very expensive 21st century public square. If we as a society want to have that - and I at least very much do - we can deal with not making comments on all the videos we could theoretically comment upon.
I am seriously suggesting that this is not something that can be algorithmically determined. I'm quite OK with all manner of content not having comments enabled. I'm not okay with unthinkingly stupid false positives all over the place harming creatives' (actual creatives) ability to feed themselves, and those false positives are overwhelmingly caused by bad heuristics and objectively dumb algorithmic decision-making.
Feeding humans into The Machine, having The Machine make context-free, alarmingly inaccurate, and functionally beyond-appeal decisions--because the appeal process doesn't scale either, we are so frequently told, when it isn't just "drop the appeal on the floor"--is bad. If Google has no other answer than Feed The Machine, then The Machine should be considered inimical to humans and should be dismantled.
But, of course, The Machine is not necessary; that's a convenient fiction to paint the problem as a dilemma between "no YouTube" and "some unaccountable algorithm runs YouTube and decides what you can see, free to lead kids from Let's Plays to Nazi agitprop and pedophiles to their spank bait." It's just that the Machine is cheaper, you know? And that's really, and literally, all.
I can't take your concern for creatives' ability to feed themselves seriously when you turn around and advocate for fully destroying the platform that is feeding them. Many full-time content creators on YouTube aren't big enough to make it on a smaller platform or on their own.
I also don't think the "Machine" is necessary, but I do think it's better than having no global engine for content creation and community at all. If you think there's a viable third option, I'm interested in hearing how it would work and the cost of achieving it. But of course you're free to continue making dystopian metaphors and pointing at Nazis instead.
The way it works is to have these companies hire, and pay for, and care for (see Facebook terminating counseling services, etc. for leaving content-moderation employees) employees to make the decisions to provide a platform that's safe and sane.
That's it. That's literally it. That's just...it.
You are ultimately correct, in that it will be of relatively higher cost. You are ultimately correct, in the sense that "anything" costs more than "nothing". And I genuinely don't care. It must happen. And a large part of why I don't care is that I am not advocating for its destruction; what I am saying is that I am perfectly okay with going to the mat with Google and other ostensibly supra-national corporations, because they'll back down. They will back down because they will still do just fine. Google is not going to shutter YouTube, Twitter is not going to fold (well, not because of this), Facebook is not going to hang a CLOSED sign on the door because governments say "no, you have to actually have humans make decisions that impact these other humans and process them sanely, instead of having your robots blap stuff to death because it found a peak in their hill-climbing." They will comply, because they will still make plenty of money.
And if they don't? If I'm wrong? Somebody else will do it. There's plenty of gold in that hill, even if you aren't allowed to get at it for completely free.
(It is also worth noting that...uh...on YouTube, those Nazis exist. They're right there. I've watched them radicalize teenage boys who started on Let's Plays. The algorithm happily feeds those boys to them. That's part of this problem, too, and you can't just handwave it away.)
You speak with such confidence that YouTube is printing enough money to sustain such a massive additional cost but that's unlikely. Don't just take my word for it, the WSJ has reported on this matter [1] because Google doesn't release financial details for YouTube on its own.
You've done nothing but rattle off assertions about how YouTube is just so profitable and won't shut down, how there's so much money in ad-supported video hosting, how somebody else can do it. These are fantastic claims, by which I mean they are rooted in fantasy.
I have no trouble believing that this represents an existential threat to YouTube. If Google massively shrinks or shuts down YouTube as a free and global content platform, it's not just their loss, it's ours as well.
> The online-video unit posted revenue of about $4 billion in 2014, up from $3 billion a year earlier, according to two people familiar with its financials, as advertiser-friendly moves enticed some big brands to spend more. But while YouTube accounted for about 6% of Google’s overall sales last year, it didn’t contribute to earnings. After paying for content, and the equipment to deliver speedy videos, YouTube’s bottom line is “roughly break-even,” according to a person with knowledge of the figure.
I didn't say YouTube was "just so profitable." Google is so very profitable and Google won't shut YouTube down because Google derives incredible mindshare value and analytics insight from owning YouTube. YouTube and a similarly not-super-profitable-but-very-useful product--Gmail--get people into the Google ecosystem and facilitate greater understanding and deeper analytics into their userbase in ways that make the things that do make money make more money. To reduce it to a P&L for that single division is bonkers.
And from a brand perspective? To younger people, YouTube is the part of Google that they like. It's not going away if it becomes marginally more expensive to run (and we are talking marginally. Facebook pays $28,800 a head for content moderation, and that's American employees), because all doing so does is open the door for a competitor--and while 2009-me thinks this is crazy to say, I find myself eyeing Microsoft in 2019, though Facebook is also of course a likely contestant--to come take all those eyeballs and all that analytics data.
I promise: it's okay to dare even a megacorporation to blink. We live in a society, they operate under our rules.
Google doesn't need YouTube to exist in its current form to have a large viewership. It can just as easily turn YouTube into a controlled TV-like platform where content is primarily created by incumbent professionals with little room for anything else. They'll still get incredible viewership. That's where the mainstream lives after all. Smaller content creators aren't particularly profitable or popular, so why bother if all they do is invite the press and people like you to slap them around for having them. I'd say it's already going in that direction.
And from a brand perspective? The linked article in this thread is a global, mainstream news publication burning Google & YouTube's brand by associating them with pedophiles.
Marginally more expensive? Try hundreds of millions a year to employ the thousands of workers to properly vet the 80k+ hours of video content uploaded every single day, with countless more comments. Then get slapped around by the press anyway because those workers aren't paid enough, and they aren't given quite enough mental care because they're still a bit screwed up after watching garbage 8 hours a day, and by the way they shouldn't be watching garbage 8 hours a day because that's awful for a human being to do that, they should do it at a nice 8 hours/week but they should still get paid a lot more because they're doing god's work and market rate wages aren't enough for them.
So what's your plan? Google realizes that hey, they don't need to operate a free global platform for content creators of all sizes at a P&L loss, they can do what everyone else does and make a lot of money, get a lot of mainstream viewership, avoid PR blows like this one...then you get to proclaim victory because youtube.com still exists?
Oh right, if Google stops operating a free global platform for content creators everywhere at a loss, someone else will do it. Like Facebook, which suffers from the exact same issues, is working towards the same AI approach as YouTube, and got slapped by the press after hiring human moderators anyway? Like Amazon, which acquired Twitch and almost immediately applied an AI-based automatic content moderator even more inaccurate and punishing than YouTube's? Like Microsoft, which...uhh what? I'll let you come up with reasons why Microsoft is somehow an appropriate competitor.
I can only describe your comments as wishful thinking. We live in a capitalist democracy, we operate under its rules. You're free to suggest that we as a society choose a different system, but good luck with that. Until that changes, I promise: megacorporations don't blink, they just look away. I think it would be a tremendous loss if one of the most competent members of our society looked away from the project of a free, global video platform for content creators of all sizes, stripes, and beliefs.
I would argue against banning those things, even if it's a thing I don't like, like I am now. I argue against the cultural idea that if I don't like it, YouTube needs to get rid of it. If you think something is so bad that it shouldn't be on YouTube, you should go through society's democratic process and get it enshrined into law.
On any popular video or channel the comments have always been a cesspool of hate and evil. I am glad that YouTube is finally trying to do _something_ about it, but it also seems really shitty that it took pedophiles to instigate an advertiser boycott and get them to act.
Pedophiles are just the proverbial "Straw that broke the camel's back."
Advertisers have always been agitating, often behind the scenes, that their ads not show up on certain channels. It's just that now that there are all these code words, out and out brazenness, and what not that undesirable people use to have their conversations on otherwise innocuous channels, the advertisers are really starting to put their collective foot down.
This really is an "existential" level threat for YT. I understand the urgency. I'm just wondering if there is a better way to accomplish the same goal? Is there a way to, more directly, target undesirables?
Or maybe advertisers want better ad deals? They're also being agitated by traditional media, which sees YouTube as a threat. Back during the first adpocalypse it seemed as though media organizations were pressuring companies to pull their ads or risk being written about unfavorably.
Gab just released a tool that permits anybody (with a browser plugin) to comment on any web page, so this will probably not only hurt legitimate users, it also won't do what it's intended to do. Just like every other moral panic in history.
This isn't a moral panic, I don't know why people push that line anytime racists or pedophiles are even mildly inconvenienced.
Advertisers left Youtube because people left creepy and sexually suggestive comments on some videos, and Youtube responded to preserve ad revenue. No one was clutching their pearls over this.
This is pretty much raw, uncensored, money grubbing, greed.
Racists and pedophiles are bad for the bottom line because advertisers refuse to pay to have their ads next to such content. So unless we're willing to start paying to use YT so that YT can get off the ad supported model, then we'd better get used to moves like this one.
> Seems like YouTube will need to come up with some sort of "trusted subscriber" designation, and allow content creators to permit comments only from those subscribers, so that any random bad actor can't swoop in and destroy a channel.
Alternately, a "trusted moderation service" designation, where you—if you think it's worth it—can pay a third-party to do the moderation that YouTube doesn't want to pay for, and YouTube can verify such third-parties as being "thorough enough" that it won't automatically nuke a channel upon report if such a verified moderator service is doing the moderating (just like they wouldn't nuke a channel upon report if they were doing the moderating.)
I only have a peripheral knowledge of Twitch, but don’t they have a similar solution to what you’re outlining here? There is a “public” channel, on major streams this is full of spam and nonsense, then a separate channel for subscribers where they can talk amongst each other.
Perhaps a subscriber-only gate to comments would be a good thing. And only subscribers could read the comments, too. The channel owner is tasked with moderating the conversation or recruiting mods to do it, like any chat room or message board. Then, whatever happens in the comments becomes the channel owner's responsibility.
So did they ban comments, as the headline indicates, or just disable them by default? Disabled-by-default seems much more reasonable to me: let creators decide if they want to moderate comments, but keep them off by default so family videos don't have to deal with that.
This is so bizarre: Google/YouTube is still trying to claim that they are NOT a media company, just a platform (no media, not responsible, of course), but at the same time they are putting the responsibility of moderating user comments ONTO THE VIDEO MAKERS themselves.
This is completely hypocritical and idiotic. Youtube comments have been famous for a decade for being among the worst of the worst content on the web, and now they're going to try to just foist that cancer on channel owners and wash their hands of it? Are you kidding? Is this a joke?
Is nobody in charge at Google anymore? Are they just going to keep endlessly reacting to whatever media story got the most attention last week instead of actually trying to build something new?
The fact that Google is ceding the fight against toxic comments on YouTube is actually pretty shocking.
A company that knows more about you than you can possibly imagine, and more about automated sentiment analysis than anyone in the world, couldn't algorithmically determine who should be allowed to post on certain subsets of videos, or devise a system it thought was worth deploying to ensure comments meet a basic level of decency.
- shake the money stick at them and they will dance (thanks, of all people, Nestle!)
- they've admitted a serious problem exists that they were unwilling to deal with until external pressure forced them to (i.e. they can't be trusted to self regulate)
- they've all but admitted they can't fix this in reasonable time, if at all
This is the first time I can think of where there has been a seriously material chink in Google's... cultural armour? Turns out the advertisers are in control, and turns out Google doesn't have a technical cure-all. It'll be interesting to see how they attempt to reintroduce comments in the long term; no doubt more ML. Of course, this says nothing about a recommendation system that continues to blindly cluster videos of lithe toddlers together. I wonder if any advertisers are making a stink about that.
Favourite summary: kids are safer on YouTube today because of Disney and Nestle, not because of Google. Let that sink in. The subtext here of course is that Nestle and Disney are some of the most evil companies around, and yet they're the ones that were forced to strong-arm Google. The irony of this defies words, and the reality of the only mechanism at play here to protect children is almost as disturbing - these companies don't "care about children", they were only forced into action to maintain their reputation.
(gentle reminder: HN punishes highly commented 'controversial' stories. If you care about this issue being more widely understood, try to limit your commenting)
>- they've admitted a serious problem exists that they were unwilling to deal with until external pressure forced them to (i.e. they can't be trusted to self regulate)
No, they didn't. The problem that YouTube is dealing with is advertisers leaving, not the actual comments. Dealing with the problem of advertisers leaving is different from dealing with those comments. It's unclear whether dealing with those comments is even desirable, because YouTube would, in essence, become the censor of what is and is not okay to post in the comments sections, even when the comments don't break any codifiable rules or laws. Would you find it acceptable for your freedom of expression to be filtered by rules as unclear as that?
Laowhy86 got his comments disabled because his daughter appeared in some of his videos. If this standard were to be pushed across the board then it would effectively ban comments on any videos that have underage people appear in them. Is this desirable for our society?
>Favourite summary: kids are safer on YouTube today because of Disney and Nestle, not because of Google. Let that sink in.
But this is not true.
Firstly, kids on YouTube were NOT impacted by this. When a creep watches a video of a person then that person is not negatively impacted by it.
Secondly, all you've done is hide the problem. The videos of the kids still exist, the time stamps can still be created. All the creeps need to do is share that information somewhere else. That's it. And the only way to fight this one is to simply bar kids from appearing in any media content. Good luck posting a video of you walking around town or of an event where somebody underage might be.
This is similar to the discussion about whether you are allowed to let your 9-year-old go outside unsupervised. It's a question of how much freedom we allow people and kids in our society. It seems that public opinion is on the side of less freedom and more protection.
> Firstly, kids on YouTube were NOT impacted by this
Unfortunately this is inaccurate. There are billions of daily adolescent YouTube users, some of them entering puberty, who were exposed and continue to be exposed to the recommendation algorithm that created this mess in the first place, for periods running into hours every evening.
At such an early stage in development, it is absolutely the company's duty of care to ensure that a 5 year old is not being sent up a gradient of a recommendation system that is encouraging them (with the help of the comments just removed) to view people of their own age in a sexual manner. Not only were children impacted by this, but the mechanism that enables it remains active to this day.
I agree with the idea of allowing children outside, but as per my reply to your previous comment, not if that means spending all day in the back yard of the village creepy old man. Balance is required in every situation, and denial of the kind your comment is riddled with accomplishes nothing.
>Unfortunately this is inaccurate. There are billions of daily adolescent YouTube users, some of whom entering puberty that were exposed and continue to be exposed to the recommendation algorithm that created this mess in the first place, and for periods running into hours every evening.
You are talking about a different issue.
>At such an early stage in development, it is absolutely the company's duty of care to ensure that a 5 year old is not being sent up a gradient of a recommendation system that is encouraging them (with the help of the comments just removed) to view people of their own age in a sexual manner.
Why would it be the company's duty of care and not the parent's? The parent should be the one that controls what the child consumes, not some nameless company or the government.
>I agree with the idea of allowing children outside, but as per my reply to your previous comment, not if that means spending all day in the back yard of the village creepy old man.
Again, this should be done by the parent, not by a faceless corporation or the government. It is the parent's job to deal with this.
> Why would it be the company's duty of care and not the parent's?
It is the company's duty of care for the same reason that
- it is the city's duty of care if the child goes outside and falls down an unmaintained manhole
- it is the school's duty of care if a fire extinguisher malfunctions and kills the child
- it is the driver's duty of care if the child crosses the road and gets struck on a green crossing
In all these cases, the entities assume a certain privilege to operate due to the trust placed in them that enables the freedom for the child to go outside whatsoever. If that duty does not exist, then there is no trust the child can safely leave home unattended. In effect what you're arguing for is total control from the parent -- a much worse outcome for a child than self-moderation on behalf of the trusted entities they would otherwise have had the freedom to interact with.
>that enables the freedom for the child to go outside whatsoever
But this is not comparable to "going outside" at all. This is "seeking out and going onto private property and seeing stuff you shouldn't". The logic that a company is comparable to public spaces doesn't hold water. YouTube is not a public space any more than Pornhub or 4chan is. If 4chan wanted to cater to 10 year olds (yes yes I know) then the way to do so would clearly not be to remove content not suitable to kids but to create a separate place for those kids.
> The parent should be the one that controls what the child consumes, not some nameless company or the government.
The ways in which people access community-created content (or commentary) have changed substantially enough that I don't believe this is true any longer. Content providers now bear a portion of this responsibility in addition to child-rearers. That is ethically much more sticky, I'll admit, but I also think that moral imperative does not automatically follow the path of greatest simplicity.
> Youtube would, in essence, become the censor of what is and is not okay to post in the comments sections
They can, and that's reason enough. It's their lawn; they can do whatever they want. If you invite me into your house and I start yelling like crazy (because I have the right to), and you don't like it, you will ask me to get out (gently or not). And you will be right to. Who am I to complain about what I can or cannot do in your house?
> freedom of expression
Why do people always misinterpret what freedom of expression is? It does not apply to the comments section of YouTube. People are not entitled to be heard or to express whatever they want in YouTube comments.
Wait, so you're saying that if every company in the US wanted to ban videos of gay people, they could, because they don't have to allow them? We both know there would be a lawsuit so fast your head would spin. Age is a protected class too. Children just don't happen to have their own lawyers as often, or the knowledge to use them in cases like this.
I was referring to publishing comments as the equivalent of free speech or freedom of expression, not to denying participation and/or discrimination. Why take the reasoning that far?
It's not Youtube's responsibility to protect anything, they just need to keep things barely legal. They are doing this because of the dollars and public image. Youtube owes us nothing, let alone guaranteeing our freedom of expression or protecting classes. If they say so, it's just PR.
Can a child consent to fame? Should a parent be permitted to make that choice for their child?
I don't believe so. We don't let parents send their children down into coal mines, and neither should we allow parents to make their children famous on youtube. Parental rights are secondary to the rights of the child.
You only think that's a good idea because you don't believe in flat earth. They could equally put a link to flat earth sites under NASA/SpaceX videos. Do you really want to decree that whatever they link is "true" just because you agree with them in this particular instance?
I try not to consider things in terms of absolutes, so to answer your direct question: yes, I am pretty comfortable believing the vast majority of such links would be substantially factual.
Google already does this, I don't see the issue with it hypothetically happening on YouTube. If you search for a well-known person, place or thing, you'll be presented with information aggregated from trustworthy sources. Can they be wrong? Sure, and it happens. It's not sinister.
It sounds like you're envisioning someone manually tagging things. That would be odd and suboptimal. Instead, just tag helpful Wikipedia pages about the broad scientific consensus of things on relevant videos.
When I am objectively wrong, I want my mind to be changed. I expect this to be the case for around 10% of my ‘knowledge’, even on topics I care to educate myself about, and much worse on other issues.
Putting a link under the videos isn’t likely to achieve that, but that’s a separate issue.
The only problem I have with Google doing this, is that I trust corporations and government about the same — i.e. that both will lie and dissemble as much as they are allowed to get away with for their own or their leader’s benefit, without regard for my interests.
You're missing the point. Most things in life aren't settled as easily as the flat-Earth debate. Google could also do this on something that's far more controversial (e.g. political) and justify it in the same manner. Imagine if Google denied climate change and every video about climate change got a link to some website saying it's not real. That wouldn't be acceptable, would it? But being okay with Google doing this for flat-Earth also makes it more acceptable for Google to do it for climate change.
The recent past (e.g. in US politics) has shown that our society needs mechanisms to incentivize consensus. Google promoting fake information would get appropriate push-back and in the end help form consensus. Yes, even on political topics I'd be fine with that, as long as they try to stay fact-based. There is a lot of room for honest actors to discuss ideas. But I'm fine with society (including companies like Google) pushing back once people (including politicians) go crazy with stupid positions, likely in an attempt to widen the Overton window and redefine the "reasonable center".
> But being okay with Google doing this for flat-earth also makes it more acceptable for Google to do it for climate change.
Sounds great, I'm all for it.
Sure the truth can be complicated, but I fail to see how implementing software that auto-links to a relevant article on Wikipedia causes Google to be the arbiter of truth.
No, but I reject the implied premise. I don't just think Google should endorse the things I agree with, and I think characterizing this that way is disingenuous. Rather Google should endorse the truth which substantially recognized experts have consensus on.
There aren't sources backing up that climate change isn't real. That's a hill I'll happily die on. There are plenty of alternative sources which make that claim, but Google does not aggregate facts from them for presentation to its users. Because they don't have evidence.
Like I've said elsewhere in this thread, this isn't a revolutionary idea. Google does it on its search engine and the sky hasn't fallen. I remain unconvinced it would fail if they pushed it out to YouTube.
That sounds like a leading question for rhetorical purposes - is this something Google actually did, or are we speaking purely of hypotheticals here?
Note the solution I posed is something which Google already does on its search engine without calamity. Therefore I don't see a reason why it would fail for YouTube. In contrast the example you're giving seems pretty hard to just link to an authoritative source.
Put another way, I'm not advocating for Google to arbitrate the truth on a case by case basis. I'm advocating for Google to identify ahead of time which sources are well-researched and trustworthy, then outsource its fact-linking system to those sources.
If Google were to supply facts on a case by case basis that would be suspect. But that's not how the company operates, so I'm deeply skeptical they would become some kind of arbiter of truth.
You can’t abstract away from the content to the pure form of an action, and then posit that there’s no observable difference between two (very much different) examples.
Ex: "If banks stop allowing strangers to withdraw money from my account, how will I ever get my money?"
And, no, questions of fact aren’t different. The earth is round, not flat. One tree does not a forest make, but a thousand does. Even if we can’t agree on the specific cutoff (is 15 trees a forest? 50? 500?), that does not prevent us from accurately describing the extremes.
Do you realise how silly this line of argument is? Why exactly should we (or Google for that matter) not recognise that there are differences in some actions? That some things are good, and some things are bad? In what world do you conceive it to be the same to advertise flat earth theories as legitimate unless you are yourself a flat earther?
This idea that because nobody has a monopoly on the truth we can't make decisions is utterly futile and silly; I have no idea where it originates, except perhaps in the darkest places where reason has utterly collapsed.
Of course, every authority since the beginning of civilization has carried this mantle in justification for censorship of all sorts. Having Google decide what information can and can't be shared on their platform (read: utility) is a dangerous state of affairs. What new social movement, or recognition of a current injustice, will be stifled due to a status-quo bias that is codified by such top-down control over the media? We can't know from where we currently stand, which is what makes such control dangerous.
The fact that a reason is used (and has been used) to justify censorship unjustly does not mean that the reason itself is invalid, or that there is no such thing as good censorship; most people agree that some censorship (of threats or child pornography etc.) can be a great positive force.
There is no reason behind the idea that because we can't totally differentiate between good and bad where the line is blurry then we can't do anything at all. I'd also question whether free speech is intrinsically valuable, more than other actions are. I have seen no convincing reason to think so.
We can approve of good actions (like putting a Wikipedia link about earth science under a flat earther's video) and disapprove of bad ones (like putting flat earth propaganda underneath a scientific video). I see no issue here.
The debate isn't whether we can do any sort of "good" censoring, but whether we should do any censoring at all. (Just to be clear, I'm narrowing the scope under discussion to ideas. Of course things like child pornography should be censored due to the direct harm.) I reject the idea that society should welcome some authority having control over ideas such that ones deemed "bad" enough by a large enough majority should be actively suppressed. The "good" we presume can be done by shielding people from bad ideas does not outweigh the fundamental right of expression and communication.
Your claim is that some form of censorship may be permissible if the harm is direct, but I think this carries with it a certain ideological slant: what counts as "direct" or "indirect" harm has vastly different consequences. For instance, prevention of direct harm may be sufficient to protect children, but it probably isn't enough to prevent the proliferation of racist or sexist ideas, which have historically led to widespread oppression on those fronts. What is your threshold for harm?
Here we see the vacuity of the harm principle: one can claim anything is (or isn't) harmful in order to attach their favorite idea to it. As an example, some people may be said to be harmed simply by the knowledge that someone is watching pornography in their house. You'd likely say that doesn't "count" as harm; well then, what does? As it turns out, controlling speech under your schema is simply a matter of defining what counts as harm and what doesn't. Yet as philosophers such as Joel Feinberg and Catharine MacKinnon have pointed out, very few people (if any) would like to live in a society in which only harmful speech (or acts, since there is no meaningful distinction between speech and acts other than invoking mind-body dualism) is not permitted.
Then we get to ideas: who's to say that threats or child porn can't carry ideas in them? In censoring them, aren't we censoring ideas too? Some would say the idea that "it's not so bad to have sex with children" is encoded in every instance of child pornography. What if I made my threat into an art piece?
Your argument is unmoving. Child pornography isn't speech, nor is it an idea. Images are records of events, and the dissemination of such records can be directly harmful. There is no ambiguity about the harm principle to be mined from this example.
> In censoring them, aren't we censoring ideas too?
Ideas are by definition abstract and so they should be communicable through some other medium.
You've managed to circumvent my entire post and you're still wrong; my point was that ideas are communicated through a medium, and they can even be communicated through, for instance, threats and pornography. You have given no convincing reason to single out child-pornographic images for censorship while allowing others, such as regular pornography. What differentiates the free speech content of child pornography from other pornography, or even art which required harm in its creation?
Obviously I'm not defending child pornography here, but I think there's a logical flaw in your reasoning.
The fact that ideas can be communicated through other media is irrelevant, since it would mean that we can censor whatever ideas we like in any major category (e.g. ideas conveyed in photography and film) while only farcically allowing them otherwise (e.g. the expression of the idea is only allowed through physical speech).
>and they can even be communicated through, for instance, threats and pornography.
But censoring one particular medium is not censoring the idea. So your attempt at finding a contradiction doesn't hold water.
>What differentiates the free speech content of child pornography from other pornography, or even art which required harm in its creation?
Consent.
>since it would mean that we can censor whatever ideas we like in any major category
This doesn't follow from my argument that censoring one particular medium is OK. Child porn is a genuine special case (direct harm in its production, lack of consent in dissemination) that doesn't transfer to other mediums that don't have the same problems.
Censoring a medium is an instance of censoring the idea, and if censoring one particular medium is permissible, then any number of media may be therefore censored.
>This doesn't follow from my argument that censoring one particular medium is OK.
It does, since by your own admission, censoring a particular medium does not entail censoring the idea.
>Child porn is a genuine special case (direct harm in its production, lack of consent in dissemination)
So this is what I was getting at: you say it's fine to censor a particular way of conveying an idea due to other harms being associated with that particular way of conveying it. In child pornography it's the violation of consent in its production and the violation of privacy in its reproduction. Extending this argument from child pornography to regular pornography, some would say there are significant harms involved there too (e.g. it conveys the idea that women ought to be subservient to men), and then to hate speech.
The core idea is that speech is not absolute, just like actions aren't absolute. You're free to swing your fist so long as it doesn't hurt anyone, and you're free to say things so long as they don't hurt anyone (or require anyone to be hurt, of course). This means that with a sufficiently convincing empirical dataset, we can outlaw regular pornography and hate speech.
If addressing a current harm leads to worse harm in the future, then yes it is an argument against addressing the current harm. I see no reductio here if that was the intent.
The people who use gmail essentially are "letting google decide what is true" by having google do their spam fighting.
And letting google choose which responses to choose to a search request is essentially also asking them to decide what's true (how many people even go to the second page of results, much less later ones?)
I agree with your sentiment (and am not a gmail user -- yuck!!) but I do sometimes use their search engine...and know that when I use ddg I'm ceding the same authority to them.
I ran sendmail from the late 80s until around 2001 when I switched to qmail, which I then used until around 2012. Since then I have used postfix.
My mail and other services run on hardware of mine in a colo. I've had a rack of personal machines in a colo since the mid 90s; before then I simply plugged my machines in at work.
> Google: gay people are bad and women are lesser than men
> You: sue them out of existence.
Well, no, I wouldn't say that, but, OTOH, saying that might be problematic from an employment non-discrimination standpoint. And there are other things Google might say that would be problematic from a standpoint of either consumer or securities fraud. Or libel. But none of that is germane to the point under discussion.
The common theme in every discussion about moderation is that Google, Facebook et al have a division of literal wizards who can wave magic wands to enforce arbitrarily nuanced moderation policies at scale.
This just isn't in touch with reality, but public opinion and pressure has no incentive to be rational, so blunt, restrictive policies like this one are the only path left open to these companies.
In Google’s latest press release they admit they are pushing out a new classifier which flags 2x more comments, and that they are increasing efforts in this area.
I think that the reality is that until very recently there wasn't a huge bag of money attached to having fantastic moderation tools. I don't moderate a large YouTube channel, but I will go out on a limb and guess that the tools are rudimentary.
Basic things like scoring the comment poster as well as the comment itself, the ability to have verified commenters (but why should this have to be a manual switch?), the ability to adjust the sensitivity of the classifier based on the intended audience of the video...
There is a massive valley of opportunity in between what we have today, and “magic wands to enforce arbitrarily nuanced moderation policies at scale” and perhaps the biggest comment platform on the Internet just decided they need to turn off comments entirely and if you do turn them back on, you need to do manual human pre-approval on each one.
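To make the ideas above concrete, here is a toy sketch of what "scoring the comment poster as well as the comment itself" with an adjustable sensitivity might look like. Everything here is hypothetical: the `Comment` fields, the `decide` function, and the thresholds are made up for illustration, and the scores themselves would come from real classifiers.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author_trust: float  # 0.0 (brand-new/flagged account) .. 1.0 (long-standing, verified)
    toxicity: float      # 0.0 (benign) .. 1.0 (almost certainly abusive)

def decide(comment: Comment, sensitivity: float) -> str:
    """Return 'publish', 'hold' (manual review), or 'reject'.

    A lower `sensitivity` means a stricter gate, e.g. for a channel whose
    audience is children. A trusted author offsets a borderline toxicity
    score; an untrusted author does not.
    """
    risk = comment.toxicity * (1.5 - comment.author_trust)
    if risk >= sensitivity:
        return "reject"
    if risk >= sensitivity * 0.5:
        return "hold"
    return "publish"

# The same borderline comment passes from a trusted account...
print(decide(Comment(author_trust=0.9, toxicity=0.2), sensitivity=0.4))  # publish
# ...but gets held for manual review from a brand-new one.
print(decide(Comment(author_trust=0.0, toxicity=0.2), sensitivity=0.4))  # hold
```

The point isn't this particular formula; it's that even a crude commenter-reputation signal plus a per-channel dial already occupies part of that valley between "nothing" and "magic wands."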
>I think that the reality is that until very recently there wasn't a huge bag of money attached to having fantastic moderation tools.
Uh, spam? Spam has a pretty damn massive bag of money attached to it and nobody has managed to get it right. Every single spam filter catches a ridiculous amount of false-positives and spam is far easier to classify than what's required here. Youtube catches a ridiculous amount of comments as false positives in their spam filter. I can't imagine this new category is going to be any better.
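The false-positive problem is structural, not just an implementation failure. With any fixed scoring model, tightening the threshold to catch more spam necessarily sweeps up more legitimate comments. A minimal illustration with made-up scores:

```python
# Hypothetical spam scores assigned by some classifier (higher = spammier).
legit = [0.05, 0.10, 0.20, 0.35, 0.45]  # scores of legitimate comments
spam  = [0.40, 0.60, 0.75, 0.90, 0.95]  # scores of actual spam

def rates(threshold):
    """Fraction of spam caught and fraction of legit comments wrongly flagged."""
    caught = sum(s >= threshold for s in spam) / len(spam)
    false_pos = sum(s >= threshold for s in legit) / len(legit)
    return caught, false_pos

# A lenient threshold misses spam but spares legitimate users:
print(rates(0.7))  # (0.6, 0.0)
# An aggressive one catches everything -- including 40% of real comments:
print(rates(0.3))  # (1.0, 0.4)
```

As long as the score distributions overlap, no threshold gives you both 100% catch rate and 0% false positives; you only choose which failure mode to eat.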
Yea, that's probably fair. I didn't mean to suggest there's no scope for improvement or investment. But I suspect that you and I differ on our estimates of the degree to which this is possible, particularly without fundamentally changing the platforms and drastically reducing their value for legitimate usage.
The existence of the gap also doesn't change the fact that, in my experience, implicit in the majority of the mainstream conversation about these topics is an imagining of tech giants as wielders of unlimited power over their billions of users, and I'm skeptical that they'd be satisfied with most (or any) of the points in the valley you describe.
Not to mention that the flipside of public opinion is that an algorithm's capricious false positives are even more hated than its leniency! All of the suggestions you make are going to increase false positives, and the PR backlash I've seen from that is far worse than the backlash to unmoderated content.
That isn't to say that there isn't room for reasonable complaint, but most people are really, really stupid, and even worse, entirely unconcerned with the consistency of their objections (a problem exacerbated by the fact that different vocal subgroups complain about different problems, and our various media will happily signal-boost all of them). Again, I don't see a solution that threads the needle effectively other than blunt restrictions, leaving everyone worse off but neutering the worst of the PR damage.
I feel like this cart before the horse vision is the problem. If you are going to suck up as much data about the world and monetize it, you have an obligation to spend some of that dough on a coven of wizards who can make the place you have created safe for people. Facebook, YouTube, Twitter, et al have proven to be a disease vector over the past decade and if they want to be rich, they have a responsibility to bleach the shit out of the door handles.
Do you similarly feel that the telephone shouldn't have been rolled out until we could ensure that people couldn't speak about objectionable things, paved roads should have waited on ensuring that getaway cars wouldn't work on them, and the printing press should've been gated on technology that ensured it couldn't print falsehoods? As you can imagine, there are a trillion and one other examples I could bring up: you could ask the same about pretty much every sufficiently big advance in technology and the new structures enabled by it.
This is a serious question, not a rhetorical one: the responsibility line is unclear to me, but it seems to me that there's a level of Luddite absolutism around the topic of Internet platforms that I don't see anywhere else, and it's possible that I'm missing the variable that makes it relevant here where it wasn't elsewhere.
I think they are shining examples of privatizing the profits and socializing the costs. In Google's case they have accrued over $100 billion in savings, and the Chinese government moderates content more effectively and from more disparate sources. A handful of companies provide phone support to over 100 million customers and Google provides automated emails.
I don't think people are this naive by accident: Google, Facebook, etc. have been pushing this idea for a long time. It has always been unrealistic but I don't think we can blame the average consumer for taking the statements of Google at face value.
Decency is famously difficult to define [0]. One person's idea of decency is another's idea of "free and open debate", is another's idea of alternative culture, etc. And the truly bad actors are using throwaway accounts anyway, or will start as soon as you start banning them.
I think active moderation and community curation is the only real solution to this. Sub-communities (formed around particular channels or topics) have to define and enforce their own standards of decency. In this case, I can understand Youtube taking the conservative approach of not trusting communities to do that, especially given a lot of the stories that have come out showing that they aren't.
> And the truly bad actors are using throwaway accounts anyway
Have you tried to make a throwaway Google account lately? It's kind of useless to do so. Not so much in the sign-up, but due to the fact that accounts with no history and no clear ties to a physical identity have nothing "vouching" for them, so constantly trip the "Are you a robot?" checks. (And, I imagine, {email, comments, Google Docs shares, etc.} from such accounts also are invisibly thresholded lower on any spam filters they are run through. This is why people write viruses to take over people's existing Google accounts—accounts without "reputation" are kind of worthless for doing anything public-visible.)
I agree. Much like the email reputation systems that evolved over time, I think other companies (like Google/FB/etc.) will eventually create hidden "reputation scores". For brand-new (maybe throwaway) accounts, those scores will harm the ability to participate in many places. After cultivating reputation (sending emails that don't look sketchy, talking to people who are deemed likely--by IP/demographics/click habits--to be your real-life peers), more participation in different communities will be allowed. Situations where reputation is insufficient for participation will be shadowbans more often than not.
This type of system solves a lot of the throwaway-for-purposes-of-evil-behavior problem, and may well be able to do that algorithmically, in lieu of human moderators. In return, it requires people to give up a lot of privacy/anonymity to participate. I don't think that's an inherently good or bad tradeoff, but it is definitely one people should be aware of.
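To make the idea concrete, here's a minimal sketch of what such a hidden reputation gate might look like. Every signal name, weight, and threshold below is invented for illustration; nothing here reflects any real provider's scoring.

```python
# Hypothetical sketch of a hidden account-reputation gate, as described above.
# All signals, weights, and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int                 # how long the account has existed
    sent_mail_flag_rate: float    # fraction of sent mail flagged as spammy (0.0-1.0)
    peer_interactions: int        # contacts deemed likely real-life peers

def reputation(acct: Account) -> float:
    """Combine weighted signals into a 0.0-1.0 reputation score."""
    score = 0.0
    score += min(acct.age_days / 365, 1.0) * 0.4       # account age, capped at 1 year
    score += (1.0 - acct.sent_mail_flag_rate) * 0.3    # clean sending history
    score += min(acct.peer_interactions / 50, 1.0) * 0.3
    return score

def comment_visibility(acct: Account, threshold: float = 0.5) -> str:
    # Below the threshold, the comment is shadowbanned:
    # accepted without complaint, but never shown to anyone else.
    return "visible" if reputation(acct) >= threshold else "shadowbanned"

fresh = Account(age_days=1, sent_mail_flag_rate=0.0, peer_interactions=0)
veteran = Account(age_days=800, sent_mail_flag_rate=0.01, peer_interactions=120)
```

A throwaway account scores low on every axis and gets silently hidden, while an established account sails through, which is exactly the privacy-for-participation tradeoff described above.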
It’s important to note they’re ceding a fight they never even seriously attempted to engage in where comments and content are concerned. As with all things Google and YT, if it can’t be badly automated, they’d rather kill it than set the precedent of doing anything useful. The short term result will be attention moving on. The long term is building up pressure that will probably result in major regulation. I wouldn’t be shocked if they’re digging a grave in the shape of “They’re too big to be responsible, so break them up.”
If your company has a platform that is beyond your capacity to manage responsibly, then your platform needs to shrink, or be chopped up.
Yeah, this whole "Youtube isn't even trying" is a massive survivor bias/fallacy because it ignores all the failed moderation attempts that aren't visible.
It's less that nobody at Youtube tried to solve the problem and more that it was never a business priority, in the interests of the bottom line, to solve this problem in a nuanced way.
For Youtube, these kinds of comments threaten to alienate advertisers. Comments have only a minuscule impact on viewer retention and engagement, so if the actual business is threatened by comments, you just get rid of the comment system.
Saying it's not a "business" priority is the same as saying that Youtube isn't trying, just with more words. It's also subject to the same survivor bias because we don't know the "business" investments into community and moderation.
Exactly, they want to have their cake and eat it too. Low overhead, automated everything, anonymous comments, and respectability to draw ad revenue and avoid sanctions. Those are mutually exclusive goals on a large platform, and I don’t look forward to seeing hamhanded legislation with the inevitable cronyism and political infighting “solve” the problem either.
Like the big game companies and their kiddie casino business model, it will eventually be slapped down if they can’t control themselves first.
What's even more amazing is that they're willing to do it in the bylines of local newspapers, where there's a non-zero chance of you running into that person in your day-to-day activities.
"If your company has a platform that is beyond your capacity to manage responsibly, then your platform needs to shrink, or be chopped up."
This is how I feel about fake news too. It's not exactly what you said, but if your business can't afford to moderate what spews out of it, then adjust your business or shut down.
>In a company that knows more about you than you can possibly imagine, and more about automated sentiment analysis than anyone in the world, they couldn’t algorithmically determine who should be allowed to post on certain subsets of videos, or devise a system they thought was worth deploying to ensure comments meet a basic level of decency.
Those solutions are not perfect, and every time Google implemented them they caused collateral damage or even major changes to the whole platform (see the demonetization situation). I believe that they refrain from using them unless pressured to, because fixing a problem that is not known to the general public isn't worth the negative PR, complaints, investigations, and the damage to the stability of the platform.
> In a company that knows more about you than you can possibly imagine,
Maybe the problem is that they don't actually know that much about us.
Since one of the other threads (which one, I cannot remember right now -- one of the adtech ones I think) brought up how much supremely accurate location data Google or FB etc knows about any given user, I have been thinking about how much the megacorps actually know, you know, about us.
If someone knows that I go to the grocery store two or three times per week, do they also know that I detest going to the grocery store? I do it for a reason, of course, but e.g. showing me ads for grocery stores is likely to create negative feelings in me.
So, someone might know I go there a lot, but that doesn't mean they know enough about me to make my world better in any useful way.
I don't doubt that some companies know a lot of objective facts about me -- where I go, what I spend money on, with whom I associate -- but I would be surprised if any of them know much about me, about who I am, about what I really like and dislike, about what motivates me, about what I care about, et cetera.
Maybe they know more than I can possibly imagine; maybe they know less.
Maybe that's not at all why moderating comments programmatically is so difficult. I certainly have no idea. But it makes me wonder.
Will you stop using YouTube because of this? No. So Google doesn't lose anything and gains the ability to say they're "fighting toxicity".
In reality, if they wanted to fix comments, they could easily do it without any high-tech stuff, through basic UI redesign.
It's the quintessential broken window effect. The comment section design is horrible for actual discussions. People who have something intelligent to say simply don't bother. On the other hand, anyone who wants a place to put some hostile nonsense with high visibility gets exactly that. The behavior gets normalized and the ratio of hostile nonsense keeps itself at a high level.
They won’t. You can’t fix it through UX or algorithms. Honestly, why allow comments on kids’ videos at all? What value do they deliver over the risk of abuse? None.
What are their other options? Imagine they could, with great accuracy, predict whether somebody is going to leave a good comment or a bad one. Do they expose that to the commenter, saying they've been blocked from commenting because the algorithm thinks they're a shitty person? Imagine the optics of that...
The other option is the shadowban, where toxic commenters are hidden without notice. It sounds like a good idea, but just about led to a user revolt when Reddit tried it.
Simply turning off comment sections that have the potential to become toxic may not be the most technologically interesting solution, but it's the safest PR move.
If the structure of the service offering exceeds the limits of available technology, then the structure of the service is at fault and needs to change. Wishing away the bad people doesn't work, nor does pretending the issue does not exist (their current strategy) until advertisers are forced to threaten them.
The option available to them is clear, it's just not something anyone is willing to accept: connecting untrustworthy anonymous third parties to the bedrooms of 5 year olds cannot be done safely within the limits of existing technology.
This reply is needlessly personal, are you materially affected by this issue somehow?
> Would you ban beaches as well, because creeps might watch others on a beach?
No, but nor would I pretend people can shower and change without cubicles, or permit the nearby village to drown during the first storm because I deluded myself in the belief there was no need for a seawall.
There is no rationality to be found in an absolute "save the children at all costs" position, much as there is none in "freedom of speech at all costs". This issue, like so many others, is nuanced, and I'm surprised people here are so willing to deny that, preferring instead to get worked up over its very existence, or over the plain reality that nuance is required to solve the problem. Progress is never due to people like that.
This is true, as it doesn't need to be a matter regarding only the creeps. I was thinking sociopolitically, where adults related or not make the decision to use minors as props, whether for a thing as big as ad campaigns or as small as for social brownie points. As a grownup, if someone were to post my image without my express permission I could pull some legality in my favor. Children generally don't have the same resources, lacking their own guardianship. I'm just saying that until they can legally make such decisions for themselves, privacy should be the standard, to save them from embarrassment or abuse. We must all decide for ourselves what to share of ourselves.
>This reply is needlessly personal, are you materially affected by this issue somehow?
No, but you're saying that if we can't limit people from posting bad things then we should remove or change the ability to post. Take that same idea into real life. It's impossible to stop people from saying and doing things without excessively infringing on their freedoms and as a result people say and do bad things. You're arguing that we should excessively infringe on their freedoms to prevent that from happening. This leads me to believe that you are against freedom of expression and speech.
>There is no rationality to be found in an absolute "save the children at all costs" position, much as there is none in "freedom of speech at all costs".
But you didn't express nuance. You categorically said that if a service can't do it, then the service needs to change. If you want to paint a picture of nuance, then express nuance.
And I would argue that freedom of speech is a necessity for a free society, meaning that it is almost "at all costs". I've seen what the Soviet era did with people. I don't want to live in such a society.
> You categorically said that if a service can't do it, then the service needs to change
Would you categorically state that if the service can't do it, the service should continue in its present form?
Walls are some of the oldest inventions of civilization, and YouTube currently lacks /any/ walls. I'm arguing for some walls in the right places, you're arguing for no walls whatsoever. We have a difference of opinion, it's fine.
Kids lack the same free speech rights as adults, who don't have to live under a guardian's authority. If you have open floodgates there will invariably be contradictions and conflict, when a little bit of Reason could avoid much of that. Reason along the lines that the owners of even the biggest online platforms are not themselves magically entitled to every datum.
I'm not trying to be prickly here, but I've thought quite a lot about freedom of speech, and I believe that for it to work coherently, it cannot unto itself guarantee an audience, as that would still be mandating opinion, only in the other direction. It can provide legal protections, but not protections from social ramifications, as again, this would be mandating opinion one way or the other. We can live in a free society without that freedom meaning I can help myself to my neighbor's wife. THAT is nuance.
Let's say they are able to make an AI that detects toxic comments.
They can even make it relatively accurate, with 1 error out of 1000 detections.
Sooner or later they will have a false positive that gets a lot of bad publicity as censorship or a false negative on somebody truly horrible.
Just cutting all comments on kids' videos seems like the only option that will let them give the impression that they are taking the problem seriously.
Just like they could not afford to have Google Photos label dark-skinned people as gorillas 1 time out of 100,000 photos. Better to just remove that label entirely.
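The scale problem is worth making concrete with a back-of-envelope calculation. The daily comment volume below is an assumed round number, not an official figure:

```python
# Back-of-envelope: even a 1-in-1000 error rate yields a steady stream of
# high-profile mistakes at YouTube's scale. The comment volume is an
# assumption for illustration, not an official figure.
daily_comments = 100_000_000   # assumed: 100M comments posted per day
error_rate = 1 / 1000          # classifier is wrong once per 1000 decisions

errors_per_day = daily_comments * error_rate
# ~100,000 misclassified comments every single day: plenty of raw material
# for a "censorship" outrage story or a missed predator, whichever surfaces.
```

At that volume, "relatively accurate" still guarantees that some individual failure becomes a news story sooner or later.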
Algorithmic contextual sentiment analysis is still, very much, an open question. Not even the best data and scientists in the world can get all that close to inferring the context clues that the human brain is capable of.
I think this is indicative of their actual ability to meaningfully use this data. The hype around their algorithms and machine learning paints a picture that doesn't match reality.
Google is really a shell of its former self... hell, search doesn't even function correctly for me in Gmail of all things. Google's hands-off algorithmic approach to moderation and support has repeatedly been shown to fail and to be easily exploited by nefarious actors. I'd like to see them hire some actual people for once; I think they have the budget for it.
My guess is they decided it would take them a little while to do it algorithmically and would thus lose even more advertisers, so they bit the bullet and disabled comments. I suspect they're working hard on a different solution.
Note this is ONLY for videos featuring minors, not videos in general. That's a really big difference, I think, and it doesn't mean they're giving up on all comments. It really does seem like a harder problem than generally toxic videos. From the reporting, it seemed like the whole thing was just comments linking to other videos (not sure anywhere had examples), which is perfectly innocent on its own; the line between that and being part of this pedophile comment ring is a matter of exactly where in the video the link leads.
Google probably could devise such an algorithm or system, but rather chose the cheapest solution that required the least amount of human intervention. That shouldn't surprise anyone, as their goal is maximizing ROI from ad revenue.
SlateStarCodex's recent post on a similar topic was illuminating [1]
> It’s very easy to remove spam, bots, racial slurs, low-effort trolls, and abuse. I do it single-handedly on this blog’s 2000+ weekly comments. r/slatestarcodex’s volunteer team of six moderators did it every day on the CW Thread, and you can scroll through week after week of multiple-thousand-post culture war thread and see how thorough a job they did.
> But once you remove all those things, you’re left with people honestly and civilly arguing for their opinions. And that’s the scariest thing of all.
> Some people think society should tolerate pedophilia, are obsessed with this, and can rattle off a laundry list of studies that they say justify their opinion. Some people think police officers are enforcers of oppression and this makes them valid targets for violence. Some people think immigrants are destroying the cultural cohesion necessary for a free and prosperous country. Some people think transwomen are a tool of the patriarchy trying to appropriate female spaces. Some people think Charles Murray and The Bell Curve were right about everything. Some people think Islam represents an existential threat to the West. Some people think women are biologically less likely to be good at or interested in technology. Some people think men are biologically more violent and dangerous to children. Some people just really worry a lot about the Freemasons.
...
> The thing about an online comment section is that the guy who really likes pedophilia is going to start posting on every thread about sexual minorities “I’m glad those sexual minorities have their rights! Now it’s time to start arguing for pedophile rights!” followed by a ten thousand word manifesto. This person won’t use any racial slurs, won’t be a bot, and can probably reach the same standards of politeness and reasonable-soundingness as anyone else. Any fair moderation policy won’t provide the moderator with any excuse to delete him. But it will be very embarrassing for the New York Times to have anybody who visits their website see pro-pedophilia manifestos a bunch of the time.
> Every Twitter influencer who wants to profit off of outrage culture is going to be posting 24-7 about how the New York Times endorses pedophilia. Breitbart or some other group that doesn’t like the Times for some reason will publish article after article on New York Times‘ secret pro-pedophile agenda. Allowing any aspect of your brand to come anywhere near something unpopular and taboo is like a giant Christmas present for people who hate you, people who hate everybody and will take whatever targets of opportunity present themselves, and a thousand self-appointed moral crusaders and protectors of the public virtue. It doesn’t matter if taboo material makes up 1% of your comment section; it will inevitably make up 100% of what people hear about your comment section and then of what people think is in your comment section. Finally, it will make up 100% of what people associate with you and your brand. The Chinese Robber Fallacy is a harsh master; all you need is a tiny number of cringeworthy comments, and your political enemies, power-hungry opportunists, and 4channers just in it for the lulz can convince everyone that your entire brand is about being pro-pedophile, catering to the pedophilia demographic, and providing a platform for pedophile supporters. And if you ban the pedophiles, they’ll do the same thing for the next-most-offensive opinion in your comments, and then the next-most-offensive, until you’ve censored everything except “Our benevolent leadership really is doing a great job today, aren’t they?” and the comment section becomes a mockery of its original goal.
>It's ironic, because the left, not the right, has spent nearly the last 50 years promoting pedophilia.
... What? Your evidence is that Salon.com had one article about people who specifically avoid acting on their urges?
>But what is kind of shocking is how anti-free-speech so many on this site are. I'm old enough to remember the idealistic - although rather fatuous in hindsight - early days of the "internet culture" which promoted John Perry Barlow's "A Declaration of the Independence of Cyberspace" [0] and John Gilmore's sentiment that "The Net interprets censorship as damage and routes around it." [1]
The difference is that many of us have seen how bad groups have benefited from the current state of affairs since then. It was easy to be idealistic about free speech on the internet back when it was almost solely populated by academic types. But the world changed, and we don't have to be dogmatic about old ideals.
>>When Barlow said “the fact remains that there is not much one can do about bad behavior online except to take faith that the vast majority of what goes on there is not bad behavior,” his position was that we should accept the current state of affairs because there is literally no room for improvement. [...] In my opinion, Barlow’s opinions on online behavior, given his standing and influence were irresponsible. [...] Saying “we can do nothing” is like saying it’s not worth having laws or standards because we can’t achieve perfection.
>and "racist" sentiments, are quite popular in America, Europe, and in all countries among all human populations on earth.
Just because it might be natural or common doesn't make it acceptable and worth spreading.
>many of us have seen how bad groups have benefited from the current state of affairs since then. It was easy to be idealistic about free speech on the internet back when it was almost solely populated by academic types. But the world changed, and we don't have to be dogmatic about old ideals.
Exactly my point. This is simply a minority political faction using their political and economic power to silence those they don't agree with, all the while hypocritically pretending it's about pedophilia, which they, more than anyone, have been attempting to mainstream.
>Just because it might be natural or common doesn't make it acceptable and worth spreading.
Unacceptable - to you - you are simply asserting moral superiority. Why should anyone accept your assertion? The left certainly has no objective definition of "racist" which is why certain kinds of "racist" speech are acceptable to leftists. Again, this is the point. You, Google, Facebook, the SPLC and the ADL are simply declaring themselves the moral authority and censoring speech. The pretense at morality is just that, a pretense. It's just a display of raw political and economic power, couched - as naked displays of power typically are - in the language of morality. Many people find your ideas unacceptable and not worth spreading.
> Your evidence is that Salon.com had one article about people who specifically avoid acting on their urges?
That is called "an example", and one could give 100 more from the last 50 years; for instance, the German Green party chairman, who admitted in his own book to molesting children at his leftist school as part of "sexual liberation." But I obviously can't give any more, because YCombinator doesn't allow expressions of opposition to the Silicon Valley/Democratic party political establishment, hence the removal of my comment.
I extensively watch music videos and live music performances on YouTube, and the comments are fun to read. Not sure they are completely necessary, but I've learned some interesting things and been turned on to good bands from YT comments!
I watch lots of hobbyist how-to videos and the comments are usually pretty nice. At the very least they are some human engagement from the viewers to the author, without which making/posting videos would seem pretty lonely.
Could be a function of what you watch. I frequent cricket/soccer/Indian music videos, which matches your experience. Other things like VASAviation are filled with awesome comments.
This very much depends on what part of YouTube you tend to be in. Much like any online massively-used portal, different segments of videos will attract vastly different users and types of comments. There is somewhat of an over-generalization of YouTube comments, where the description of toxic comments seems more accurately applied to truly /viral/ videos: those that have broken beyond normal segment boundaries, and where the 'best' comments are perceived to be those that draw the greatest reaction, as opposed to genuine commentary on the video itself.
I'm surprised they don't just disable comments for viral videos, too.
Now that I think about it, imagine the equivalent of that for other services. For example, imagine if all of Reddit's "default" subreddits (the ones that make up the front page when you're not logged in) had no comments attached to posts. To get comments, you'd have to opt into a community. (Which might very well just be a post-for-post mirror of a "default" community, but with comments enabled.)
I've found that comment quality is pretty reliably proportional to the specificity of the audience niche. I also have the hunch that any meaningful algorithmic comment moderation would have to approach being a general AI. There might be room for advancement in machine-assisted moderation, though.
Low quality channels have low quality comments. High quality channels have high quality comments. I feel a bit embarrassed for people who claim they've never seen a good comment on youtube.
Here are two examples I came across a few minutes ago:
> Grady, to gain variable flow control, could you use the horizontal angle of a folded weir? If the point of the folded weir was, say, 5 degrees higher than the outer sides, a slow-flowing river would only flow over the lowest parts of the crest. As the flow increased, more of the weir's crest would be used by the water. This would effectively self-regulate the weir's geometry.
response:
> Such structures exist and they're called compound weirs. These structures come in various cross-section geometries which can be tailored to provide better control of water levels under various discharge rates. The structures discussed with a fixed crest height, also have a fixed relationship between the upstream water level and discharge capacity.
I think that's an interesting exchange, don't you? Nobody is tossing around profanities, neither claimed the earth is flat, called the other a 12 year old, or anything like that. Insightful and constructive comments are common on decent channels, and non-existent on trash channels. That's not really a reflection of how youtube works, but probably a symptom of something more fundamental about human nature.
There are a few channels -- Ben Krasnow's 'Applied Science' comes to mind -- where the commenters tend to be well-informed and supportive.
But yes, in general YouTube commenters are enough to make one question the long-held conventional wisdom that a nuclear war would be a bad thing. I don't envy the people at Google who have to support and maintain the commenting system on YouTube.
> The fact that Google is ceding the fight against toxic comments on YouTube is actually pretty shocking.
Not shocking at all; it's a typical "welcome to the Internet" moment for somebody outside of "computer people" culture.
Internet is the future of ads, Internet is also the end of ads.
Ads are made and used by rich, well-fed people, and mostly targeted at people on the other side of the social ladder.
TV was perfect for that: it was one-sided.
Now the "reflux" that comes back from Internet culture is hurting them. When some random nobody can pour dirt and bile on videos of Pierre Cardin-toting, avocado-munching "successful people", it destroys that image.
Google really should put it that way: either you agree to have your ads run on every type of scatological content, with your image covered by a metre of dirt, or you don't advertise at all.
> A small number of creators will be able to keep comments enabled on these types of videos. These channels will be required to actively moderate their comments, beyond just using our moderation tools, and demonstrate a low risk of predatory behavior. We will work with them directly and our goal is to grow this number over time as our ability to catch violative comments continues to improve.
> Let people discuss videos on other sites like twitter or reddit.
Yes, exactly. I've never understood why content-host sites seem to think attaching first-party comment hosting directly to hosted content is a good idea.
Why not just host what you host, and then let aggregator/discussion sites (various subreddits, various private forums, Usenet, Slack/Discord groups, etc.†) link to your thing-that-you host and host comments? You can have RSS feeds to ensure those aggregator sites can—if they want—automatically generate their own link-posts when your site has a new post.
And, best of all (from my perspective, at least), it doesn't force people who are consuming your content for different reasons into the same room where they will inevitably shout over one-another due to competing access needs. Each community can have their own conversation about your content, and can feel morally justified in kicking out uncivil people, since they can just go find a different conversation about the same thing in a different community. Rather than there being one "canonical" conversation that they feel shut out of.
† I exclude Twitter from this list because Twitter has no concept of partitioned subcommunities, and so inevitably, if there's a conversation about something on Twitter, it becomes the "canonical" conversation. That's kind of the point of Twitter, for some use-cases (e.g. celebrities arguing with other celebrities where other celebrities can watch and get pulled in), but it's bad for the use-case of regular people trying to have a regular discussion.
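The RSS-driven flow described above (aggregators auto-generating their own link-posts when the host publishes) can be sketched in a few lines. The feed XML here is an inline stand-in for a real fetched feed, and the post shape is hypothetical:

```python
# Sketch of an aggregator turning a content host's RSS feed into link-posts.
# The feed XML is an inline stand-in for a fetched feed; the dict shape of a
# "link-post" is invented for illustration.
import xml.etree.ElementTree as ET

feed_xml = """<rss version="2.0"><channel>
  <item><title>New video: weirs</title><link>https://example.com/v/1</link></item>
  <item><title>New video: dams</title><link>https://example.com/v/2</link></item>
</channel></rss>"""

def link_posts(xml_text: str) -> list[dict]:
    """Build one link-post per feed item; discussion happens on the aggregator."""
    root = ET.fromstring(xml_text)
    return [
        {"title": item.findtext("title"), "url": item.findtext("link")}
        for item in root.iter("item")
    ]

posts = link_posts(feed_xml)
```

Each community runs this against the same feed and hosts its own conversation under the resulting posts, rather than everyone sharing one canonical comment section on the host.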
There's been a video on yt about a real airline pilot reviewing the sully movie [0]. The comment section is filled with people from the field, pilots, turbine technicians, etc. Even Sully's Co-pilot Jeff Skiles HIMSELF commented on the video. While there are usually tons of low quality "upvote this comment if watching in 2019" things, this thread was wonderful. And I think if the comments were hosted on a different site e.g. some pilot specific website (there are probably such), or even twitter, I think I wouldn't have found those comments.
Were there any valuable comments that weren't root-level (or that wouldn't have been made as root-level comments if it were impossible to reply to comments)?
I ask because I feel like you're describing the same sort of system I am in my sibling reply below:
> But really, if content creators want to get "feedback", they just need a thing that's essentially like email—a separate private channel between them and each audience-member—but where you can easily forward a whole email thread, after the fact, to the content host, and it'll appear below the content in an FAQs section. Sort of like how reviews work in some digital storefronts.
If all such comments were submitted knowing they'd just be treated as "private feedback" by default, with the moderator having the option to promote such feedback to appearing as a public "comment" [which, yes, isn't that different from "comments with pre-moderation", except for different expectations on commenters' parts], do you think that (valuable!) thread still could have happened?
Yeah, Jeff Skiles's comment got a lot of replies and he replied to some of them. This spontaneous AMA thing was definitely valuable, made the guy even more approachable!
Cramming everything into one flat thread has already been done in phpBB; it works for small threads, but large threads become unwieldy when you're only interested in some parts of the discussion rather than all of it.
1. the creator has their own private forum (e.g. a Discord group);
2. you, as a potential commenter, can easily join said group as a guest, with your YouTube credentials used to identify you in the group;
3. or you can also just post "into" the group without joining it, using the form below the video;
4. the community itself can be deputized to moderate such comments;
5. the community can have all sorts of thought-provoking discussions in response to such comments, visible to one-another in the (private) group;
6. the content-creator would have an easy interface to "highlight" any given thread/subtree of conversation from the group, making it into a visible comment thread appearing below the video.
Essentially, this proposal would just take each YouTube channel, and make its comments section into a private forum under the content-creator's control; and then, separately, make the "comments section" below the video into a sort of moderated "best of" version of that forum's discussion of the given post.
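The promote-to-public workflow described above can be sketched as a tiny data model. This is purely an illustration of the idea, not any real YouTube or Discord API; all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str
    replies: list = field(default_factory=list)
    # Comments start out visible only inside the creator's private group.
    public: bool = False

def promote(comment: Comment) -> None:
    """The creator 'highlights' a thread: the comment and its whole
    subtree become visible below the video as a 'best of' excerpt."""
    comment.public = True
    for reply in comment.replies:
        promote(reply)

# Usage: a private group thread the creator decides to surface.
root = Comment("alice", "Great breakdown of the ditching checklist!")
root.replies.append(Comment("bob", "Agreed, the APU start call-out was key."))
promote(root)
assert root.public and root.replies[0].public
```

The key design point is that visibility defaults to private, so the public comments section is an opt-in curation rather than an open firehose.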
The community is the valuable part; creators want people to find their stuff, users want to find interesting channels and their fans, etc.
If you evict the community to an outside site, will anyone care whether the video links start pointing to Vimeo or another video host instead of YouTube? Probably a bit, but not that much.
Why would YouTube make changes that turn it into a more replaceable part?
I didn't suggest that you remove the community. Users have a stream of liked content, a list of subscriptions, etc. That's valuable on its own—just ask Pinterest. You don't need comments to make the experience social.
I don’t think of Twitter for discussion as much as for feedback. Whether you liked the video, what you’d like to see the next video be about, etc. Relatively 1-way communication.
Twitter definitely works great for that use-case. But in the process, it also frequently blows up into a tire-fire of "hot takes", when [Alice from subcommunity "Skub"] takes a feedback thread between you and [Bob from subcommunity "Anti-Skub"], and retweets it to flame Bob, or to flame you for talking to Bob—and you, the content-creator, get pulled into the ensuing discussion as dozens of people keep @-ing you about it.
I think, if users couldn't see (or maybe just couldn't retweet) one-another's DMs to a checkmark'ed-account (or to any account that was "made for the purpose of feedback"—maybe we could have a separate icon for that?) it'd fix a lot of Twitter's current problems.
But really, if content creators want to get "feedback", they just need a thing that's essentially like email—a separate private channel between them and each audience-member—but where you can easily forward a whole email thread, after the fact, to the content host, and it'll appear below the content in an FAQs section. Sort of like how reviews work in some digital storefronts.
What you're missing is that those other sites can eventually come under fire for the same reason. Then they will take the same route.
Also, imagine being a small creator who makes YouTube videos. Are people really going to go find your subreddit to post comments on the video? That sounds inefficient and serves no real purpose.
Google has been moving toward that model for a long time... ideally, everyone will be using the "I'm Feeling Lucky" button without typing anything (then they can show you whatever they feel like).
Over time it will probably become a large fraction, as YouTube stabilizes its rules, increases automation for monitoring them, and, frankly, as channels that want integrated comments but still can't meet the rules leave YouTube for alternative platforms, or abandon the affected kind of content for some other pursuit.
What a ham-fisted reaction to the controversy. Now all content creators have to ensure that no kids ever appear in their videos, including background footage and movie clips.
I think this is a major problem with our current centralized model of the internet where giant corporations make all the rules. Large corporations have no ability to rationally judge situations or inject nuance into discussions. They simply take the easiest and most expedient pathway to make the problem go away regardless of the long term consequences. This is not behavior we want from the few major building blocks of our modern life.
I disagree that this is a problem with giant corporations. I think the problem lies in the public that demands blood on everything. There is no solution to the problem that youtube is being asked to solve other than simply banning all videos with kids in them.
Not even the public, it's the small group of news outlets which are trying to discredit youtube as a source of news and entertainment. Youtube is a direct competitor to cable news and the networks that host them, so being able to say, "Look! There's bad stuff that they didn't know about!" is a great way to push that narrative and reclaim some advertising dollars.
This isn't even a reaction to The Verge and the other news articles that originally covered the story; it's due to the advertisers that pulled out of YouTube. There's very little chance the advertisers pulled their ads because they hold the 'moral high ground'; big corporations like AT&T and Disney are simply doing this so that, after the issue has passed, they can negotiate lower ad rates with YouTube. If there were no ads whatsoever for "Avengers: Endgame" on YouTube, the movie would still be popular, but it wouldn't generate nearly the same amount of buzz or 'hype' as the first one did.
This sort of stuff and the fake news hysteria in general seem like an attempt to discredit all alternative media sources, and the internet/blogging/free speech in general. For them, competing with thousands of people writing/making videos for free means their business model doesn't work any more.
Fact is, they won't stop with these articles and attack pieces and hysteria till the internet is basically like cable TV.
Google cares too much about being polite and correct. They are hyper-sensitive to any controversy about them. On the other hand, controversy loves to find its way to Google, because "bad big tech, rogue algorithm" is a hot topic these days.
Well, that is what this will end up as: in an era where everyone is easily offended by anything and everything, the common denominator might be a very, very small number.
Ham-fisted is right. This is going to kill family vloggers, or even just vloggers who happen to have a family in the background. YouTube is a lot less engaging when you don’t feel like you can interact with the creators or other viewers. I know everyone trots out the “YouTube comments are a toxic cesspool and provide no value” trope, but almost every video I watch on YT 1) has a kid in it somewhere, and 2) has worthwhile and positive comments.
It’s like the joke about nuking your house from orbit because you saw a spider. Personally I’m far more concerned about the creepy videos aimed at kids than I am about pedos leaving hobo signs in the comments. As another commenter said, they’ll just move to a separate site to link to videos. You can’t stop them from watching videos and sharing them with other people unless you shut the site down completely.
Do we know if that's the final solution? To me it seems more like a short-term stopgap until they can train specialized models and adapt their moderation. I see no signs that this is the long-term answer to the problem.
I'm sure they're just as aware as we are that disabling comments kills the community and engagement around many channels; they just needed a way to stave off the controversy for a little while. It's a blunt instrument, but as a temporary measure it seems fine, as long as they can roll out something better before content creators are irrevocably hurt.
Yup. Not only do corporations have no way to rationally judge these things, they have innumerable perverse incentives not to even try, and to just ban everything. Whatever hurts their bottom line will soon be censored, along with anything that even remotely looks like it.
Initially no one will leave, because the alternatives don't let you rake in money or have the network effect that YouTube does. But as the noose tightens and the money dries up, people will act on their greed and move to some new centralized service. Then it'll happen again in a handful of years' time.
The only way to cut this Gordian knot and break the cycle is to use federated services, P2P, or self-hosting.
Agreed about the problem of centralization on the internet, but this is what happens when problems are ignored: they get amplified. Still, this is Google’s responsibility to police and fix.
If YouTube's ad revenue were negligible to the value proposition of the platform, they would not be profitable and they would not give the service away for free.
> I just don't see how you get 24/7, 4k streaming without it being monetised by a centralized corp
For popular video, the cost center is bandwidth, not storage. In that case, WebTorrent and IPFS are both capable answers, as long as you can get past the initial viewership spike: early viewers of most online video are power users who will stick around and can be used to seed the video to the broader audience, and as the popularity wanes, the availability of users willing to act as seeders would likely wane along with it.
For unpopular video (esoteric, niche media) I don't think there is a good distributed answer. You'd want to use something like Filecoin as a distributed, incentivized data store, but it isn't browser-compatible. And nobody can come close to guaranteeing the availability of any video anyone wants to upload without resources like YouTube's.
I guess theoretically you could have a distributed ecosystem of video hosts, akin to Filecoin, but where members that have the requested videos act as HTTP servers and serve them directly. No central authority would host the video, and the actual host could be compensated by the relaying party. Of course, then you are just building a really complicated, bureaucratic way to effectively rent rack space, but at least it would be a distributed one.
But at least half the YouTube centralization problem is solved, and as I sketched above, there are interesting ways to approach the latter half.
Of course, all of these have the problem that Western copyright is wholly incompatible with how digital information transfer operates, but that's a larger problem than online video, and it's not really one you can solve with a technical solution, besides repeatedly demonstrating the obsolescence and social/economic harm of the incumbent copyright regime.
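To illustrate why bandwidth rather than storage dominates for popular video, here's a back-of-the-envelope sketch. All numbers are hypothetical illustrations chosen for the example, not measurements of any real service:

```python
# Back-of-the-envelope: origin egress with and without P2P seeding.
# All figures below are hypothetical, purely to show the shape of the math.

video_size_gb = 0.5        # one ~10-minute 1080p video, stored once
viewers = 1_000_000

# Centralized hosting: the origin serves every byte to every viewer.
central_egress_gb = video_size_gb * viewers

# P2P (WebTorrent/IPFS-style): once early power users are seeding,
# the origin only needs to cover some residual fraction of demand.
origin_share = 0.05        # assume seeders cover 95% of traffic
p2p_egress_gb = central_egress_gb * origin_share

print(f"centralized egress: {central_egress_gb:,.0f} GB")
print(f"origin egress with seeding: {p2p_egress_gb:,.0f} GB")
```

The storage cost stays at one copy (0.5 GB here) either way, while egress scales linearly with viewers, which is exactly the cost that seeding can offload.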
> If YouTube's ad revenue were negligible to the value proposition of the platform, they would not be profitable and they would not give away the service.
Well, it is negligible. YouTube has never been reported to be a profitable business. The last time its finances were reported, a few years back, it was basically ‘break even’.
Would that decentralised idea work? Seems pretty cool.
Check out DTube (https://d.tube/) - it's effectively a decentralized YouTube built on IPFS. It's built on Steem to incentivize video uploading/engagement (https://about.d.tube/)
There's also PeerTube, but with DTube in particular there are two major problems:
1) Steem in general is a mess. It's controlled by the parent company, and you cannot mine it; the only way to generate it is on their content platforms. It's only a cryptocurrency insofar as it has public transaction records; without distributed proofing, the integrity of the blockchain rests on faith in Steemit. Likewise, the value of the currency can solely be attributed to how the public perceives the value of a Steem token. That's true of every crypto, but most cryptos are backed by their distribution of work, time, space, or stake. For Steem, it's about the popularity of Steemit websites.
2) Using IPFS for the video hosting is what I'm advocating for, but nothing addresses the second problem, availability. DTube simply centralizes on the site and depends on the valuation of the cryptocurrency to keep the lights on while guaranteeing an available seed of every video on the site. If you host your own DTube instance, you would need to seed every video uploaded, lest you risk videos being lost.
Additionally, to my knowledge, getting money out of Steem is quite challenging, since there is no corporate business account on Steemit's end. You have to use third-party wallets and exchanges to sell Steem. Good luck trying to onboard normal people onto something like that.
That all being said, I don't think DTube is bad. It's a very novel approach to bootstrapping a video site by incentivizing participation with a deterministic token given out based on popularity. But that depends on constant, sufficient growth to keep the hype going so the token isn't arbitraged down to nothing in value. It's only worth anything for its scarcity determinism, which isn't worth much in an endless sea of cryptocurrencies.
Your last point is valid, not sure how that service can be truly open. However, they do have a responsibility in the same way a city has responsibilities around keeping crime low, streets clean, etc., so it's business friendly and local business can grow. What they just did is like police finding out about pedophiles and then proceeding to shut down all parks and playgrounds, along with a curfew for everyone to be home by 8pm.
It's not just a service provided by a benevolent Alphabet dictator. YouTubers and YouTube are in a mutual relationship (it's easy to argue YouTube depends on YouTubers as well), and it's currently very lopsided.
> It's not just a service provided by a benevolent Alphabet dictator.
Except it kind of is. As I mentioned elsewhere, YouTube has never reported a profit. That doesn’t mean there isn't tremendous value they’ll get from it at some point, but for now it’s a huge amount of effort for marginal reward.
Hmm, OK, I don't know the details of their profit model, but I'm sure this isn't just charity, and that they have some plan for profiting from content creators if they haven't already. It's business from both sides. I'm sure YT would shut down if it didn't see the value of content creators.
There are tons of websites that allow you to stream and upload high-quality videos. In 2019 that's nothing special. The only thing YouTube has and none of the others do is a giant audience... which is built thanks to videos uploaded by content creators, mostly for free.
YouTube is incredibly important to the median young person in modern society. It's where probably a quarter of all my own personal STEM knowledge comes from. In high school it might have been as much as half.
Like it or not, products like YouTube, Instagram, and Snapchat have become keystones in the lives of the younger generations.
As a father of very young kids, I think this is nothing short of an emergency. Not only should commenting on kids' videos be banned; YouTube should also heavily vet and tag all their kids-related videos. These days kids are hooked on watching cartoons and poems on YouTube, and a lot of the time there are really garbage videos targeting kids, just showing little girls buying and playing with makeup, cleaning houses, and learning about shopping.
People said the same about the chat rooms we used when we were kids. You could argue that those had much more potential than Youtube comments to lead to actual physical harm. In some cases, bad things really happened. Chris Hansen made a show about it, and police used chat rooms to find pedophiles. But were internet chat rooms ever an "emergency"? Where do we draw the line with bans?
There are family-friendly content creators who diligently police their comments sections who now have their livelihoods severely impacted.
It's not only the stereotypical or extreme behavior in kids' videos that I'm talking about; it's also about banning videos (and the people who make them) that really are harming kids today.
Imagine if your kid really enjoys watching crap youtube shows like “Emma Pretend Play”, “Nastya and Baby”, and “Ryan toys review”, what exactly are they telling the kids? That money is meaningless and buy buy buy?
But this is a completely separate point from the topic itself. The content curation of your kids is up to you as the parent. Don't let them watch videos like that if you don't want them to watch them.
With this kind of issue, there is no one "right" line; everyone has their own. What you can do is fight for your line, because if you don't, others will try to impose theirs. That being said, I personally favor no ban whatsoever.
It's not even just about individual child predators. It's a parent's responsibility to protect their child from harm. That used to just mean making sure they don't get hit by a car or burn their hand on the stove, but nowadays it means shielding them from hostile actors looking to exploit them psychologically, and modern society is completely inundated with such actors. Disney, Netflix, Google, etc. are all in the business of addicting your kids as a profit center. Their minds are sponges, and it's critical that parents realize what they let soak in will hugely influence the person their child becomes. I don't want to fathom how mentally damaged the kids will be who grow up on Skinner-box mobile games, pregnant Elsa & Spiderman videos, manipulative social media, and the fruits of billion-dollar ad budgets deployed by some of the smartest minds alive coming out of Disney. It will probably be on a similar scale to how many children pre-1980 grew up with regular beatings and resource deprivation by their parents, leaving them bitter, addled adults with endless mental illness.
They aren't going to be any more damaged than the previous generations. Why? Because none of this is new. This kind of manipulation happened and happens all the time in all spheres of life. How do you think religion spread?
The accessibility and availability of screens to glue children's faces to is totally novel and modern. There used to only be "Saturday morning cartoons" because the single TV in the house was dedicated to content the entire family, of all ages, could watch the rest of the day. Even at the height of wanton kids' advertising TV (shows in the '80s like TMNT, He-Man, and Transformers that were just 30-minute ads for toys), the design of the product was never to totally subsume the child in it at all hours.
We are seeing substantially more children coming of age now without any social skills or ability to interact with people, because they spent all their time glued to screens. These screens are highly addictive, and the kids are conditioned to consume and emotionally stunted: zombies by any other name, programmed through operant conditioning by smart minds to crave the gambling dopamine releases of insanity like loot boxes.
> We are seeing substantially more children coming of age now without any social skills or ability to interact
I would even say that the stubborn refusal to even see a problem here, and the shallow, abstract and disinterested ways in which that is argued, is evidence of pretty hefty damage already inflicted on previous generations.
I won’t let my kid watch YouTube at all. Not when we have people editing Peppa Pig into being a murderer and listing it as a kids' cartoon.
That said, blanket disabling of comments on anything that features a kid is stupid. One channel I watch, laowhy86, who vlogs about China, had comments disabled on all his videos because his kid is in the pram during his vlogs.
>also youtube should heavily vet and tag all their kids related videos
Clearly YouTube usage needs adult supervision more than it needs Google trying to moderate comments and block specific content from children. It will never work.
This seems like a ham-fisted solution does it not? Channels like ChadTronic play on the nostalgia of being a kid back in the '80s and '90s, yet AFAIK has been unable to enable comments on any of his videos. The weird, funny comments are a big part of the experience in those kinds of videos. Some tropes of the show itself have been built around the comment section.
Would it make more sense to instead target the patterns used by predatory commenters rather than shutting down the system completely? This is Google, the company that brings meaning out of arbitrary data; is it not possible to build social graphs of who likes, scours, and activates these timecodes? Couldn't you restrict the features that enable those patterns?
Lastly, does forcing these comments off negatively impact the rankings of these creators? Comments have traditionally played into the engagement of any said video and had an (understood) impact on how a video ranks on release. Are these channels now just permanently stunted in their future growth?
>Lastly, does forcing these comments off negatively impact the rankings of these creators? Comments have traditionally played into the engagement of any said video and had an (understood) impact on how a video ranks on release. Are these channels now just permanently stunted in their future growth?
1) Yes.
2) Yes.
YouTube has taken this action without an overall plan. One YouTube creator was told as much in a chat with YouTube support. (https://youtu.be/oeI0-ijIotk?t=504) They indicated to him that some creators will be negatively impacted until everything is worked out. The program is working just like they planned. lol
Or ya know they thought the situation was sufficiently urgent that they needed to do something immediately without thinking through every possible eventuality. Have you never dealt with a PR emergency before? Often, the "let's take two months to game this out" approach is not the most effective way to address the problem.
>Would it make more sense to instead target the patterns used by predatory commenters rather than shutting down the system completely?
Terrible, terrible idea.
Transfer this to the "real world", and see how it plays out - let's build a system that will analyze people's behavior and penalize them on whether it classifies them as pedophiles or not! This will surely protect the children.
Moderation is a notoriously tough problem even without the whole trouble of automating it through something as "ephemeral" as content analysis.
>Transfer this to the "real world", and see how it plays out
This isn't the real world though. This is, tentatively, "pattern match on videos uploaded by non-verified creators with little or no uploaded content, featuring primarily or entirely children, with an unusual level of timestamps in the comments, in an unusual number of playlists that also fit this description."
The main challenge here is more meta: how do you discover these videos before the engagement identifies them as such. I think that's why YouTube went with the overreaching "throw a NN at the problem and just flag anything with kids" solution.
Probably wasn't the best solution from an engineering perspective, as many here have pointed out. But it may have been the smartest thing to do from a PR perspective.
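The narrower pattern-match described above could be sketched as a simple scoring heuristic. Every feature name and threshold here is hypothetical, invented purely to illustrate the shape of such a rule; nothing is drawn from any real YouTube system or API:

```python
def suspicious_score(video: dict) -> float:
    """Score a video on the signals described above; higher = more suspect.
    `video` is a plain dict of hypothetical features, not a real API object."""
    score = 0.0
    if not video["uploader_verified"]:
        score += 1.0
    if video["uploader_video_count"] < 3:       # little or no uploaded content
        score += 1.0
    if video["features_children"]:
        # Unusual fraction of comments containing timestamps.
        if video["timestamp_comment_ratio"] > 0.3:
            score += 2.0
        # Appears in many playlists that also fit this profile.
        if video["suspect_playlist_count"] > 5:
            score += 2.0
    return score

video = {
    "uploader_verified": False,
    "uploader_video_count": 1,
    "features_children": True,
    "timestamp_comment_ratio": 0.5,
    "suspect_playlist_count": 8,
}
assert suspicious_score(video) == 6.0  # flag for human review, not auto-ban
```

The meta-problem noted above still applies: several of these features (comment ratios, playlist co-occurrence) only exist after engagement has happened, so a heuristic like this catches videos late rather than at upload time.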
Man, people are depressing. I'm guessing the scumbags will have to post scummy things on other forums and link to videos. At least the consumers on YouTube won't be subjected to it there.
Incoming new challenges for Reddit. Maybe it's an opportunity to get rid of some more scummy communities.
You are depressing. You are validating blanket "solutions" that harm creative content, and at what cost?
This obsession with regulating and controlling everything that happens on the internet is just hysterical. Also, "think of the kids" is the most common of the appeal-to-feelings fallacies; it keeps getting hammered and hammered, and it's _depressing_ to see people fall for it.
When I saw your comment was flagged I vouched for it. For the record I don't consider it a personal attack.
My reply was thus, before it was re-flagged.
YouTube comments are so painfully low quality, not to mention badly designed as far as interaction goes, that they are nearly 100% useless as it stands. It's a minor loss at best.
Reddit is an actually useful discussion forum, with a usable model of interaction and subcommunities dedicated to particular interests, leading to discussions that are somewhat on topic with people who are at least peripherally interested in the topic.
To be clear I think there is room for disagreement within the boundaries of the law and site rules and have no problem with communities existing whom I disagree with.
What's more, I'm happy that the code for at least old Reddit is open-sourced, which is why you have at least one complete clone of old Reddit with different moderation rules. Even the truly odious can have their own say on their own platform starting at about $75 a year. More if it's popular, obviously.
What I am specifically predicting is that the individuals who are perving out over kids and juveniles will be liable to turn to a platform like Reddit, because it allows topical, optionally private subforums with a friendly interface where one can share links and comment.
I'm specifically calling for individuals like that to be banned from Reddit. Let them have their own site until they end up guest-starring on To Catch a Predator.
I didn’t make the comment to your original post, nor did I necessarily agree with it. I just thought it ironic that a comment (admittedly crudely) arguing that censorship is dangerous was immediately censored.
I think the time is ripe for a YouTube competitor to take a chunk of its market share. YouTube has the impossible task of making a video-hosting platform for everybody. Kids definitely need a particularly safe space to upload and to have supervision of some sort. Even adults don't necessarily want to be subjected to the weirdness of the internet. Others really want a completely free, wild, and weird platform. It doesn't make sense for one platform to try to cater to all these groups.
My conclusion, after writing and thinking about this a lot lately, is that the three mega platforms are unwise, but unavoidable, due to network effects.
We are going to have to make a concerted effort to somehow fix this. Hyper-niched communities, like Hacker News, work so well because we are united around shared passions and goals.
Mega platforms contain many ideologies which are at odds with each other, constantly battling for their truth.
Never before in history have competing tribes had to share the same space like this. It doesn't work.
Facebook et al. go out of their way to silo their users into small cliques of influence. On YouTube, if you watch gaming videos, you get ads for gaming and get recommended gaming; you aren't going to see ads for makeup or power tools.
Any open-access community will attract hostile actors seeking to usurp it. The problem with Facebook, YouTube, et al. is that these corporations are more interested in trying to find a technical way to automate moderation than in actually moderating and policing their platforms to weed out the bad actors.
It isn't something that is inevitable, or even made worse by scale; it's just that these companies are operating to maximize profit, and needing human faces at screens to moderate public comments is a substantial expense.
I bet it's more psychologically-based than that. From collegiate safe spaces to news media bubbles, to the social networking accounts themselves where nobody honestly follows people or ideas they themselves object to, everybody wants their own gated community/echo chamber.
Personally, as big as I am on free speech and anti-censorship, I don't believe that anybody should be allowed to post pix of minors online under any circumstances, even parents harmlessly posting to their personal FB profiles, unless it's posting old pix of one's self. Otherwise, I feel it betrays the privacy rights of those minors. They may well feel differently when they come of legal age, and by all means they can post whatever they like of themselves after the fact, but in the meantime privacy should be the default.
It's not about user siloing; it's about silencing the speech of the "other". When the other is anti-vax and flat earth, that silencing seems morally right. But there are some significant unintended consequences to it.
Maybe an analogy can be drawn from two tribes being integrated into an empire. Usually the empire changes both tribes into its form, but some new gods get added to the pantheon.
Scalable web services and robust home internet with high upload rates should be the solution in my opinion. Any centralized corporate service is inherently limited. The internet should be the platform.
I agree there’s no reason it has to be a corporation. It just has to be simpler for non-tech people to do creative stuff online and have others see it. YouTube makes it extremely easy; that’s why they have such dominance.
There's definitely a market for something that gets people online in a straightforward way. I know a lot of art pros, and they seem to have no trouble learning the tech they really need, whether it's cameras, synthesizers, or RAID setups.
Does YouTube even need comments? I can't remember any meaningful discussion happening in the comments, it's mostly meme type stuff. Long term I can see this being the final resolution for toxic comments.
Exactly, on a lot of technical/tutorial videos you'll see helpful discussion, like people asking questions that weren't covered, pointing out flaws, etc.
I think you need to say what you are watching to qualify your comment about the comments.
The more intellectual the content, the more intellectual the comments. If you do a lot of your learning of code and development things on YouTube and watch conference talks online (rather than attending), then you are not going to find drivel in the comments.
Same with music, if you go through the back catalogue of your youth and find that record that only sold 5000 copies the first time around then the comments are going to be overwhelmingly positive.
Youtube doesn't recommend me any of these videos with meme grade comments or whatever these kiddo videos are that now can't have comments in them. I think I am watching different stuff to most people here who have a lot of hate for Youtube comments or maybe I am just not that sensitive.
To think there used to be a time when the BBC had a complete roster of paedophiles presenting all their music and TV shows and nobody noticed. The UK's most prolific paedophile worked for them for decades. How the pendulum of stranger-danger swings!
> I can't remember any meaningful discussion happening in the comments, it's mostly meme type stuff.
That's because you're watching pop videos. There's a goldmine of educational content on YouTube which breeds valuable comments. The problem isn't the concept of commenting; the problem is that shitty content attracts shitty comments.
A good example I've seen recently is Ronald Finger's Fiero restoration series. It's his first classic car restoration and the comments are full of good tips and friendly warnings about mistakes he makes.
Yes, I read the comments a lot on videos I watch. They tend to be more infotainment, so the comments are filled with discussion surrounding the videos. Utopias don't exist: either we put up with toxicity, or we live under censorship/dictatorship. It seems those are the only two options in the long run.
As others have mentioned, it depends a lot on the video. Often technical or informative videos will have quite a few worthwhile comments, corrections or more in-depth explanations for instance. Other kinds of videos, especially if they are opinion pieces, not so much.
Probably. Comments provide feedback for content creators, which improves quality. That helps engagement, as does being able to interact with the content creator and comment in general. That means viewers have a better experience and are likely to spend more time watching ads.
Legitimate question, does this apply to any video in which a minor appears for even a very brief period? If you're recording an hour-long video podcast, and a child of one of the hosts runs through the room in the background for one second, does that entire video get comments removed?
It means whatever Google wants it to mean, because they operate Youtube however they want. Almost nothing Google / Youtube / etc do is based on legal mandate or requirement, its all their own arbitrary decisions they think will maximize shareholder revenue / retain advertisers on the platform.
> "Is it worth to include this kid in my skit if it will reduce the revenue by 10%?"
Yeah, that answer is easy—it's not worth it. I'm more interested in what happens if it's accidental.
You mention vlogs. If a "vlogger" is recording themselves walking down the street, and a kid passes into the frame, what do they do? Do they need to scrap the clip and start over? Do they attempt to edit out the kid? Do you just never record in public anymore? Actually, that last option seems the most practical, but it robs the whole piece of its intimacy.
If Youtube elects to take the stricter approach to this, it's going to have unintended consequences that I'm not seeing anyone else talk about right now. I almost wonder if it would be better to remove public commenting entirely.
My vlogger example was rather lazy. I'm not really sure how to define the current meta of YouTube content, but "creative" is definitely not a word I would use for the majority of it.
I mean, every youtuber thinks about how to avoid being demonetized, or just about reduced profits in general.
I think your point about public filming is very valid as well. Even if the risk that your video will get caught in the filter is <5% it's still not a risk worth taking.
I watch a lot of skating videos and, unsurprisingly enough, they feature kids, sometimes as main subjects but often in the background. Filming at a skatepark is suddenly a considerable risk because a falling kid might evoke some sexual connotation in an extreme minority.
This is a lazy solution that will unfortunately just be added to the colossal folder of other lazy youtube solutions, until this medium loses the last bits of creativity it had.
1. YT has problem that generates a lot of bad press and loses money/face.
2. YT develops either easily-gameable or far-too-heavy-handed automated solution to said problem, because they can't afford to actually police their content.
3. Passionate creators (that don't get struck down mistakenly) have to jump through ever more hoops, while ad $ flocks back to YT.
4. Repeat steps 1-3.
Granted, I don't really see a solution, as there's just straight up too much content, and it's ever growing since the barrier to entry is just an internet connection.
When we were young, we had the Internet and we'd share source code and run our tiny BBSes and then our geocities pages and enjoy sharing. We told each other how when we were older and in charge we'd have such free expression. As 11 year olds we discovered goatse and survived bypassing the 18+ porn warning. We're adults now and we go on all right but we made one big mistake.
We let you censorbots take over. Good job. You took this beautiful thing we had and you ruined it.
And now you're coming for everything under the guise of taking care of the children. It took so much work to get you guys to leave video games alone and now you're here for everything else.
I would love to see a solution for this in the browser. If there was a tag, or some metadata attribute that indicated where each part of the document originally came from, you could just configure your browser to not render parts you don't trust. So in this case, YouTube could just flag the comment section as being provided by other users, and you can choose to block it if you don't trust other users to provide appropriate content. I would love to say that websites would flag ads as coming from a 3rd party ad company, and we could choose to block those too, but of course they wouldn't.
I wonder how long before they totally disable all comments. I mean, I mostly watch YouTube on my smart TV, and as far as I know, I can't read any comments on the TV. I can't even read the description of the video, or the tags, or even the number of likes/dislikes. So I guess that is the direction YouTube is headed.
It seems to me that yt could've implemented the perfect honey trap for pedos.
Make an inappropriate remark about children and (being Google) they track you down using a variety of fingerprinting techniques and report you directly to the appropriate authorities of that country.
The latter part could probably even be automated.
And yet, they've taken this stance: one that puts the onus on the content creators themselves to moderate the comments. Unfortunately, the pedos are still out there, being predators.
They already do this and so does Microsoft. Many companies have shared image finger-prints (hashes of images that can deal with resizing) and special people granted permission by the Dept of Justice to report such images. It's a small number of people, leading to at least one Microsoft employee filing a lawsuit for the PTSD he got having to verify and report illegal images.
As far as these YouTube videos, Google does get rid of all content with clear abuse and reports it to the authorities in the US and possibly the origin country. For this particular case, none of these videos have illegal content. Most of them are videos of teens filming themselves. It's others who go through and place them into a creepy context by grouping all of them together in playlists and via comment-chain links.
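The shared fingerprints mentioned above are perceptual hashes; Microsoft's PhotoDNA is the best-known example, and its details are not public. As a toy stand-in only, the classic "average hash" illustrates the key property of surviving resizing: downscale to a fixed grid, then record which cells are brighter than average. This sketch is illustrative and is not the actual algorithm any of these companies use:

```python
def average_hash(pixels, size=8):
    """Toy perceptual hash: downscale a grayscale image (list of
    rows of brightness values) to size x size by block averaging,
    then set a bit for each cell brighter than the overall mean.
    Uniformly resized copies of an image map to the same bits."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # average the block of source pixels mapped to this cell
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return sum(1 << i for i, v in enumerate(cells) if v > mean)

def hamming(a, b):
    """Number of differing bits between two hashes; small distance
    means the images are probably copies of each other."""
    return bin(a ^ b).count("1")
```

Because matching happens on these compact hashes rather than the images themselves, a clearinghouse can distribute fingerprints of known illegal material without distributing the material.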
Creating any type of honeypot is most likely unethical and illegal. Just look at what happened to the FBI in the Playpen case.
> It's others who go through and place them into a creepy context by grouping all of them together in playlists and via comment-chain links.
Wanted to pull this out as (IMO) the most important part of your comment. There's nothing for the police to go after, because nothing illegal was happening here. The videos themselves were 100% benign, and while the comments were creepy and awful, they were also free speech.
So they cannot be censored, sure. But can they not be probable cause for a temporary tap on their internet connection? And if that reveals stuff like https connections to unpopular sites whose content can't be viewed publicly, Tor traffic, maybe torrent traffic, then that might be suspicious enough to bust the door while the connections are going on.
I'm against dragnet surveillance and all, but comments that are clearly predatory are a red enough flag that even I start to think it warranted to escalate step by step, and continue depending on what is found at each step.
Of course, this entirely depends on whether the comments are "oh look at that sweetie, probably has a nice puss" or just "oh look at that sweetie" interpreted the wrong way. I'm on the second page of HN comments and have read three linked articles about it, and nobody has mentioned what it's all about other than "predatory comments on videos involving minors".
Edit: As expected, Dutch news coverage is more explicit than the prudish American coverage. An example comment is: "$time $camel_emoji toe... Then again at $time". Another example: "Hi honey baby where r u from??". The last example is just "Love you" with a bunch of emojis like hearts and presents. Another website mentions the kids were called "godess" [sic] or "barbie". This sounds like it would be extraordinarily easy to just build a blacklist of words that trigger new comments to land in a moderation queue (processed either by youtube itself or by the video owner)... not sure what the difficulty is exactly. It seems harder to detect kids in videos than to build a blacklist of these comments, except that the former can be automated and the latter is ongoing moderation (probably too simple for Google: "if you can't automate it, it can never scale, so it can never be a solution").
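The blacklist-plus-moderation-queue idea in the comment above could be sketched roughly as follows. The terms and thresholds here are purely hypothetical, taken from the examples quoted, and a real deployment would need a far larger, continuously maintained list with per-language variants:

```python
import re

# Hypothetical blacklist built from the example comments above; a
# real system would need far more than this, in many languages.
BLACKLIST = {"godess", "goddess", "barbie", "honey baby", "sweetie"}

# Matches timestamp deep links like "4:37"; repeated timestamp links
# into a video were one of the reported abuse patterns.
TIMESTAMP = re.compile(r"\b\d{1,2}:\d{2}\b")

def needs_review(comment, video_has_minors):
    """Route a new comment into a moderation queue (instead of
    publishing it immediately) when it trips a simple signal."""
    if not video_has_minors:
        return False
    text = comment.lower()
    if any(term in text for term in BLACKLIST):
        return True
    # two or more timestamp deep links in one comment is suspicious
    return len(TIMESTAMP.findall(text)) >= 2
```

Whether the queue is processed by youtube or by the video owner, the point is that nothing matching the filter ever appears publicly without a human look first.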
There is no financial incentive for Google to do this, so they won't. Googles job is making money for shareholders, not making the world a better place.
It would be nice if governments were tech savvy, interested in the greater good, and invested enough in their citizens' wellbeing to legislate this, but that's even more farfetched.
You know they're an advertising company? Unless you're offering ways to improve their predation on consumers I can't imagine they'd care to hear what you have to say.
They're not protecting anyone, they're just sweeping the problem under the carpet. The real problem is naive/ignorant/reckless parents allowing kids to upload their stuff for the world to watch.
I'm pretty sure many don't even know; after a certain age, kids can be pretty sneaky if they want to be. We only had a single home computer, in the living room, yet I still accessed plenty of shit my parents had no idea about. And nowadays, there are always those school friends with smartphones...
That's the easy solution, but the easy solution is hardly ever the right one. There are plenty of reasons why commenting on videos can be useful. Lectures and other educational content typically have corrections and other useful information as the top comments.
I’m going to say something that will likely get me downvoted like crazy here on HN, to salvage some karma let me first say that I love the first amendment and free speech.
Now, as a parent, I am absolutely disgusted that a YouTube “gang” was recently discovered pushing comments promoting pedophilia, in plain sight, tagging comments so algorithms could pick them up and other folks could easily find them.
Secondly, I am disgusted by the fact that there are videos embedded within Peppa Pig videos on YouTube, in the middle, pushing kids to harm themselves, and it is impossible to detect them. Google these things; they are all real.
Look I get it, I’m a parent, I’m responsible for my child so the YouTube app is gone, she will not get access to it until she’s old enough. That still does not make this stuff ok, these are terrible human beings creating this stuff. I applaud YouTube for stepping in but I still won’t install their app anymore, this does not mean it is ok for their platform to become a great place for that sort of stuff no platform should ever serve that.
When folks on here comment and say one parent's voice outnumbers 100 others: why the hell not, if this content is targeted at our kids?
Edit:
Exhibit A: (This one is apparently fake news, sorry. I was fooled myself a few days ago.)
While “momo” may be a hoax, there are videos with adults in them embedded within these cartoon videos. That was just the first result I got when I googled; while there may be a ton of fake news these days, these mashups I have seen myself. Sorry if I hurt my own point with that first link, but the point is very valid.
That said, it's not as if the comments will be missed. I've been using the extension "Distraction Free for YouTube" for the last couple of weeks, which includes an option to remove all comments on all videos, plus every other distraction. I've been enjoying my YouTube time a lot more since I started using it. Amazingly clean and refreshing experience. Now I'm on the hunt for similar extensions for other parts of the web.
Is this something that can be put on the shoulders of the video uploader? Make it incumbent on the uploader to moderate their comment sections instead of just yanking the comment sections altogether?
I understand YT's concern. The Feds find child porn discussions on your website, they're not gonna simply ignore it, any more than they ignored "escorts" on craigslist and backpage. I get that part. But instead of going down this road where you just delete it for everyone, tell people that they are now responsible for the content on their videos. We find these child porn talks on your videos we yank your channel, and report it to the Feds. Simple as that.
I don't know? That probably has some drawbacks too, I don't know? I'm just trying to find anything that can be done to protect us, not necessarily from the douchebags, but from the worsening consequences that the douchebags seem to have on everyone else in the community.
Here's another signal their abuse team could be looking into: on videos where a lot of people are following links to a specific timestamp, grab a couple of frames starting at that point and run them through the AI to see if they contain children, and if so, put them on a review queue that's sorted by "popularity".
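That proposed signal could be sketched as below. Both `grab_frame` and `classify_frame` are hypothetical stand-ins for a frame extractor and a child-detection model, neither of which exists here; the click threshold is equally made up:

```python
import heapq

def build_review_queue(timestamp_clicks, classify_frame, grab_frame,
                       min_clicks=50, top_n=100):
    """timestamp_clicks maps (video_id, seconds) -> number of viewers
    who followed a deep link to that moment. For popular deep links,
    pull a frame and, if the model flags a child on screen, queue the
    spot for human review, most-clicked first."""
    queue = []
    for (video_id, seconds), clicks in timestamp_clicks.items():
        if clicks < min_clicks:
            continue  # ignore rarely-followed deep links
        frame = grab_frame(video_id, seconds)
        if classify_frame(frame):  # model says a child is on screen
            # negate clicks so the min-heap yields most popular first
            heapq.heappush(queue, (-clicks, video_id, seconds))
    return [heapq.heappop(queue) for _ in range(min(top_n, len(queue)))]
```

The human reviewer, not the model, makes the final call; the signal only decides what gets looked at first.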
Google manages to impose the Google Death Penalty on people for supposedly violating AdSense terms or having an unpaid Google Cloud account or any number of other reasons.
Is there some reason they can't require a verified account to comment and nuke any account found to be making inappropriate comments? That sure would put a stop to it.
I am reminded that we are still doing the distributed web all wrong. We don't need comments sections.
Bear with me for a moment, but PageRank is the start of the problem. It's basically measuring link distance from the Stanford University web site: the closer you are, the more juice you get. But it assumes that a single domain is a single publisher (and so all articles on a given site get the same google juice, the same rising tide).
But facebook (and many, many other sites) lend their domain's google juice to all comers, so that if you rationally want the greatest reach (i.e. the highest rank in a search engine) you publish not on your own domain but on facebook's (or medium's, etc.).
But this leads to many unintended consequences- the facebook moderation problem being one.
If we see facebook not as a monolith but firstly as a web space provider, then an indexer over that webspace, and then an algorithmic feed provider, we see dramatically different ways of dealing with the problem. If we should give people webspace to put their kids' pictures on and share them (facebook's main use case, arguably), then "normally" that space would be under the user's own domain.
So why not have a more granular PageRank? Why not have google assign juice to facebook.com/paulbrian separately from the juice assigned to facebook.com/paulgraham?
Then we see no reason to host the webpages under facebook.com: if I need to "earn" my google juice just as I would on a normal domain, there's no benefit.
I see comments sections in a similar vein. Having my comments stored on youtube's pages/domain is just not how it should be: they are my comments, so they go on my server/web space and link up to the video. That way google juice tells us which people are worth reading and which are hate-filled garbage, based on their reputation.
I need to flesh this out some more, but we do seem to have gone wrong.
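For what it's worth, nothing in PageRank itself forces domain-level granularity; the standard power-iteration formulation works over whatever node identifiers you feed it. A minimal illustrative sketch (this ignores dangling-node mass redistribution and other refinements of the real algorithm):

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal PageRank power iteration. `links` maps a node name to
    the list of node names it links to; nodes can be at any
    granularity you choose, full URLs included."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # every node keeps a baseline share, then receives a portion
        # of each linking node's rank, split across its out-links
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank
```

Ranking facebook.com/paulbrian separately from facebook.com/paulgraham is then just a matter of using full paths rather than bare domains as the node keys.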
This is true, as it doesn't need to be a matter regarding only the creeps. I was thinking sociopolitically, where adults, related or not, make the decision to use minors as props, whether for a thing as big as ad campaigns or as small as social brownie points. As a grownup, if someone were to post my image without my express permission I could pull some legality in my favor. Children generally don't have the same resources, lacking their own guardianship. I'm just saying that until they can legally make such decisions for themselves, privacy should be the standard, to save them from embarrassment or abuse. We must all decide for ourselves what to share of ourselves.
If not a good idea then it seems to me to be a very practical one (despite it being censorship).
As nowadays no matter how carefully one chooses one's words or how innocent one's thoughts are about such matters, someone somewhere will misconstrue them and take offense.
Shame, but that's the way of the world these days.
This is just idiotic. Rather than removing videos that creeps are into, or in any way addressing the situation where their platform is being used by creeps, they are just removing the evidence that creeps are there so we won't be able to embarrass them by pointing out the creeps using their platform to be perverted creeps.
So the creeps will keep creeping on kids, and nobody will know except Youtube, since they have all the logs. Nobody will be able to screencap an advertisement on a video with some pedophile's comments anymore, so the problem is solved as far as Youtube is concerned.
> Why not require the watcher to enable a webcam & mic while watching videos containing children and create an algorithm to detect wanking?
Just how far are we willing to go to make ourselves feel safer?! If you think that recording viewers is an acceptable and proportional solution to this issue, then you're sicker than the creeps who get off on videos of children.
I would start by closing all the accounts that made creep comments on children's videos, or showed a pattern of viewing videos that are obviously intended for that audience, and banning them from creating other accounts (hard to enforce, but at least they can hold that in their pocket if one of these creeps shows up on their radar again), and then forwarding their information to law enforcement. Lots, or almost all probably, of those accounts can be linked to a real person (either by the username or their IP). If police knock on some doors a lot of these guys are going to be too stupid to lawyer up and will probably confess to much worse things or agree to hand their laptop over for inspection after a couple of questions.
Additionally, by a simple analysis of what videos are overwhelmingly viewed by accounts that view other inappropriate videos they could detect and remove almost all the videos that creeps are into. The original video which blew the whistle on this even pointed out that the youtube recommendation algorithm had detected this pattern of behavior, and once the person viewed a couple of these child exploitation videos they were recommended hundreds and hundreds of others.
> I would start by closing all the accounts that made creep comments on children's videos, or showed a pattern of viewing videos that are obviously intended for that audience, and banning them from creating other accounts
While banning accounts that post multiple comments with questionable timestamps may be a somewhat reasonable way to proceed, just banning people based on the viewing patterns they exhibit will lead to an unacceptably large percentage of false positives. Just think back to how many times your feed got filled up with videos on a certain topic because youtube misclassified you somehow. The issue is further exacerbated when the same account is used by multiple people (a family computer in some living room, an adult's phone used as a pacifier, and other situations in which viewing patterns get muddled).
Just permanently banning people on patterns alone is a very bad idea.
>(hard to enforce, but at least they can hold that in their pocket if one of these creeps shows up on their radar again), and then forwarding their information to law enforcement. Lots, or almost all probably, of those accounts can be linked to a real person (either by the username or their IP). If police knock on some doors a lot of these guys are going to be too stupid to lawyer up and will probably confess to much worse things or agree to hand their laptop over for inspection after a couple of questions.
The thing is, what they're doing, while sickening, is not illegal. I strongly doubt law enforcement would be interested in getting flooded with reports of non-crimes that have a high false positive rate. I also doubt people will take kindly to being misreported to the police.
I personally believe that the issue of people creeping off videos can't be addressed without causing unacceptable (and sometimes significant) collateral damage to others. I think the damage done from banning comments on videos of children will far outweigh the damage caused by those comments themselves.
People should evaluate this issue rationally and not overreact.
I feel this is unfair to content creators and to the subset of viewers who comment decently (which I think is actually the majority of viewers).
I know of innocent content creators who have faced this, although their content was not child-oriented and showed their own children for barely 10-15 minutes across their total content time of 500+ videos. Banning comments has consequences such as viewers not being able to express sympathy or appreciation when it might help. As a viewer, comments introduced me to many new channels and concepts and cultures - all that is lost now.
Now no one will be able to verify for themselves the scale/severity of a comment problem anymore, or learn from it. The evidence is gone.
Might have been an interesting thing to dive into for a while, look dispassionately at, think about, and to learn something. But no.
All you can get now is some dramatized version of reality from third parties, who can barely distinguish between crime, pedophilia, child abuse, and inappropriate comments online, or be bothered to investigate the problem in any depth. Perhaps it's too soon for that.
He happened to be married and have children.
Youtube disabled all comments on all of his videos because like one or two feature his kids ...
But 99% of his content doesn't feature them
It seems YouTube is doing a lot these days to the detriment of creators and vloggers. YouTube is by far the best platform out there for posting videos, but are there any viable alternatives? I think we need another YouTube. Vimeo comes to mind but it's a bit fringe (mostly weird videos), and I'm specifically looking for creator videos. Maybe this is a startup opportunity for an eager HN-er, although I suspect running a video site is insanely expensive.
Are they going to require an ID system so the age of everyone in the entire video can be verified? I guess 2/3 of YT vids will have comments disabled? What about all those super popular channels featuring family vacations? Not even documentaries can have comments?
I honestly don't believe that this headline is true, but if it is... Good opportunity for a YT competitor to spring up.
I don't have the stats to back this up, but I imagine a significant portion of unsavory comments come from people/bots that didn't actually watch much of the video. I think it would work in YouTube's and creators' favor to disable adding a comment until you have watched a large portion of the video or loaded all the ads.
You think giving the attention-seeking 13 year old uploader the ability to approve comments that praise her ability to lick a popsicle is a solution? These kids don't understand what is going on.
If only they could develop some kind of machine learning AI tech that could make highly suspicious comments require approval.
I guess they just aren't sophisticated enough to do that... But I guess they must have AI that can identify kids in videos accurately (even 17 1/2 year olds).
Ah yes, halting conversation for our own good. We see this on reddit all the time. The conversation gets hot so the moderator locks the thread.
How does this help anything? I've actually asked. And I never got a straight answer. I figure it's some kind of moderator-loss-of-control-anxiety-reaction.
Have there been any studies on comment quality at free vs. subscription (paid) sites?
Perhaps Patreon should just add comments and subscribers can comment there. Content creators could link to it and it would be read-only unless you pay.
This doesn't seem to be getting a lot of play, but isn't it important to note that YouTube has been generating ad revenue by serving this content to this audience? It seems like they are only stopping this practice because the amount of revenue that they will lose from Nestle, Disney, etc. exceeds whatever amount of revenue they had been generating by aggregating these videos and serving them to this specific audience.
Preventing the aggregation and serving of this class of content seems like the larger problem.
It's also not clear to me if the removal of the comments will prevent the aggregation of these videos. My suspicion is that even with the comments removed, these videos will still be grouped together and will still garner the same undesirable audience. I don't see any talk of somehow clearing whatever history Google has collected that has caused them to be clumped together in the sidebar.
How is it possible that it's easier for YouTube to identify children (people under the age of 18) in videos, compared to identifying certain types of comments and automatically moderate those?
Or is it some law regarding censorship that's the issue?
Children's appearance is not likely to change, in ever more sophisticated ways, so as to still appear like children to humans but not to the algorithm. Thus, while not intrinsically harder than text analysis, it could very easily be the case that it is easier to flag videos as having children (and thus turn off comments) than it is to automatically police comments.
Note that they said it won't apply to videos of teenagers; my guess is that this is because it would be more difficult to automatically distinguish between teenagers and young (or just young-looking) adults.
Everyone seems to be blaming Google, but can we acknowledge that this wouldn't even be a problem if people weren't jerks? The only reason Google even has to do anything is because people are posting predatory comments.
And while at it, add forced moderation of comments for the rest of the videos, where the channel owner needs to approve, and be responsible for, any comments shown on their videos. And Facebook too.
Here's a crazy idea, why not only let children comment on children videos? This way, kids that want to still express themselves and connect to their peers can do so on YT.
I don't know how to feel about this. The concept of "commenting on a web page" is old, but with so many people connected to the internet, do we really need a comment section on every page? From my point of view, comments are most of the time irrelevant, especially on Youtube. HN would be the exception.
Comments involve a lot of energy to moderate, for no value most of the time. I know it's nice to have feedback when you create something and share it with the world, but maybe we don't need a comment system to have user feedback!
Comments and other forms of p2p interaction are generally added to create a community. People don't come back to a site/store/place if they don't get a sense of community.
Comments are also generally very hard to moderate by an automated system (as also shown here).
I like twitch's model where the channel owner and its moderators are responsible for the kind of community they want in that channel. There are diverse kinds of community on twitch as a result.
Youtube could probably incorporate some version of this for small-to-medium channels and it would work better than the current cesspool.
Twitch's model is terrible. Streamers have literally been banned for their chat going to other streams and causing trouble. Think about this for a bit. That's making the streamer responsible for the viewers, but they have no actual ability to limit what their viewers do outside of their stream.
Twitch only works, because they only apply their rules randomly to some people and only sometimes.
My thoughts were along the same lines. Why exactly are comments allowed on YouTube at all? Literally all humans are aware that YouTube comments are a cesspool.
I watched the original expose video of this and I found the comments be disturbing, but I don't expect any platform to perfectly prevent comments. The more damning part for YouTube was how the recommendation engine was clustering the videos based on pedo preferences and automatically creating little pedo playlists. To me, that is their bigger fault as a platform is that they are built on automatically giving people more of what they like. Even pedos.
Implying that Reddit comments are any less of a cesspool. Reddit hosts a lot of quality discussion from small niche communities but the same is true for YT.
If anyone can figure out how to maintain quality discussion at the scale of thousands of speakers they'll be a billionaire overnight.
Alright, cool. Now all the pedophiles can lurk in secrecy and collect all the recordings and repost them on pedophile websites, and YouTube can pretend it has nothing to do with it because none of the offending comments are under the videos making it abundantly obvious who views these videos.
They're worried about the Feds in the same way that craigslist was worried about the Feds when they scrapped their "escorts" section. (And the same way that Backpage should have been worried about the Feds when they hosted their "escorts" section.) If the Feds find that pedophilia stuff on your servers, you better have a good lawyer. Which I assume YT does, and just like craigslist, they have concluded that the nuclear option is the one that will keep the maximum number of executives out of federal court.
My question is this, is there a better way? Is there a way to prevent these undesirable comments other than just taking comments down for everyone?
I understand YT's position. They have to make money, and their executives would just as soon not receive the Backpage treatment. That said, is there a better way?
Fair enough. And I know companies need to act in the best interests of their shareholders. But shouldn’t a backpage exist? We need companies to provide services that the government (which is still stuck in the dark) doesn’t approve of: prostitution, drugs, etc.
The march to make the world a safe, sanitised place has so many bumps and slippery slopes on the road that it's hard to gauge whether we're making progress or not. Internet is hard to tame and so is human nature, both in good and bad.
The trigger for this was some big companies pulling ads. I suppose that in the context of "predatory and obscene" comments regarding children it's well understandable. The problem with that is what happens next time when the advertisers pull ads because there is something else they don't like? Eventually, it leads down to political correctness and self-censorship as there is always someone who gets offended, and even the possibility of getting kicked out of Youtube or the service losing ad revenue keeps everyone on their toes.
Further, disabling comments fails to address another issue: pictures and videos of children freely available on the internet. That cat is out of the bag for good. The technical response would be to have separate (private or public) paedophile forums where they just link to Youtube videos and comment on them there. That's outside of Youtube's business and keeps the BigCos from having bad PR, but it would still matter to any underage person shown in an internet video.
So the real enabler to this is the proliferation of user-published content. Especially when people seem to like to share stuff online, inevitably also with strangers, it is even conceptually pretty hard to limit the audience of that sort of material. Fighting that process is like the war on drugs: it will just claim casualties one after another, left and right, whereas nothing really changes. While trying to make things nice and clean for everyone, babies get thrown out with the bath water.
Dirt on internet is like a law of nature. There has always been dirt, sometimes more and sometimes less. While paedophilia is the true hot potato nobody really feels comfortable with, the other dirt is a valuable cross-section into the scale on which people are. Bad (Youtube) comments can be so revealing at times, and they really pull you out of your bubble.
I don't like the general process of eliminating dirt and trying to set a political correct and business-approved bar for quality because what remains after that is shallow reflections of actual persons and human psyche.
I don't know what YouTube should do either. The concept of brand companies disliking that their advertisements appear next to offensive third-party comments is absurd in the first place, so how can you deal with it?
I can understand companies wanting to whitelist or blacklist the specific categories and channels where their ads appear, but getting upset over comments completely unrelated to the advertisements is where things get hard. That is like pulling ads from a magazine because the magazine happened to print pictures of children and a group of molesters happened to sit in a pub making predatory comments about those pictures next to the advertisement. What was previously unimaginable apparently becomes reality when it happens on the internet.
Social media scandals like this one are a reminder that there are larger numbers of closet pedophiles, racists, Nazi sympathizers, and conspiracy theorists in our societies than many people realized.
Social media gives them a means to discover each other and collaborate, but they were always there.
I agree with Youtube’s move here, but the root cause of the problem is that our societies are worse than we thought they were. Fixing that is going to take more than policing comments on websites.
"Social media scandals like this one are a reminder that there are larger numbers of closet pedophiles, racists, Nazi sympathizers, and conspiracy theorists in our societies than many people realized."
No, they really don't. Unless you thought there were zero or something. You have no evidence that would allow you to place a number on those categories just from reading these stories, other than a lower bound somewhere in the dozens. (It really doesn't take many bad actors to make stories like this.)
It is likely, however, that the Internet and modern social media makes it easier for these folks to (relatively) safely find like-minded others, and to recruit others into their particular blends of extremism.
It's not quite as bad as it looks; people say and do things online they would never do in real life.
Might want to consider requiring permanent irrevocable identities, linked with their real-world identity, for certain forums. By removing anonymity a lot of this type of behavior is suppressed.
> By removing anonymity a lot of this type of behavior is suppressed.
Are you sure about that? This sort of thing is difficult to present hard data on, but I feel like arguing for real name policies as a way towards better user behavior is a 2012-era FB talking point that hasn't panned out in practice.
I'll admit that literal pedophilia might be a special case, but requiring (and aggressively enforcing) real names on FB and FB-only embedded comment sections hasn't been anything close to a silver bullet. People are surprisingly willing to say some horrifying things under their real identities, especially if those things are considered socially acceptable in their real-life environment.
Rather than pseudoanonymity vs verified identities, I think the big differentiator is quality of manual moderation. HN is pseudoanonymous. Wikipedia is pseudoanonymous. Neither is immune from trolls and creeps, but aggressive and consistent moderation is fairly successful in maintaining quality.
Of course, YT has scaled to the point where that's not an option. Being forced into this sort of cop-out is a sign that they've grown too big to be able to police user-submitted content in any meaningful way. Maybe I'm overly optimistic, but I see abuse and moderatability as, ironically, one of the factors that might push us away from centralized monoliths.
Are you sad that viewing the content in question isn't harmful? Or that it isn't illegal? Or that YouTube—Google, IOW—can’t unilaterally imprison people whose behavior it disapproves of?
Probably? Is there an actual question about YouTube’s authority to order imprisonment? Because it seems to me that should be a bigger story than them cutting off comments on videos with kids in them.
There's no law making creepy comments like these illegal, at least not in the US. (I can't speak for other countries, and US law is what's relevant here.)
This is an extremely dangerous tendency we have as a society, to throw out the concepts of freedom and due process and justice when children are involved. There's 0 reason to "investigate" someone that leaves a pervy comment on a kid's video, and they're free to have whatever fantasies they want. Anything coming close to investigating someone for something like that is too close to the thought police and sounds like 3rd world Big Brother crap.
A Google search was not sufficient in that case. The guy's employer tipped them off after firing him - it wasn't some sort of automated keyword monitoring at Google.
The search alone wasn't sufficient, but it was enough to start it all. In YouTube's case, there are plenty of comments that demonstrate someone is a sexual predator. In my view, if someone "tipped them off", it would be the exact same scenario.
"A random person on the internet is posting creepy comments" is likely not probable cause, especially as you'd have to convince someone national like the FBI to care without a specific location.
"Someone I know personally is posting creepy comments, and I've always gotten a bad vibe about their interactions with kids" is probably enough to get local cops interested enough to knock on a door.
I hope not. Unless there is evidence of a crime that has occurred or evidence that a crime is planned, they have no business (constitutionally) bothering a citizen based on that kind of tip. This is how lives are ruined when there are no victims.
The former is an ill person; the latter a criminal. Your call for no tolerance for anyone with a certain disease, no matter whether they know it's unacceptable and can cope with suppressing it, is a little too broad in my opinion.
How do you find them? Are you going to start forcing everyone to log in to view videos? And forcing everyone with a google account to verify themselves as a genuine person, with personally identifiable information?
Anonymity brings out the worst of some people, sure, but the damage of taking away anonymity is substantially more dangerous IMO.
And really, yeah, these people are complete creeps... but how does that affect anyone?
Well, the people leaving the comments aren't actually doing anything illegal. Nor are the people that aggregate these videos with the intent to appeal to pedophiles. That's what makes this such a difficult issue to combat.
> how about putting the people making the comments behind bars?
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Ultimately, YT comments are fairly horrible. Apart from the occasional decent joke, or the even more occasional comment from the video's subject, YouTube comments are either nasty or banal... with a high % of nasty. Racism, nasty politics, sexual harassment, other kinds of harassment... The median top comment is a troll.
Of all the major social media, I can't think of a worse commenting culture.
Banning comments on videos featuring children sounds like a decent decision from "the suits" to curb the most harmful effects, but... where are the product people?
Surely, someone, at some point, has had ideas for how to improve this travesty of a discussion feature. It's not like you're risking much. Even if changes kill commenting entirely...
How could google let this be so terrible for so long?
that’s simply not true. it depends entirely on what you are watching. i follow a number of artisan channels (blacksmith, for example) and the comments are overwhelmingly positive, and often helpful.
There's something called parental controls, and last I checked they were pretty effective for concerned parents.
Google does not need to be in the business of trying to provide parental oversight over creators who just so happen to be minors.
Plus, don't we want to retain the ability to profile people who show concerning indications that they may be willing to act on an impulse to commit paedophilia?
IMHO the best course of action for dealing with pedophiles on YouTube who think they are protected by the anonymity of the internet is to develop a way for authorities to obtain enough solid evidence for a federal prosecutor to bring justifiable charges against YouTube users who knowingly attempt to proposition a minor into an act of sex.
Before I delve into this more, I also need to explicitly throw out an IANAL disclaimer: I am NOT a lawyer nor a student of law.
But theoretically speaking, from a prosecutorial point of view (and again, IANAL), it shouldn't be hard to make a logically sound legal case that a sexually provocative comment targeted at a minor who is the subject of a video, posted in that video's public forum, should be considered knowingly propositioning a minor, regardless of how vague, indirect, or seriously intended the comment may be.
https://twitter.com/chrisulmer/status/1099366622329036801
"Last night I realized all of the comments on SBSK's YouTube channel were disabled. I saw I could manually turn them back on so I did. Then I read a post by YT saying that by turning comments on I risk our channel being deleted. I love and respect YT but IDK what to do.
"The beauty of SBSK is the love and acceptance in the comment section. It shows families and individuals across the world that their [sic] are people who accept them. Many people I interview have been socially isolated. Comments can change their self perception."
So it sounds as if YouTube content creators are now in the unenviable position where they need to actively moderate the comments section for videos featuring children, and if they don't do so to YouTube's satisfaction, they could have their entire channel nuked. Even if you're pretty darn sure that your commenters will behave themselves, that doesn't sound like a good deal.
Seems like YouTube will need to come up with some sort of "trusted subscriber" designation, and allow content creators to permit comments only from those subscribers, so that any random bad actor can't swoop in and destroy a channel.
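A "trusted subscriber" designation would essentially be a per-channel allowlist with a review queue for everyone else. A minimal sketch of that idea, assuming an in-memory gate (the class and method names here are hypothetical; YouTube exposes no such API):

```python
# Hypothetical sketch of a "trusted subscriber" comment gate:
# trusted commenters publish immediately, everyone else is held
# for manual review by the channel owner.

class CommentGate:
    """Per-channel allowlist with a hold-for-review queue."""

    def __init__(self, trusted_subscribers):
        self.trusted = set(trusted_subscribers)
        self.held_for_review = []  # (author, text) pairs awaiting review

    def submit(self, author, text):
        """Publish immediately if the author is trusted; otherwise queue."""
        if author in self.trusted:
            return "published"
        self.held_for_review.append((author, text))
        return "held"

    def approve(self, author):
        """Creator vouches for a commenter; release their held comments."""
        self.trusted.add(author)
        released = [c for c in self.held_for_review if c[0] == author]
        self.held_for_review = [c for c in self.held_for_review if c[0] != author]
        return released


gate = CommentGate(trusted_subscribers=["longtime_fan"])
print(gate.submit("longtime_fan", "Great video!"))  # published
print(gate.submit("random_account", "..."))         # held
```

The point of the design is that a brand-new account can never reach the public comment section of a sensitive video until the creator has vouched for it, so a random bad actor cannot put a channel at risk.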