I think that there's an element of validation and indoctrination that is a serious concern here. In addition to a significant overall lowering of the discourse by allowing garbage-quality trolls and other horrible comments, and in addition to bullying, aggressive behavior, stalking, and other things that are broadly considered unpleasant, this kind of behavior breeds more of this behavior.
Consider someone growing up on the internet. The more they're exposed to this sort of content, the more it will be normalized in their mind. The more they're exposed to this sort of content, the more likely susceptible people are to be radicalized by it, and to grow into the same sort of troll. Hiding or suppressing low-quality content creates a herd immunity effect. It prevents a shift of the Overton Window to where "generally allowable discourse" suddenly includes timestamping the most "salacious" parts of a child's video, or telling people to actually kill themselves, or sharing their purely racist, hate-filled viewpoints.
While you may rather be exposed to it because you have the strength and capability to view it as a curiosity and a sociological study, the impressionable among us maybe deserve to not be bombarded with garbage, and we may have a moral duty to at least make some effort to minimize indoctrination and radicalization on these platforms.
Looks to me like you guys are talking past one another.
disillusioned is taking the position of The Internet. If we allow these things, things will just continue to get worse and worse. Standards exist for a reason, Gresham's Law, and the rest of it. (Including some wonderful pleas for civility and kindness that I completely agree with)
raxxorrax is taking the position of The Human Species. We have survived and evolved because we are a wild, aggressive, curious species. Whatever boundaries there are, we will push them as hard as we can. Life will find a way.
Both of you folks are correct. This is the crux of the problem with technologies like YouTube (or books, for that matter). If YouTube is a publisher, then it has an opinion, a political position, standards for what it thinks is civil, and so forth. That's great, but there's no freaking way in hell I'm going to agree to having a handful of companies decide that kind of thing for the entire planet. The idea alone is insane. I just read yesterday about people getting warnings from Twitter about things they posted 5, 6 years ago that somebody in a dictatorship found offensive last week. It's ludicrous.
If, however, they're a public forum, then they should just shut up and stop looking at the things that appear on their site. The public forum -- like books -- requires chaos, disorder, and evolutionary pressure. Can't fight mother nature. Life will evolve.
But they can't do that, can they? Because they're monetizing all of that content. So they have to keep close track of each little piece of everything.
Now they're stuck.
They want to have it both ways, and we feeble-minded folks watching this spectacle end up choosing one or the other. It's a sucker's game, and no matter which side we choose, it doesn't work. That's because the premise is broken. BigTech wants to be two things at the same time. Picking one of those things and arguing with folks who pick the other one just plays into their schtick and keeps the gravy train rolling along.
> I think there is no evidence of the mechanisms you describe.
Which part? If we are talking about this part:
>> I think that there's an element of validation and indoctrination that is a serious concern here.
Then I think there is evidence of that being true.
Take for example “incels”. They gather in online forums where they construct a worldview built entirely on misogyny, entitlement, and hate. The ideas they are spreading among themselves are much worse than what most of those people would probably come up with on their own. In their echo chambers they validate these ideas to one another and make them seem acceptable.
Those horrible people still exist either way; they just don't say out loud what they think. You don't restrict people from thinking that way, only from talking openly about it. The no-platform approach was rather successful in Germany when it came to pushing the extreme right out of the public discourse and banning their parties. But that only works as long as they don't organize a platform of their own. Then you have a far-right party rising into parliament and entering as the second-largest party in some states.
People don't change; you just filter them from your view of reality. And it's no surprise that people who bothered to look knew rather well that they existed.
No, I am literally talking about the group of “incels” that are specifically engaging in hate speech.
The people I am talking about here are outright promoting rape, violence and child abuse. Not indirectly. Not between the lines. Just straight up stating those kinds of things.
> don't we have decades of evidence from online communities showing that?
In my experience the community changes as a whole, but not the individual actors. Once a community tips the member base changes. People displeased with the behavior leave and people attracted to it join.
As you are clearly blurring the lines between this specific issue and a more general one, I'm going to address the latter and exclusively the latter. There's a wide gulf between pedophilia and appeals against the ever more amorphous 'hate speech.' So on that note, it's interesting that if you look at support for eugenics in the US in the early 20th century, it was practically a who's who of academia: the Carnegie Institution, the Rockefeller Foundation, W.E.B. Du Bois, Harvard, Stanford, etc. [1] One might argue that such views were "purely racist, hate-filled viewpoints." The very reason freedom of speech was such a revolutionary concept is that authority figures, since time immemorial, have been able to propagate bad ideas and manipulate the general population by making false statements which could not be challenged.
In contemporary times a good example of this is Iraq. Our invasion was precipitated by fabricated evidence and appeals to authority of the sort 'x intelligence agencies have proven beyond any doubt that Iraq has or is pursuing weapons of mass destruction.' An individual [2] tasked with investigating whether Iraq was trying to purchase uranium found they absolutely were not and reported as such; the government would go on to claim they were trying to purchase it anyway. After the fact, a government official outed his wife as a covert CIA operative (which the Washington Post would go on to publish), not only potentially endangering her but terminating her career.
In the case of eugenics it wasn't so much a conspiracy as that authority and academia were simply collectively wrong, as has often occurred. Censorship benefits those in power and only those in power. Those in power choosing to inhibit censorship, as happened with the First Amendment, was quite the revolution! The point of free speech is to ensure not only that with the good comes the bad, but also that with the bad comes the good. When you begin to tolerate censorship by one side, you very much risk that the side doing the censoring is the bad one. And in the effort to obtain only the good, you end up with only the bad.
In today's world, free speech and corporations are becoming a major new issue. An ever larger percentage of all human communication is digital, and digital communication is extremely monopolized. This means, for instance, that the US government could effectively circumvent the First Amendment by simply pressuring a single company rather than passing a law against the dissemination of any given viewpoint. It also means that a very small handful of people could end up censoring or otherwise manipulating public discourse for billions. That is exactly what the First Amendment sought to prevent; only the founding fathers could never have imagined a corporation (in which two people have majority control) having more power over speech than any government in the world.
This is one of the reasons I think anonymity on the Internet is a bad thing. Not that I am saying it should be banned, but it should not be the norm. Much of the toxicity simply would not happen, could be prosecuted, or could be filtered if a "real person" attribute was widely available.
Toxic comments can be just as common in non-anonymous forums and venues; Facebook produces a lot of toxicity, for example. Further, consider that the Internet is much less anonymous than twenty years ago, and that hasn't lowered the overall level of toxicity, imo. The type of person who spontaneously makes toxic comments will still make them when forced to use their real name (they'll just suffer more for it). The provocateur, who calculatedly elicits toxicity, is always going to be here too, no matter how many Russians Facebook filters.
One factor is that once one person begins attacking another, both using real names, both people have a hard time backing down, especially if they know each other in real life or are semi-public figures. For a lot of people, admitting that they are wrong is a huge hurdle, and these tend to be the people who engage in toxicity in the first place.
I think it's the other way round. Anonymity (or rather: the right to present different personas to different audiences and to give up a burned persona when you see fit) makes the internet a bearable place. It also allows people to make up their minds without standing in their own way, and to disregard hurtful comments as trolling (which they often are). Facebook painfully shows that real-name policies do not necessarily lead to more civil discussions; instead they facilitate ad hominems and expose vulnerable groups to hate everywhere, as aspects of their identity can no longer be selectively hidden. This even extends to their real lives, with their names being publicly known. Now speech needs to be controlled, because people have lost control of their personas and need to fight everywhere, instead of just when they choose to.
Facebook comments aren't much better than YouTube's, especially non-English ones that Facebook doesn't bother moderating at all. With about 97% of social market share in my country and a real-name policy, it should be a breeze to use it. It's absolutely not.
As for prosecution, that only happens if your country decides to give a fuck. There's a case in which a soldier called for a journalist to be raped and killed in a public Facebook post using his real name. He didn't face any consequences despite the story being picked up and screenshots of his public Facebook status being shared all over the news.
Death threats, hate crimes, separatist movements, fake news... A real-name policy stops absolutely nothing.
You are not the first person to think this, and some even put it into practice [0] hoping it would frontload forum moderation, but couldn't follow through with it [1].
As another commenter pointed out, facebook (and facebook-based commenting) has plenty of trolls in public comment sections. Anonymity is not what makes people jerks. More real reasons why internet forums enable trolls include:
- the asynchronicity of negative feedback: our primitive brains don't get to closely associate the negative feedback with the moment we expressed something in a socially unacceptable manner
- the lower bandwidth of negative feedback: IRL involves awkward silences, dirty looks, snickering, people turning up their nose at you, telling their children not to be like you within earshot, etc. All of this is suppressed over the internet, at most summarised by a downvote (which some trolls feed off of as positive feedback, knowing they ticked someone off, but don't face immediate social consequence). The trolls don't get conditioned to avoid doing things that hurt other people or are socially unacceptable.
I intentionally didn't list that, because that's something else that's often cited as a reason why people aren't jerks in real life compared to the internet, but I don't think that's true in most civilized cultures.
If I'm a jerk in public, outside of racist/prejudiced violence or a mentally unstable antagonist, I'm not at serious risk of being physically attacked unless I start breaking laws or pose a notable threat to someone else. Because about 90% of the less edgy side of trolling (just being annoying or verbally mean, mostly) won't really draw physical violence in meatspace, you'll just be treated like a jerk or risk being politely asked to shut up or leave by an authority figure. Yes there's a risk of physical violence, but it's overplayed as a reason why people aren't trolls in real life as much as they are online, and I think it's significantly superseded by the things I mentioned, and probably other things too, which stop people from being jerks far before physical violence needs to be brought to the scene.
you need only look as far back as the covington incident to see that 'real people' are hardly inhibited from vile and abusive behavior just as long as they believe themselves to be on the right side of the mob. this has been the case since time immemorial. in my admittedly anecdotal experience, fully anon communities are far more respectful & have a better standard of decorum than communities with high standards of verification, even and perhaps especially in dealing with the sorts of topics that tend to make more highly regulated forums collapse into flame wars. real name communities are prone to witch hunts, purity tests and all sorts of hysteria full of very real life consequences. anons just call you a retarded faggot.
Not sure why this was flag killed. The Covington incident is a good example. A controversial event occurred and everyone from every 'side' was inflammatory and no one maintained civil standards.
EDIT
I guess flagging indicates part of the problem. Society is so busy screaming at ourselves that the only thing reasonable people think they can do is check out of controversial topics.
EDIT
Would any downvoters care to engage about why they disagree?
HN has many great qualities, but some of the most important conversations we need to have are too emotional and inherently divisive to be connected to any kind of social currency system. there are times when identity itself is simply too much of a barrier.
the heavily moderated upvote/downvote system of prioritizing content is great when dealing with technical topics that aren't too emotional and don't challenge cultural established norms too heavily. however there are many aspects of life that are too messy, too heated & too shameful to be effectively dealt with in this format. these tend to turn into shadow topics that are better out of sight & out of mind, only discussed behind closed doors where the risk of vulnerability is limited and even then they still carry potential repercussions. in 'real name only' networks there are far too many risks to ever be a real person at all so we cultivate a brand that is the best & most superficial version of ourselves. we hide our pain, our doubts, our regrets. this extends to pseudonymous networks as well and anywhere that a cumulative identity is present. very few people want to be recognized as assholes and no one wants to be a pariah, a victim or a failure even if they really are. in a sense the only time you're really talking to anyone is when you've both stripped your personhood away and have nothing to lose.
Toxicity is a non-issue compared to the huge disadvantage of losing anonymity. In fact, you're feeding the trolls by giving them hints of ethnicity, gender, etc.
A real-name policy also wouldn't address the actual problem. Toxic behavior is possible because on the internet you can write angry comments to a person on the other side of the globe and then forget about them tomorrow. If that person does come back tomorrow, it doesn't matter whether they are called "hahaurmom" or "Jonathan Willis".
The word you want is "impunity" not "anonymity". Because awful behaviour persists in actively de-anonymized contexts (eg: Facebook) when there's a power imbalance that puts society (or a local subset of it) on the side of the troll and against the victim.