You only think that's a good idea because you don't believe in flat earth. They could equally put a link to flat earth sites under NASA/SpaceX videos. Do you really want to decree that whatever they link is "true" just because you agree with them in this particular instance?
I try not to consider things in terms of absolutes, so to answer your direct question: yes, I am pretty comfortable believing the vast majority of such links would be substantially factual.
Google already does this, so I don't see the issue with it hypothetically happening on YouTube. If you search for a well-known person, place, or thing, you'll be presented with information aggregated from trustworthy sources. Can they be wrong? Sure, and it happens. It's not sinister.
It sounds like you're envisioning someone manually tagging things. That would be odd and suboptimal. Instead, just attach Wikipedia pages that reflect the broad scientific consensus to relevant videos.
When I am objectively wrong, I want my mind to be changed. I expect this to be the case for around 10% of my ‘knowledge’, even on topics I care to educate myself about, and much worse on other issues.
Putting a link under the videos isn’t likely to achieve that, but that’s a separate issue.
The only problem I have with Google doing this is that I trust corporations and government about the same — i.e. that both will lie and dissemble as much as they can get away with for their own or their leaders' benefit, without regard for my interests.
You're missing the point. Most things in life aren't settled as easily as the flat Earth debate. Google could also do this on something far more controversial (e.g. political) and justify it in the same manner. Imagine if Google denied climate change and every video about it got a link to some website claiming it isn't real. That wouldn't be acceptable, would it? But being okay with Google doing this for flat earth also makes it more acceptable for Google to do it for climate change.
The recent past (e.g. in US politics) has shown that our society needs mechanisms to incentivize consensus. Google promoting fake information would get appropriate push-back and in the end help form consensus. Yes, even on political topics I'd be fine with that, as long as they try to stay fact based. There is a lot of room for honest actors to discuss ideas. But I'm fine with society (including companies like Google) pushing back once people (including politicians) go crazy with stupid positions, likely in an attempt to widen the Overton window and redefine the "reasonable center".
> But being okay with Google doing this for flat-earth also makes it more acceptable for Google to do it for climate change.
Sounds great, I'm all for it.
Sure the truth can be complicated, but I fail to see how implementing software that auto-links to a relevant article on Wikipedia causes Google to be the arbiter of truth.
No, but I reject the implied premise. I don't just think Google should endorse the things I agree with, and I think characterizing it that way is disingenuous. Rather, Google should endorse claims on which recognized experts have substantial consensus.
There are no credible sources backing the claim that climate change isn't real. That's a hill I'll happily die on. There are plenty of alternative sources which make that claim, but Google does not aggregate facts from them for presentation to its users, because they don't have evidence.
Like I've said elsewhere in this thread, this isn't a revolutionary idea. Google does it on its search engine and the sky hasn't fallen. I remain unconvinced it would fail if they pushed it out to YouTube.
That sounds like a leading question for rhetorical purposes - is this something Google actually did, or are we speaking purely of hypotheticals here?
Note the solution I posed is something Google already does on its search engine without calamity, so I don't see a reason why it would fail for YouTube. In contrast, the example you're giving seems pretty hard to address by linking to an authoritative source.
Put another way, I'm not advocating for Google to arbitrate the truth on a case by case basis. I'm advocating for Google to identify ahead of time which sources are well-researched and trustworthy, then outsource its fact-linking system to those sources.
If Google were to supply facts on a case by case basis that would be suspect. But that's not how the company operates, so I'm deeply skeptical they would become some kind of arbiter of truth.
You can’t abstract away from the content to the pure form of an action, and then posit that there’s no observable difference between two (very much different) examples.
Ex: "If banks stop allowing strangers to withdraw money from my account, how will I ever get my money?"
And, no, questions of fact aren’t different. The earth is round, not flat. One tree does not a forest make, but a thousand does. Even if we can’t agree on the specific cutoff (is 15 trees a forest? 50? 500?), that does not prevent us from accurately describing the extremes.
Do you realise how silly this line of argument is? Why exactly should we (or Google, for that matter) not recognise that there are differences between some actions? That some things are good and some things are bad? In what world could advertising flat earth theories as legitimate be equivalent to linking to established science, unless you are yourself a flat earther?
This idea that because nobody has a monopoly on the truth we can't make decisions is utterly futile and silly; I have no idea where it originates, except perhaps in the darkest places where reason has utterly collapsed.
Of course, every authority since the beginning of civilization has carried this mantle in justification for censorship of all sorts. Having Google decide what information can and can't be shared on their platform (read: utility) is a dangerous state of affairs. What new social movement, or recognition of a current injustice, will be stifled due to a status-quo bias codified by such top-down control over the media? We can't know from where we currently stand, which is what makes such control dangerous.
The fact that a reason is used (and has been used) to justify censorship unjustly does not mean that the reason itself is invalid, or that there is no such thing as good censorship; most people agree that some censorship (of threats or child pornography etc.) can be a great positive force.
There is no sound reasoning behind the idea that because we can't perfectly draw the line where it is blurry, we therefore can't do anything at all. I'd also question whether free speech is intrinsically more valuable than other kinds of action. I have seen no convincing reason to think so.
We can approve of good actions (like putting a Wikipedia link about earth science under a flat earther's video) and disapprove of bad ones (like putting flat earth propaganda underneath a scientific video). I see no issue here.
The debate isn't whether we can do any sort of "good" censoring, but whether we should do any censoring at all. (Just to be clear, I'm narrowing the scope under discussion to ideas. Of course things like child pornography should be censored due to the direct harm.) I reject the idea that society should welcome some authority having control over ideas such that ones deemed "bad" enough by a large enough majority should be actively suppressed. The "good" we presume can be done by shielding people from bad ideas does not outweigh the fundamental right of expression and communication.
Your claim is that some form of censorship may be permissible if the harm is direct, but I think this carries with it a certain ideological slant - what counts as 'direct' and 'indirect' harm has vastly different consequences. For instance, prevention of direct harm may be sufficient to protect children, but it probably isn't enough to prevent the proliferation of racist or sexist ideas, which have historically led to widespread oppression on those fronts. What is your threshold for harm?
Here we see the vacuity of the harm principle: one can claim anything is (or isn't) harmful in order to attach their favorite idea to it. As an example, some people may be said to be harmed simply by the knowledge that someone is watching pornography in their house. You'd likely say that doesn't "count" as harm - well then, what does? As it turns out, controlling speech under your schema is simply a matter of defining what counts as harm and what doesn't. Yet as philosophers such as Joel Feinberg and Catharine MacKinnon have pointed out, very few people (if any) would like to live in a society in which only harmful speech (or acts, since there is no meaningful distinction between speech and acts short of invoking mind-body dualism) is prohibited.
Then we get to ideas: who's to say that threats or child porn can't carry ideas in them? In censoring them, aren't we censoring ideas too? Some would say the idea that "it's not so bad to have sex with children" is encoded in every instance of child pornography. What if I made my threat into an art piece?
Your argument is unmoving. Child pornography isn't speech, nor is it an idea. Images are records of events, and the dissemination of such records can be directly harmful. There is no ambiguity about the harm principle to be mined from this example.
> In censoring them, aren't we censoring ideas too?
Ideas are by definition abstract and so they should be communicable through some other medium.
You've managed to circumvent my entire post and you're still wrong; my point was that ideas are communicated through a medium, and they can even be communicated through, for instance, threats and pornography. You have given no convincing reason to single out child-pornographic images for censorship while allowing others, such as regular pornography. What differentiates the free speech content of child pornography from other pornography, or even art which required harm in its creation?
Obviously I'm not defending child pornography here, but I think there's a logical flaw in your reasoning.
The fact that ideas can be communicated through other media is irrelevant, since it would mean that we can censor whatever ideas we like in any major medium (e.g. ideas conveyed in photography and film) while only farcically permitting them elsewhere (e.g. allowing the idea to be expressed only through spoken words).
>and they can even be communicated through, for instance, threats and pornography.
But censoring one particular medium is not censoring the idea. So your attempt at finding a contradiction doesn't hold water.
>What differentiates the free speech content of child pornography from other pornography, or even art which required harm in its creation?
Consent.
>since it would mean that we can censor whatever ideas we like in any major category
This doesn't follow from my argument that censoring one particular medium is OK. Child porn is a genuine special case (direct harm in its production, lack of consent in its dissemination) that doesn't transfer to other media that don't have the same problems.
Censoring a medium is an instance of censoring the idea, and if censoring one particular medium is permissible, then any number of media may be therefore censored.
>This doesn't follow from my argument that censoring one particular medium is OK.
It does, since by your own admission, censoring a particular medium does not entail censoring the idea.
>Child porn is a genuine special case (direct harm in its production, lack of consent in dissemination)
So this is what I was getting at - you say it's fine to censor a particular way of conveying an idea due to other harms associated with that particular way of conveying it. In child pornography it's the violation of consent in its production and the violation of privacy in its reproduction. Some would extend this argument from child pornography to regular pornography, arguing there are significant harms involved there too (e.g. that it conveys the idea that women ought to be subservient to men), and then to hate speech.
The core idea is that speech is not absolute, just as actions aren't absolute. You're free to swing your fist so long as it doesn't hurt anyone, and you're free to say things so long as they don't hurt anyone (or require anyone to be hurt, of course). This means that with a sufficiently convincing empirical dataset, we could outlaw regular pornography and hate speech.
If addressing a current harm leads to worse harm in the future, then yes it is an argument against addressing the current harm. I see no reductio here if that was the intent.