> Utilitarianism leads to the bureaucratic tyranny Arendt discusses and deontology is just as hollow as belief in belief.
Can you sketch your reasoning behind these claims? And put forth the most common criticisms against virtue ethics, and defend your position against these criticisms? I don't agree with you, but you have not given me a sufficiently tight argument to convince me or take issue with.
Utilitarianism has two issues. The first is the utility monster, a classic criticism that boils down to "what if there were something that gains infinite utility from the suffering of others?" I think it's a rather weak criticism, but it is currently what the AI/crypto effective altruists have almost fully given in to: they argue that the utter destruction of humanity by an AI is a reason you should donate all your time and money to AI safety research. The more useful criticism is that utilitarianism is too simple to actually model reality and leads to an economic approach to ethics. This is what leads to the primacy of the bureaucratic state: it becomes a moral imperative to gather as much information as possible so the state can maximize utility. The best example of this is that Brave New World is a utilitarian utopia.
Deontology is answered by the parent article. Most deontological frameworks are defined as correct in and of themselves. If you genuinely believe in one, you're likely a moral individual by other definitions, so that's not necessarily a bad thing.
Virtue ethics is commonly criticized as not being normative enough (it doesn't explicitly tell you how to act; instead it tells you what a good person looks like) and as too complex to be actually useful (the definitions of the virtues aren't simple). I like Hursthouse's defense and formulation of it in her paper on abortion[0][1]. It boils down to the criticisms themselves being the reason it's a useful theory. I like these two points she makes:
"Second, the theory is not trivially circular; it does not specify right action in terms of the virtuous agent and then immediately specify the virtuous agent in terms of right action. Rather, it specifies her in terms of the virtues, and then specifies these, not merely as dispositions to right action, but as the character traits (which are dispositions to feel and react as well as act in certain ways) required for eudaimonia"
"Third, it does answer the question "What should I do?" as well as the question "What sort of person should I be?" (That is, it is not, as one of the catchphrases has it, concerned only with Being and not with Doing.)"
In short, the theory isn't trivial, unlike utilitarianism, which can easily be over-trivialized, as shown by today's AI effective altruists and by utopias like Brave New World; and it answers both what you should do and who you should be, unlike deontology, which only answers what you should do. I like this paper because it makes a genuinely unique case for the ethics of abortion. She argues that you can grant personhood to a fetus (or, more accurately, that personhood is irrelevant) and still ethically justify many abortions.
That is likely too nuanced for today's discourse, but that's the point. As we've moved to either utilitarianism or deontology as the driving motivation behind our moral actions, we've oversimplified the world.
Anyways, hope that's interesting and helpful, and provides some insight into my thinking.
Thanks a lot for outlining your thoughts on the matter! I'll have a look at the paper you have linked as well.
Regarding EA, I agree broadly with the basic ideas, but disagree with the focus on AI safety (in fact, I think that runaway AI is impossible due to fundamental limitations of computational complexity and of data ingestion required for accurate models).
I agree that utilitarianism itself is not very useful for any one entity in modeling reality. In practice, I use fallible intermediate principles that are more applicable to the situation at hand. When they seem inadequate, or when they seem to contradict other principles, I examine them against some hedonistic consequentialist calculus, heavily hedged by acknowledging a lack of good information and an inability to predict the future well. I understand that there is always a risk that I do not do good by this calculus, but I try to maximize its expectation under what I know at hand, understanding that this does not account for what I do not know.
> It becomes a moral imperative to gather as much information as possible to enable the state to maximize utility.
It seems to me that the unsavoriness of this thought arises from the prospect of the state developing a surveillance apparatus that does not serve its citizens as well as it should. I assume that what you call bureaucracy refers to surveillance (via forms, regulations, procedures, among other things). But I think we agree that we prefer to live in a society that has some surveillance and a monopoly on violence; the question is to what extent. So I don't think utilitarianism immediately asks the state to maximize surveillance of its citizens: that is usually conditional on the state having institutions robust enough that its well-intentioned surveillance today will not be misappropriated in the future, and on the surveillance apparatus still being net-pleasurable to run today, among other considerations.