A Million Save Two Utils-Ten Thousand Lose A Hundred
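Spelling out the scenario in the title, on the pure utilitarian assumption that utils simply add across people: 1,000,000 people gaining 2 utils each is 2,000,000 utils, while 10,000 people losing 100 utils each is 1,000,000 utils. The straight sum favours the first option by a million utils, even though each person harmed loses fifty times what each beneficiary gains.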
I think in such a circumstance pure utilitarianism breaks down. You shouldn't just multiply and thus conclude that the first option is better, not once the downside to one group is very high while the benefit to the other is very low. Once the downside to those harmed reaches a certain point, adding more people who will very slightly benefit should not swing the decision. The problem, of course, arises if the decision is arrived at via people's votes: the wrong side will win unless half of the people who stand to gain a little vote against their "interests". They often do do that, either because they evaluated the situation incorrectly, because they were lied to by experts who persuaded them that the other side was better for them, because they already had more than enough utils that it was no big deal to be magnanimous (e.g. celebrities), or because they simply wanted to be nice.
But often less than half vote this way, especially if the small group who are hurt are people they don't like. Yet another problem with "democracy". "One person, one vote" sounds nice. But is it really a good thing if a voter who very slightly prefers x completely negates a voter (or someone who can't vote) who desperately needs NOT x?
I'd only care so I could place my 2 votes against having them. I'd even wear a tie just this once if it was needed to please the chaps in charge.
I don't want more votes than others.
People who strongly disagree with me cannot simultaneously correctly think that the fervent anti-Trumpers were correct in their assessment of Trump voters or Trump himself. Will you at least admit that?
The latter may be true.
I think you seriously underestimate how the 80% will react to being considered second-class citizens.
Not most of the ones who missed out only because they didn't study the manual. (By the way, I actually agree with Luciom that exempting the highly educated is wrong. I just said that so they wouldn't cause too much trouble.)
People who strongly disagree with me cannot simultaneously correctly think that the fervent anti-Trumpers were correct in their assessment of Trump voters or Trump himself. Will you at least admit that?
What is the assessment you are referring to specifically? Pretty sure there have been many and varied assessments of both.
People who strongly disagree with me cannot simultaneously correctly think that the fervent anti-Trumpers were correct in their assessment of Trump voters or Trump himself. Will you at least admit that?
I do, but only because the fervent anti-Trumpers' (or any similarly fervent group's) assessment of voters is more bollocks than correct.
Not most of the ones who missed out only because they didn't study the manual. (By the way, I actually agree with Luciom that exempting the highly educated is wrong. I just said that so they wouldn't cause too much trouble.)
Your only hope is to convince a large enough group that they are in the top 20%. That's a reasonable hope for a while, but it doesn't bode well.
If you make it anyone who understands the manual well enough, then most likely the manual would eventually require a good knowledge of the Bible.
I have an idea. Every year, for those who wish to participate, voters take a test and make various predictions relating to politics. It would have open-ended questions as well as an essay section. Depending on the accuracy and difficulty of the predictions, along with the reasoning, they score points. Points carry over from year to year, and those with the highest scores get the most voting power.
By the way, I'm not serious about this, and even though it would be more difficult to calculate than intelligence, it does seem like a better way to determine one's ability to make informed decisions.
...or understand the issues and arrive at logical conclusions.
I'd like to propose a prom system where there are a series of votes over a period of many weeks, on dry uninteresting topics. Those who vote a lot get to vote in the big one.
I have an idea. Every year, for those who wish to participate, voters take a test and make various predictions relating to politics. It would have open-ended questions as well as an essay section. Depending on the accuracy and difficulty of the predictions, along with the reasoning, they score points. Points carry over from year to year, and those with the highest scores get the most voting power.
By the way, I'm not serious about this, and even though it would be more difficult to calculate than intelligence, it does seem like a better way to determine one's ability to make informed decisions.
Excellent, so we're in agreement on what we need to calculate and how to use it, just not how to calculate it.
Perhaps we could have some sort of reality TV mock election where voters are given information about various candidates and get to choose their "horse" in advance. The candidates are then given various tasks like "design a particle accelerator" or "start world war 3" or "play a round of golf without shitting your pants" to see how they do. Voters who chose badly are given 300 hours of community service and their IRL election votes go to the winners. That seems to be more in keeping with real elections in the modern era.
Just in case anyone is interested, this guy is probably the best known utilitarian. This is a long talk, but he discusses a lot of common objections starting at about the 29 minute mark.
Excellent, so we're in agreement on what we need to calculate and how to use it, just not how to calculate it.
No, I think everyone should have an equal vote. However, if I did think we should give more voting power to those who are more intelligent, I would integrate that with other qualities.
Perhaps we could have some sort of reality TV mock election where voters are given information about various candidates and get to choose their "horse" in advance. The candidates are then given various tasks like "design a particle accelerator" or "start world war 3" or "play a round of golf without shitting your pants" to see how they do. Voters who chose badly are given 300 hours of community service and their IRL election votes go to the winners. That seems to be more in keeping with real elections in the modern era.
Co-hosted by Monica Lewinsky and Rod Blagojevich.
What you describe is not a problem of utilitarianism, imho. Utilitarians will torture, maim, mutilate, and rape if it's for the "common good". If you even start thinking there might be occasions when that's bad, you are denying that utilitarianism can be a good moral framework.
Which usually happens around age 16, but for some people it never happens.
In utilitarianism, definitionally NOTHING except a decrease of aggregate utility is inherently bad, nothing at all, ever, no exceptions. Even imagining someth
I guess it's more interesting to talk about consequentialism writ large than utilitarianism if we're going to accept this as a real reductio rather than a strawman.
FWIW I think we can make sense of this in a different framing of consequentialism that allows for thresholds.
Or we have multiple principles that sometimes conflict. That, I would argue, is how morality actually works. It's naive to think there is always a moral answer.
I don't believe it's possible to have a serious set of moral principles that never has conflicts. Generally it's a good moral idea to try to avoid these conflict situations happening rather than try to resolve them.
So 'minimise harm' is good. 'Don't torture innocent people' is good. Sometimes these will conflict. The best moral solution is to avoid this happening as much as possible. When it happens there may be no good answer.
Or we have multiple principles that sometimes conflict. That, I would argue, is how morality actually works. It's naive to think there is always a moral answer.
I don't believe it's possible to have a serious set of moral principles that never has conflicts. Generally it's a good moral idea to try to avoid these conflict situations happening rather than try to resolve them.
So 'minimise harm' is good. 'Don't torture innocent people' is good. Sometimes these will conflict. The best moral solution is to avoid this happening as much as possible.
Intuitive morality has incoherences because biological drives can overlap, yes.
That's why you are supposed to study the topic calmly and at leisure, when nothing morally relevant is happening in your life, to untie the knots and come up with rules that do clarify priorities for you.
It's not naive to think you can define an answer as more moral than another basically every time (with the info at your disposal).
When it's actually legitimately very close, well then it's like when the EV of two choices in poker is very close: it doesn't matter much what you do.
If you don't clarify your morals, conflicts, even exceptional ones, will abound, especially in politics if not in your private life.
You can't deny a very relevant, omnipresent moral conflict, for example, when you decide allocations for public spending:
every time you allocate resources to something, you are declaring that allocation as morally superior to *all other possible allocations*.
To a lesser extent the same is true when you donate to charity.
You know how people avoid that? By not thinking about it.
People get very mad very quickly when you tell them they are choosing to kill the elderly and the sick, and that they are saying it is preferable to saving them, every time they prefer to allocate money to something that isn't Medicare (or the NHS) but that they think is important (like education or refugees or police or green energy and so on).
Lives vs lives tradeoffs are impossible to avoid in politics.
Other way round. Emotionally (biologically) we feel there must be a good answer to everything and yearn for a rule or set of rules that governs it.
Calm logic helps us realise that this can work very well, but that for any reasonable moral system there will always be points we cannot resolve. We can also have a set of moral meta-rules which are about avoiding bad moral situations, but there will also be bad meta spots. And so on.
Other way round. Emotionally (biologically) we feel there must be a good answer to everything and yearn for a rule or set of rules that governs it.
Calm logic helps us realise that this can work very well, but that for any reasonable moral system there will always be points we cannot resolve. We can also have a set of moral meta-rules which are about avoiding bad moral situations, but there will also be bad meta spots. And so on.
Or you have clear priorities (like your children are more important than your parents, your family is always more important than friends, who are more important than neighbours, who are more important than strangers you share a political unit with, who are more important than the rest of humanity) so that in tradeoff considerations everything is smooth.
Yes. The point is that even with clear priorities there will be problems you can't resolve.
Sophie's Choice being a simplistic example. There's not always a good answer. The meta good answer is to avoid such situations occurring.
Other way round. Emotionally (biologically) we feel there must be a good answer to everything and yearn for a rule or set of rules that governs it.
Calm logic helps us realise that this can work very well, but that for any reasonable moral system there will always be points we cannot resolve. We can also have a set of moral meta-rules which are about avoiding bad moral situations, but there will also be bad meta spots. And so on.
Singer discusses some of these points.
But I'm not sure this is biological so much as the legacy of monotheistic thought.
Greeks, Romans (pre-Christian), Indians, and more or less the Chinese were more into virtue ethics. Morality was more about developing traits to lead a fulfilling life.
Utilitarianism, deontology, etc. are more about turning to an external source to get the exact right answer to every question.
There is a Buddhist story where two monks, forbidden from touching women, see a woman drowning. One saves her. Two hours later, the other, who has been stewing, says "How could you violate our rules?" The first replies, "Oh, you're still thinking about that?"
I guess the stewing monk suggests some people do want absolute rules, but the other monk is right.
Utilitarianism, deontology, etc. are more about turning to an external source to get the exact right answer to every question.
There is a Buddhist story where two monks, forbidden from touching women, see a woman drowning. One saves her. Two hours later, the other, who has been stewing, says "How could you violate our rules?" The first replies, "Oh, you're still thinking about that?"
I guess the stewing monk suggests some people do want absolute rules, but the other monk is right.
There's a big difference between morality and sociopolitical ethics. A lot of people don't really understand that morality isn't a rational concept. Our deeply felt moral intuitions are just that. We sense them. They come from our emotional responses to subjective experiences, and they vary in both individuals and cultures. We can use guiding principles and rationality to set up a functioning society, but the foundation itself is nonrational, and there's no way of creating a flawless ethical system that won't contradict the morality of individuals or control every aspect of its citizens' lives.
Have you seen Martin Scorsese's Silence? It's a crushing movie based on the Shusaku Endo novel of the same name. I think you'd like it. It's not so clear which monk is right. Perhaps they both are.
The reason it isn't clear is that the moral rule in that story seems trivial, but if the stewing monk had a deep moral intuition, then his lack of action would be justified. Let's up the ante and use a real-world example. In that story, the monk who jumped into the water is analogous to the pilot of the Enola Gay, and the woman represents the Allies. The stewing monk chose not to drop the bomb.
The logic of principles conflicting is the same for virtue ethics as well. I'm far more into virtue ethics anyway.
I'd argue it would be best if everyone was (that's partly a joke).
I saw enough of this to come to the conclusion that the important part of this debate is the way these guys are analyzing things. If more people discussed things this way the world would be fine.
They can analyze things as much as they want, but if they have different logical starting points based on what they subjectively value, they'll never agree. Your point isn't lost on me, but it's not certain that more conversations like this would help things when it comes to the big picture. Theoretically, more debates like this could further cement both parties' positions and lead to more conflict. Besides, most people aren't capable of having discussions like this even if we could do better in that regard.
Or we have multiple principles that sometimes conflict. That, I would argue, is how morality actually works. It's naive to think there is always a moral answer.
I don't believe it's possible to have a serious set of moral principles that never has conflicts. Generally it's a good moral idea to try to avoid these conflict situations happening rather than try to resolve them.
So 'minimise harm' is good. 'Don't torture innocent people' is good. Sometimes these will conflict. The best moral solution is to avoid this happening as much as possible.
Sometimes moral principles we hold come into conflict, but that's why it's a good idea to have a foundational moral principle through which your second-order applications of this moral principle are analyzed. In the case of the OP, Sklansky is attempting to problematize utilitarian normative ethics.
The issue is that two utilitarians might have different ideas as to how to apply the general moral principle that we ought to maximize utility. That’s why in the literature there’s a general debate on whether to apply utility particularly through acts or universally through rules (even if they are fictive rules).
For instance, in the case of the OP we might say from a utilitarian perspective that although the tradeoff seems worth it from a pure numbers perspective, most actions don't occur within a perfect moral vacuum. Therefore it might be the case that the collective should want to build rules that protect the minority from egregious harm, in the case that this egregious harm could be visited upon the majority arbitrarily. Translated into plain English, we should give people certain rights when they are defendants, even though most of society might benefit from occasionally ignoring or trampling on their rights, because we fear a possible world in which our rights would likewise be trampled on. However, if we are in a dire enough case, such as a civil war or a world war, we might shift our priorities to the common good. The founding fathers figured as much, which is why there are contingencies for martial law or other times of great social upheaval or danger.
I don't think that this poses a problem for systematizing moral systems, because for every contradictory ethical principle we encounter, we benefit from having studied and prepared for these situations. The fact that they are difficult questions only presents all the more reason to study them.
It's a foundational set of principles, though. The attempt to make it a single one is clearly flawed. We understand it's wrong to kill an innocent person to use a few organs to save some others - those attempting to say it's good because of some founding principle of utilitarianism are clearly missing too much about morality.
The problem isn't a difference between utilitarians (although that's a very real problem). It's that it ignores something that is fundamentally part of human morality. It's one of the reasons that no one is a utilitarian. It's clearly not sufficient to cover morality.
Just for kicks I'll add chezlaw's hard dilemma theorem: there are no very hard moral dilemmas. The harder they seem, the more correct it is just to flip a coin, which is very easy.