Me stabbing myself in the foot right now. Not justified, because I derive utility from my health.
Don't you think a word like "unintentional" or "undesirable" would make a lot more sense here than "unjustified"?
Me going out and stealing some fruit. I am hungry, but theft goes against my principles. Not justified.
This sounds like a more reasonable use of the word. But for me, the whole point of using any kind of principle is to have a rule of thumb, a strategy that works in many situations (but not all), so that you don't have to think as hard about what the optimal strategy is in every situation. So if I did something that went against a principle I generally follow, it may or may not mean I'm doing something undesirable or bad, even within my own system of values. And even if it is bad in my system, it may be good in someone else's. So I don't feel that using the word "justified" to describe an action that follows some principle a person holds really adds much that is useful.
What? That makes the entire difference! One basic assumption of what I am proposing is that we are developing a system of principles of action (see my earlier reply about the change to "principles" instead of "values") which would be followed by a group of sentient beings. If they are each given only one fundamental desire, and it is in direct conflict with the others', then of course there's nothing one can do.
This seems to contradict the number of times you talked about wanting to find, in your words, "the fundamental ought", as if there can only ever be one fundamental ought. I gave you that example where there is only one fundamental ought for the AI, partly because it was simplest, and partly because it seemed to fit your own idea that there only needs to be one ought rather than many. Since I still can't quite tell which you believe: are you saying that there should be one fundamental principle from which all others are derived, or are you saying there should be many different principles that are all fundamental? The AI would be an example of having just one fundamental principle from which all others are derived, which is what I thought you wanted originally.
This is where we get into the question of the "fundamental ought", and whether it naturally applies to all sentients or not.
Again, you go back to saying there is only one fundamental ought. Notice that you say "the fundamental ought", not "a fundamental ought" or "one of many fundamental oughts". So what is wrong with "kill all humans" as your fundamental ought? Why are you saying that can't count as a fundamental ought, but whatever you have in mind can?
I would say, first of all, that it's obvious there is more than one for most humans. And it's also obvious that the fundamental oughts are going to be different for every sentient creature. The clearest example of this would be the AI or the alien race with radically different fundamental principles that have very little or no overlap with ours. But if you agree in that case, then why would you disagree in the case of lesser differences between members of our own species? Is there some magic point where suddenly, if everyone is in approximate enough agreement, you just pretend that "ought" is the same for everyone?
In the end, I don't think these examples illustrate a whole lot, because they depend on extremely unlikely scenarios.
They illustrate that radically different moral principles can make sense to highly intelligent creatures with very different goals. The aliens and the AI seem unrealistic only because their goals differ from ours far more than humans' goals differ from one another's. But you seem to be blatantly denying the fact that humans do not have identical interests and goals, so there is still a lot of subjectivity in human morals, just not as much as the drastic differences that can arise between the morals of different species.