Date: 2010-12-06 02:35 am (UTC)

In your morality scheme, the WTC towers were justifiably demolished; it just happened to be your value system on the losing end.

I guess I'm not really sure what you even mean when you say "justified". I mentioned that I don't see any need to justify actions, but maybe I should have said... I don't understand what it would mean to justify an action. It seems self-evident to me that you can't justify actions... I expect people to act on their beliefs and values. And it's unfortunate if many of those beliefs are wrong, or if many of those values seriously conflict with mine... but I'm not sure what else there is to say about it.

"Legitimacy" is a word that is used within a legal context, when there is some system of laws set up and agreed upon. Their actions were illegitimate in that they violated international law, however I think plenty of illegitimate actions have been a good thing (for example, the American Revolution). I don't know what legitimacy (or justification) would mean outside of a legal framework.

A more interesting and more relevant case would be if the AI were not hard-coded, but given the ability to develop its values, starting from some basic set that involved wanting to kill us. Here we could demonstrate how, through our history, we have developed from a more anarchic, brutal society into one with a tighter set of values that mutually support each other in our pursuit of happiness. Thus, I think a super-intelligent AI could figure out a scheme where it can develop a value set that utilizes us as a continued source of mutual happiness, versus just killing us and potentially being killed itself or deprived of its other pleasures.

Interesting response. True, there are usually multiple fundamental values that people have. But I'm not sure why that would make such a huge difference. Let's take another example, since perhaps the AI example is somewhat artificial. Suppose a few years from now, a super-advanced alien race suddenly arrives on Earth. Like us, they have a lot of different instincts and values and goals (unlike the AI). But the problem is, it happens that humans are really ideal tasty food for their species, and they're here to grab a snack, en route to wherever they are going. They're so much more advanced than us that we have nothing to offer them in the way of technology or knowledge that they don't already have a million times over. Hence, our only real value to them is nutrition. They have plenty of other values, but that's really the only thing they can get out of interacting with humans. Or let's even say there are a few other things they might get out of it, like studying us as a species and our history... but the value they derive from getting a good snack on the way to wherever they're going is greater. It would still be pretty useless to try to talk them out of it, no?

Yes, these are extreme examples, and human values are all much closer together than the values of a completely different intelligent species or an AI would be. But if you agree that an alien species could have radically different values (and not be objectively wrong about them), then it seems like you would have to agree that different humans can at least have slightly different values and not be wrong about them.