Date: 2010-12-04 08:21 pm (UTC)

Not all conflict arises from value system conflicts--simple lack of information could lead to conflict, even with completely aligned values--but it would seem a large percentage do come from value conflict. Establishing a fundamental ought would be, I think, a necessary step to really align global values (without resorting to atrocity).

Your point that not all conflict arises from value system conflicts is an important one. Even if people all had the same values, there would still be conflicts just as nasty... so trying to force everyone to have the same values would not solve the problem you're trying to solve. Admittedly it might reduce some conflicts, but I see the goal of forcibly "aligning global values" as itself unethical and imperialist.

As I mentioned in my post, I'd be willing to support some kind of minimal intervention in the cases where people have values that are so wrong by my standards that I feel I'm doing more good than harm in intervening. But I'd be much more careful with that than you or Sam Harris, and would never expect or want *everyone's* values to come into perfect alignment.

I hope this answers your previous question about there being some people who have really radical and dangerous values... in some cases I would be ok with using physical force to stop them from acting out their values.

As an argument against the view you're describing, consider this thought experiment. Let's build an AI that hates every single human alive and whose only goal is to torture and kill as many people as it can. That's what makes it happy and what gives its life meaning; for this AI, the only fundamental ought is to make people suffer. But let's make this AI extremely intelligent. Now, my question for you, assuming you think this is possible (not that I'm suggesting we actually do it!), is: do you think there is anything the AI is mistaken about? In other words, is there some fact about the world the AI is objectively wrong about, such that you could say "hold on, wait, Mr. AI... let me explain to you why it's actually wrong to kill humans. You just don't understand this fact; let me enlighten you!" Do you actually think you could convince it through some kind of logical argument? Do you think Sam Harris could walk up and do a science experiment, and then the AI would say "oh, you're right! I just didn't understand that killing people is wrong because it makes them suffer. My whole fundamental reason for existence is based on something that's just wrong. I will now kill myself"?

If I encountered such an AI, I would not try to reason with it; I would try to kill it. Plain and simple. If your view is correct, and there is something objective about your morality, then reasoning with it might be the right strategy. If my view is correct, then killing it is obviously the right strategy. I hope you wouldn't be busy trying to reason with it while I was busy killing it!