[personal profile] spoonless
via [livejournal.com profile] crasch,

In this highly anticipated new book, the bestselling author of The End of Faith and Letter to a Christian Nation calls for an end to religion’s monopoly on morality and human values.

"In this explosive new book, Sam Harris tears down the wall between scientific facts and human values, arguing that most people are simply mistaken about the relationship between morality and the rest of human knowledge. Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.” Because there are definite facts to be known about where we fall on this landscape, Harris foresees a time when science will no longer limit itself to merely describing what people do in the name of “morality”; in principle, science should be able to tell us what we ought to do to live the best lives possible." - The Free Press

"I was one of those who had unthinkingly bought into the hectoring myth that science can say nothing about morals. The Moral Landscape has changed all that for me." - Richard Dawkins

Very interesting! This caught my eye because of my recent debate with [livejournal.com profile] easwaran over whether science might ever be able to bridge the "is-ought" gap and give moral prescriptions:

http://spoonless.livejournal.com/180836.html?thread=1532772#t1532772

As I argue in the thread with [livejournal.com profile] easwaran, I do not think science will ever be able to say anything about fundamental values, and I do not believe there are objectively right or wrong answers to questions like "how many kittens' lives is one human life worth?" I've never believed that moral "truths" are the same kinds of truths that we talk about when we talk about facts about the world--rather, I think they are facts about our personal desires and whims, which are inherently subjective. But I have great respect for Richard Dawkins, and if he says this book (which just came out a month ago) has completely changed his mind on such an important issue, then I will surely give it a chance--perhaps it can change my mind too. Somehow I doubt it, but nevertheless I look forward to reading it! While I've never agreed with the idea of objective morality, I have always found the possibility positively tantalizing and have often thought "I'd like nothing more than for that to be true--I wish it were, but I know it couldn't possibly be."

Date: 2010-12-06 02:35 am (UTC)
From: [identity profile] spoonless.livejournal.com

In your morality scheme, the WTC towers were justifiably demolished, it just happened to be your value system on the losing end.

I guess I'm not really sure what you even mean when you say "justified". I mentioned that I don't see any need to justify actions, but maybe I should have said... I don't understand what it would mean to justify an action. It seems self-evident to me that you can't justify actions... I expect people to act on their beliefs and values. And it's unfortunate if many of those beliefs are wrong, or if many of those values seriously conflict with mine... but I'm not sure what else there is to say about it.

"Legitimacy" is a word that is used within a legal context, when there is some system of laws set up and agreed upon. Their actions were illegitimate in that they violated international law, however I think plenty of illegitimate actions have been a good thing (for example, the American Revolution). I don't know what legitimacy (or justification) would mean outside of a legal framework.

A more interesting and more relevant case would be if the AI was not hard-coded, but given the ability to develop its values, starting from some basic set that involved wanting to kill us. Here we could demonstrate how, through our history, we have developed from a more anarchic, brutal society into one with a tighter set of values that mutually supports each other in our pursuit of happiness. Thus, I think a super-intelligent AI could figure out a scheme where it can develop a value set that utilizes us as a continued source of mutual happiness, versus just killing us and potentially being killed itself or deprived of its other pleasures.

Interesting response. True, there are usually multiple fundamental values that people have. But I'm not sure why that would make such a huge difference. Let's take another example, since perhaps the AI example is somewhat artificial. Suppose a few years from now, a super advanced alien race suddenly arrives on Earth. Like us, they have a lot of different instincts and values and goals (unlike the AI). But the problem is, it happens that humans are really ideal tasty food for their species, and they're here to grab a snack en route to wherever they are going. They're so much more advanced than us that we have nothing to offer them in the way of technology or knowledge that they don't already have a million times over. Hence, our only real value to them is nutrition. They have plenty of other values, but that's really the only thing they can get out of interacting with humans. Or let's even say there are a few other things they might get out of it, like studying us as a species and our history... but the value they derive from getting a good snack on the way to wherever they're going is greater. It would still be pretty useless to try to talk them out of it, no?

Yes, these are extreme examples and the values of humans are all closer together than the values of a completely different intelligent species or an AI might be. But if you agree that an alien species could have radically different values (and not be objectively wrong about them) then it seems like you would have to agree that different humans can at least have slightly different values and not be wrong about them.

Date: 2010-12-06 04:53 am (UTC)
From: [identity profile] geheimnisnacht.livejournal.com
I guess I'm not really sure what you even mean when you say "justified".

I take justified to simply mean your action has been arrived at rationally, sourced by your "values" and not contradicting any "principles". Examples:

Me stabbing myself in the foot right now. Not justified, because I derive utility from my health (even though it doesn't necessarily go against any principles I have).

Me going out and stealing some fruit. I am hungry, but theft goes against my principles. Not justified.

Me going out and buying lunch. I am hungry, and this action doesn't go against any principles, so it is justified.

I expect people to act on their beliefs and values.

So those beliefs/values, to me, are the justification for the subsequent actions.
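
A minimal sketch of the check being described, purely for illustration (the Python names Agent, justified, serves, and harms are hypothetical stand-ins, not terms from this thread): an action counts as justified when it is sourced by at least one of the agent's values and conflicts with none of the agent's values or principles.

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        values: set = field(default_factory=set)      # things the agent derives utility from
        principles: set = field(default_factory=set)  # rules of thumb the agent won't violate

    def justified(agent: Agent, serves: set, harms: set) -> bool:
        """Justified iff the action is sourced by at least one of the agent's values
        and conflicts with none of the agent's values or principles."""
        return bool(serves & agent.values) and not (harms & (agent.values | agent.principles))

    me = Agent(values={"health", "satisfy hunger"}, principles={"no theft"})

    # Stabbing myself in the foot: serves nothing and conflicts with "health" -> not justified.
    print(justified(me, serves=set(), harms={"health"}))                 # False

    # Stealing fruit: serves "satisfy hunger" but conflicts with "no theft" -> not justified.
    print(justified(me, serves={"satisfy hunger"}, harms={"no theft"}))  # False

    # Buying lunch: serves "satisfy hunger" and conflicts with nothing -> justified.
    print(justified(me, serves={"satisfy hunger"}, harms=set()))         # True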

True, there are usually multiple fundamental values that people have. But I'm not sure why that would make such a huge difference

What? That makes the entire difference! One basic assumption of what I am proposing is that we are developing a system of principles of action (see earlier reply about the change to "principles" instead of "values") which would be followed by a group of sentient beings. If they are each given only one fundamental desire and it is in direct conflict with the others, of course there's nothing one can do.

Your new example hits at a different aspect, however. Let me change it to make it more sensible (it's quite a stretch to assume an alien race developed a taste for a species from a different planet, and that they couldn't just make something that tastes like us without needing to move between stars, and that there are no roughly equivalent dietary substitutes, etc.).

Instead, an advanced alien race arrives at Earth expecting to terraform it for their use, since their homeworld is now uninhabitable. They needed an oxygen-rich environment with water and found Earth was the closest such planet, but they need to crank the temperature up to about 85°C and introduce some other trace gases that would be toxic to us and all other life. They didn't realize there was an intelligent species on Earth. The point being, it is posed as their survival vs. ours.

This is where we get into the question of the "fundamental ought", and whether it naturally applies to all sentients or not. If we take it to apply only to ourselves, then we have to attempt to prevent the aliens from terraforming. If we take it to apply to all sentients, then assuming they are a more "fit" species, we nominally should yield to them and accept our fate, assuming there is no middle ground or other solution.

In the end, I don't think these examples illustrate a whole lot, because they depend on extremely unlikely scenarios, while I think the more likely scenarios are better handled with this system than our current one.

Date: 2010-12-06 07:35 pm (UTC)
From: [identity profile] spoonless.livejournal.com

Me stabbing myself in the foot right now. Not justified, because I derive utility from my health

Don't you think a word like "unintentional" or "undesirable" would make a lot more sense here than unjustified?

Me going out and stealing some fruit. I am hungry, but theft goes against my principles. Not justified.

This sounds like a more reasonable use of the word. But for me, the whole point of using any kind of principles is to have a rule of thumb, a strategy that works in many situations (but not all) so that you don't have to think as hard about what the optimal strategy is in every situation. So if I did something that went against a principle that I generally followed, it may or may not mean I'm doing something undesirable or bad, even in my own system of values. And even if it is bad in my system, it may be good in someone else's system. So I don't feel like using the word "justified" to describe an action that follows some principle that someone has really adds anything useful.

What? That makes the entire difference! One basic assumption of what I am proposing is that we are developing a system of principles of action (see earlier reply about the change to "principles" instead of "values") which would be followed by a group of sentient beings. If they are each given only one fundamental desire and it is in direct conflict with the others, of course there's nothing one can do.

This seems to contradict what you've said a number of times about wanting to find--in your words--"the fundamental ought", as if there can only ever be one fundamental ought. I gave you that example where there is only one fundamental ought for the AI, in part because it was simplest, and in part because it seemed to fit with your own ideas that there only needs to be one ought rather than many. Because I still can't quite tell which you believe: are you saying that there should be one fundamental principle from which all others are derived, or are you saying there should be a lot of different principles that are all fundamental? The AI would be an example where it had just one fundamental principle from which all others are derived, which is what I thought you wanted originally.

This is where we get into the question of the "fundamental ought", and whether it naturally applies to all sentients or not.

Again, you go back to saying there is only one fundamental ought. Notice you say "the fundamental ought" not "a fundamental ought" or "one of many fundamental oughts". So what is wrong with "kill all humans" as your fundamental ought? Why are you saying that can't count as a fundamental ought, but whatever you have in mind can?

I would say, first of all, that it's obvious there is more than one for most humans. And it's also obvious that the fundamental oughts are going to be different for every sentient creature. The clearest example of this would be the AI or the alien race, which has radically different fundamental principles that have very little or no overlap with ours. But if you agree in that case, then why would you disagree in the case where there are smaller differences between members of our species? Is there some magic point where suddenly, if everyone is in approximate enough agreement, you just pretend that ought is the same for everyone?

In the end, I don't think these examples illustrate a whole lot, because they depend on extremely unlikely scenarios

They illustrate that radically different moral principles can make sense to highly intelligent creatures with very different goals. The aliens and the AI seem unrealistic because their goals differ from ours far more than humans' goals differ from one another. But you seem to be blatantly denying the fact that humans do not have identical interests and goals, so there is still a lot of subjectivity in human morals, just not as much as with the drastic differences that can arise between the morals of different species.

Date: 2010-12-06 08:09 pm (UTC)
From: [identity profile] geheimnisnacht.livejournal.com
But for me, the whole point of using any kind of principles is to have a rule of thumb...

Exactly. And I have been using the word "justification" in both senses perhaps--both relative to the rules of thumb and in the exact sense. But I agree we would end up with some error between the system of principles (rules of thumb) and some exact moral calculus, so the justification would be inconsistent sometimes. I think this is just a detail, though; we still haven't agreed on how to arrive at the system in the first place.

I gave you that example where there is only one fundamental ought for the AI

The thing you gave me was not what I would refer to as a fundamental ought! What you stated is what I would call a desire, motivation, intrinsic drive of the AI. I mean, it could technically fill the role of a fundamental ought, but it would be trivially stupid.

How are you categorically confusing "wanting to eat/kill humans" with the type of example of Harris's that I mentioned, "create a thriving global community"? Your example is a proximate "urge", something an individual could be instinctively motivated to do. No one instinctively "creates a thriving global community", it is an assumed axiom that the holder hopes would yield a good system of principles, by which a society can live to their mutual betterment.

What I am referring to as "fundamental ought" is more like a way of measuring our success as sentients. We have the power to determine our actions apart from simply acting on our biological instincts.

Date: 2010-12-06 11:51 pm (UTC)
From: [identity profile] spoonless.livejournal.com

No one instinctively "creates a thriving global community", it is an assumed axiom that the holder hopes would yield a good system of principles

This is a circular statement. How do we know whether it's a good system of principles? According to you and Sam Harris, you do a science experiment and see how well it measures up against your arbitrary fundamental metric, "creating a thriving global community".

So, to summarize how you think a fundamental ought should be chosen: you pick one arbitrarily, then work out the principles that follow, and then "hope" that those principles will lead to a world that satisfies the fundamental ought. Obviously they will, because you chose them so they would! There's no science going on here, just pure circular reasoning.
Edited Date: 2010-12-06 11:53 pm (UTC)

Date: 2010-12-07 04:57 am (UTC)
From: [identity profile] geheimnisnacht.livejournal.com
It's not circular; it is just not rigorous in finding a "good choice". You are forgetting to include the fact that determining whether or not the system is good requires actually using the system. What you have incorrectly assumed is happening would indeed be circular, yes, but I did forget to put that step in my diagram.

I obviously don't have it figured out yet either. I have been considering whether there should be some kind of "self-consistent" loop which would feed back into the "global metric" (see email). Perhaps that makes more sense.

You are missing where the science comes in, which is in evaluating the fitness of the system of principles. Harris essentially said that it's no stretch to say that "honor-killing" is objectively wrong. However, in order to actually say "honor-killing" is wrong, we need some way to measure our values.
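
As a rough, purely speculative sketch of the kind of loop being gestured at here (the names refine, derive_principles, observe_outcomes, measure_wellbeing, and revise_metric are hypothetical placeholders, not Harris's terminology or anything from this thread), the point would be that the evaluation step is an independent empirical measurement that feeds back into the metric, rather than scoring the principles against the very metric that generated them:

    def refine(metric, derive_principles, observe_outcomes, measure_wellbeing,
               revise_metric, rounds=10):
        """Sketch of a 'self-consistent' loop: principles are worked out from the
        current global metric, a society living by them is observed (or modeled),
        its well-being is measured empirically, and that measurement feeds back
        into a revision of the metric itself."""
        principles = None
        for _ in range(rounds):
            principles = derive_principles(metric)   # work out the system of principles
            outcomes = observe_outcomes(principles)  # a society living by those principles
            score = measure_wellbeing(outcomes)      # the empirical ("scientific") step
            metric = revise_metric(metric, score)    # self-consistent feedback into the metric
        return metric, principles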

Date: 2010-12-07 12:01 am (UTC)
From: [identity profile] spoonless.livejournal.com

The thing you gave me was not what I would refer to as a fundamental ought! What you stated is what I would call a desire, motivation, intrinsic drive of the AI. I mean, it could technically fill the role of a fundamental ought, but it would be trivially stupid.

Your statement that "kill all humans" is a trivially stupid fundamental ought, while "create a thriving global community" is a great one, is very obviously motivated by an emotional response, not by reason or science.

What you mean to say is not that it's stupid, but that it is morally wrong, i.e., that you feel anger and outrage over it rather than the warm fuzzy feelings you have about "create a thriving global community". I have similar feelings about both of those statements, but it seems like I'm more aware that my emotions are what is making me feel that way. You've somehow convinced yourself through circular reasoning that you don't need any foundation for a fundamental ought and that it's somehow self-derived.
