Sam Harris's "The Moral Landscape"
Nov. 11th, 2010 08:50 pm
via crasch:
In this highly anticipated new book, the bestselling author of The End of Faith and Letter to a Christian Nation calls for an end to religion’s monopoly on morality and human values.
"In this explosive new book, Sam Harris tears down the wall between scientific facts and human values, arguing that most people are simply mistaken about the relationship between morality and the rest of human knowledge. Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a “moral landscape.” Because there are definite facts to be known about where we fall on this landscape, Harris foresees a time when science will no longer limit itself to merely describing what people do in the name of “morality”; in principle, science should be able to tell us what we ought to do to live the best lives possible." - The Free Press
"I was one of those who had unthinkingly bought into the hectoring myth that science can say nothing about morals. The Moral Landscape has changed all that for me." - Richard Dawkins
Very interesting! This caught my eye because of my recent debate with easwaran over whether science might ever be able to bridge the "is-ought" gap and give moral prescriptions:
http://spoonless.livejournal.com/180836.html?thread=1532772#t1532772
As I argue in the thread with easwaran, I do not think science will ever be able to say anything about fundamental values, and I do not believe there are objectively right or wrong answers to questions like "how many kittens' lives is one human life worth?" I've never believed that moral "truths" are the same kinds of truths that we talk about when we talk about facts about the world--rather, I think they are facts about our personal desires and whims, which are inherently subjective. But I have great respect for Richard Dawkins, and if he says this book (which just came out a month ago) has completely changed his mind on such an important issue, then I will surely give it a chance--perhaps it can change my mind too. Somehow I doubt it, but nevertheless I look forward to reading it! While I've never agreed with the idea of objective morality, I have always found the possibility positively tantalizing and have often thought "I'd like nothing more than for that to be true--I wish it were, but I know it couldn't possibly be."
no subject
Date: 2010-12-03 02:53 am (UTC)
Anyway, yeah, I would change "civilization" to "all humans" at this point. Why all humans and no more? Because that implicitly takes into account the rest, to the extent that we care about them. People instinctively like kittens, they are valuable to us, therefore their well-being is included in ours.
No, no, no... it does not take into account the kittens' interests, and that's a really important point.
That's like saying we don't need to worry about black people thriving because it's already taken into account by the fact that white people love their fellow man.
Nearly all of the important ethical disagreements that people have are based on how large a group to include in the pool of interests you're considering. Ethical individualists say each person should only care about themselves. Many conservatives tend to care only about their families and other rich people like themselves. Some people care only about civilized culture and they don't give a crap about uncivilized cultures. Some people care about humans, but don't give a crap about animals. The people who do care about animals all have different weights for how much they care; that's my whole point... there's no objective standard, because everyone has a different morality metric they use based on their own personal desires. A lot of people happen to have similar metrics, and those are the only times when we agree on things. It is not similar at all to the case in science, where when we agree it's because we're both on the right track. In ethics, when people agree it is only because they happen to have similar desires, not because they are both right or anything.
no subject
Date: 2010-12-03 03:24 am (UTC)
Put succinctly, what we are doing is determining a course of action for those beings able to determine their own course of action from first principles.
*As I am no expert on other primates, dolphins, etc., I admit it is possible for something to emerge there.
no subject
Date: 2010-12-03 02:49 pm (UTC)
Consider what we are talking about, what we are trying to establish: some fundamental axiom that solves the problem of the "purpose" of the universe.
Wow, I did not expect you to say that! Are you serious--you really think the universe has a purpose? In my understanding, this is something religious nuts believe and has nothing to do with reality.
That's why I don't like neoconservatives. Their argument is that, since the universe has no purpose, we need to invent powerful and inspiring myths that give people's lives meaning. But the problem with that--aside from the obvious fact that I don't think we should lie to people--is that their lives already have meaning, it's just a different meaning for each person. To pretend that every person's life has the same meaning, or that there is some overall meaning for life, is completely stupid.
So it sounds like you are trying to invent an axiom in order to create one of these neoconservative-style myths, to convince people that certain religious delusions that have long been discredited should be revived.
The idea that the universe has a purpose comes from the mistaken idea that the universe is a sentient being. Sentient beings can have goals, inanimate things cannot.
no subject
Date: 2010-12-03 07:44 pm (UTC)
I assume most of us default to a sort of subconscious, natural set of axioms, as well as ones we have that are philosophically or otherwise derived. You eat because you are hungry, and assume that as axiomatic. We don't kill each other, despite some urges to do so, because we have established it is wrong, i.e. "Don't kill each other" is one of our philosophical/religious axioms. And of course we are irrational a lot of the time, but we aren't trying to justify that, it is just "thermal noise".
But we can arrive at many different sets of these axioms, and different people clearly do so. As your newest entry talks about, we each end up with some set of values that can vary wildly between individuals. You seem to be fine with that, and just leave it as it is. But as anyone can tell you, values change based on your experiences, and with new information you need to re-evaluate your stance. How do we go about evaluating the information available and arriving at a value? Use our other values as metrics? This is circular and doesn't give you a way to establish your initial value(s)--however it seems to be how we go about it. In order to have a well-defined value structure, you need a starting point, and that is where this "fundamental ought" comes in.
Hopefully this clears up your other post as well, since you can reread my previous post in light of this recent clarification.
no subject
Date: 2010-12-03 08:30 pm (UTC)
I shouldn't have said it like that. I'm certainly not declaring any sort of real purpose for the universe derived from some outside force.
Ok, glad to hear it =)
It's perhaps better put as the "fundamental ought" from which all other oughts can be derived. Or more pedagogically, "how can I determine what I should do with my time?", and if you work backwards from that, you eventually need something(s) that serve as axioms.
I would say the answer to the question "how can I determine what I should do with my time" is simple... just listen to your heart. That's where the source of all values is, not in some externally imposed axiomatic system.
How do you justify any of the actions you take?
Why would you want to justify actions you take? I just take whatever actions I feel like taking, I don't see any need to justify them. It does seem like you have a more external view of where morality comes from, whereas I have an entirely internal view.
And in Sam Harris's case, he used the example of "creating a thriving global civilization" as the thing we should assume to be correct, our fundamental ought.
I've probably made this clear by now, but just to reiterate--I think there are many fundamental oughts, not one fundamental ought... and those oughts are different for different people, because each person has a different set of things they consider important. There just happens to be a lot of overlap for cases like "thriving of the global civilization" so it's to our mutual benefit to work together on it.
But as anyone can tell you, values change based on your experiences, and with new information you need to re-evaluate your stance. How do we go about evaluating the information available and arriving at a value?
This is one thing I ended up saying a little bit wrong in my post; I should have clarified this more. I said something about people only ever reaching a consensus on values if they start out close together. This sort of implied that values cannot change over time and are just fixed from birth. I do think they can change over time as you have new experiences, but I think your values are shaped a lot by genetics and by your personal history. And probably they can change a lot more early in life than later in life. But my main point is, you don't arrive at values through some kind of reasoning, your values are what motivates you, they're your emotions. They're not determined by logic, they're determined by emotions, which is why I said "listen to your heart". It has nothing to do with a system of axioms or logical deduction.
no subject
Date: 2010-12-04 06:40 pm (UTC)
First, I take what you say to be "heart" as, perhaps, moral instinct, or intuition.
But then we haven't gotten anywhere! We do that now, and thus have no way to rationalize stopping such apparent atrocities as honor-killing, because those are based on that society's unique brand of values. In their "heart", they believe that their religious doctrine is of great value, and thus act out whatever nonsense it contains. You are legitimizing those actions with your stance.
At the minimum, Sam Harris is describing how we can distinguish between such value systems already, given an assumption about a fundamental ought. I think he effectively shows your stance to be worse than his because of the inherent conflict yours will entail.
Consider all of the dilemmas that arise when values conflict, as they surely will if people assume values in the way you describe. The honor-killing dilemma (Fundamentalist Islamic values vs most other value systems) is only one such. The abortion debate is another public example. Not all conflict arises from value system conflicts--simple lack of information could lead to conflict, even with completely aligned values--but it would seem a large percentage do come from value conflict. Establishing a fundamental ought would be, I think, a necessary step to really align global values (without resorting to atrocity).
But my main point is, you don't arrive at values through some kind of reasoning, your values are what motivates you, they're your emotions.
I disagree. I feel that I have thought out most, if not all, of my values and made sure they are consistent. I am wondering if we are talking about the same things here. You say they are your "emotions", which I don't think have much to do, directly, with values. I would say we have "desires", which are directly influenced by our emotions, and this might be what you are thinking about. Of course each person has different desires, which will be affected by their genetics and experiences. Heterosexuality or homosexuality is a clean example. There are no values here, just desires. These would be what make you "feel like doing something", as in your "I just take whatever actions I feel like taking, I don't see any need to justify them." So when I asked about justification, I wasn't saying you should justify your desires, but justify your action as being ethical. If you felt like shoplifting, would you do it, and if so would it be justified in your view?
Also, earlier you listed things like "honesty" as a value, which I agree with, but that has nothing (directly) to do with emotions, or coming from the "heart". I still feel like lying sometimes, even though I have conditioned myself not to do so. My heart says "Lie!", while my reasoning center says, "wouldn't you rather tell the truth because it tends to lead to better outcomes in the future?"
And as an example, I think we can derive honesty as a value from a fundamental ought such as Harris's, as I would guess honest societies fare better than dishonest ones.
no subject
Date: 2010-12-04 07:58 pm (UTC)
First, I take what you say to be "heart" as, perhaps, moral instinct, or intuition.
Right, your emotions, your conscience, etc.
But then we haven't gotten anywhere!
That's because your stated goal is to justify your actions, and I don't see any need for that. So I don't mind that we haven't gotten anywhere, because I don't agree with where you want to get =)
In their "heart", they believe that their religious doctrine is of great value, and thus act out whatever nonsense it contains. You are legitimizing those actions with your stance.
I think there are two ways of looking at their behavior--one is that they are really getting their justifications from their religious doctrine rather than from their heart, and that's the problem. Thinking that morality is something external that is written down in some document, or made up by some system of axioms is what makes people do horrible things, in this way of looking at it. If instead they just listened to their intuition while they were about to throw rocks at someone, and thought "maybe this isn't such a good idea" perhaps they wouldn't act that way.
However, I'll admit that there is another way of looking at it, and perhaps more accurate: maybe they *are* listening to their intuitions and they're just using the book as a justification for them. Notice however that the book was written by someone trying to do exactly what you're doing, and codify someone's subjective intuitions into an external system of axioms/laws.
I think some combination of these two ways of looking at it is true. And actually, I would argue that anyone who listens to a book of axioms rather than their own conscience, whether that book is written by a religious person or a scientist, is objectively wrong in that they don't understand the source of human values. And incidentally, the Nazis got a lot of justification for what they did not from religious books but from racist "science" that was seen as mainstream at the time but is now seen more as pseudoscience. The communists actually went too far in rejecting it though, and labeled all genetics as "pseudoscience".
At any rate, forgetting about the parts of why they're acting that have to do with a misunderstanding of the facts, and that have to do with listening more to a book than to their own conscience, I think we can reduce your complaint here to basically a concern that there will always be some people in the world that have a radically different value system from the rest of us, so much so that what they're doing seems very dangerous and destructive to most of us. I'll get to that in a minute.
no subject
Date: 2010-12-05 05:06 pm (UTC)
I presume most people act as you say, and don't really rationalize many of their actions, just going "with their gut". However, this framework that I am talking about with fundamental oughts and deriving morality is not just for the individual. It also informs us about lawmaking, and this is where we are necessarily enforcing some common value system on people. So, I argue that you can't escape requiring justification for certain actions, ones where our current system deems it necessary to establish laws.
one is that they are really getting their justifications from their religious doctrine rather than from their heart
In a way, I think this is also "from the heart". That is because their faith in the religious doctrine comes from their heart. The morals themselves are externalized, but the problem still lies in "going with the heart". Here it is a contest, "inside the heart", between their instinctive morals and their faith in the religion.
Notice however that the book was written by someone trying to do exactly what you're doing, and codify someone's subjective intuitions into an external system of axioms/laws.
Don't presume to know the motivations of the writers of the religious texts. Furthermore, you have yet to really understand what I am trying to do here, in many aspects. One thing that I want to clear up: I'm not proposing any kind of "value system imperialism", any more than physicists are imposing "natural law imperialism". I am suggesting only a strictly academic pursuit. Secondly, I am not trying to codify anything that is subjective (in my mind). Of course, since you've declared all values subjective, it seems that way to you. We'll need to get into that later...
And actually, I would argue that anyone who listens to a book of axioms rather than their own conscience, whether that book is written by a religious person or a scientist, is objectively wrong in that they don't understand the source of human values.
You say "THE source of human values". Are you arguing that external sources don't even count now? You seemed to agree earlier that some people are deriving their values directly from a religious text. Besides that, you might be saying that we can objectively show that the PROPER source for values is subjective... that doesn't sound well-defined.
no subject
Date: 2010-12-06 03:29 am (UTC)
You say "THE source of human values". Are you arguing that external sources don't even count now? You seemed to agree earlier that some people are deriving their values directly from a religious text. Besides that, you might be saying that we can objectively show that the PROPER source for values is subjective... that doesn't sound well-defined.
Now this is a good point you make. It does make me stop and question whether I've been contradicting myself here.
First, note that I never said that I agreed people were deriving their values directly from a religious text. I said there were two ways of looking at it and that was the first way, and that the second way, which was "perhaps more accurate", is that they are deriving their values from their heart but using the book to try to justify their intuitions. Then I said I thought some combination of them was true. But if the first one is at all true, it does seem to contradict the other stuff I've been saying =)
For the most part, what I've been saying all along is that all people get their fundamental values from their emotions (or instincts). And so if they think they get their values from somewhere else, they are mistaken. However, I admit that it does seem like a lot of people get their values from elsewhere, for instance from the Bible. So maybe the issue here is just fundamental values versus derived values. This also brings in the question of to what extent your fundamental values can change over time, and if they can be influenced by different experiences... and if one of those experiences might be, say, reading the Bible. I'll have to think about this more.
no subject
Date: 2010-12-04 08:21 pm (UTC)
Not all conflict arises from value system conflicts--simple lack of information could lead to conflict, even with completely aligned values--but it would seem a large percentage do come from value conflict. Establishing a fundamental ought would be, I think, a necessary step to really align global values (without resorting to atrocity).
Your point that not all conflict arises from value system conflicts is an important one. Even if people all had the same values, there would still be conflicts in the world of just as nasty a nature... so trying to force everyone to all have the same values would not solve the problem you're trying to solve. Admittedly it might reduce some conflicts, but I see the goal of forcibly "aligning global values" itself as unethical and imperialist.
As I mentioned in my post, I'd be willing to support some kind of minimal intervention in the cases where people have values that are so wrong by my standards that I feel I'm doing more good than harm in intervening. But I'd be much more careful with that than you or Sam Harris, and would never expect or want *everyone's* values to come into perfect alignment.
I hope this answers your previous question about there being some people who have really radical and dangerous values... in some cases I would be ok with using physical force to stop them from acting out their values.
As an argument against the view that you're describing, consider this thought experiment. Let's build an AI that hates every single human alive and whose only goal is to torture and kill people as much as it can. That's what makes it happy and that's what gives life meaning for it; for this AI the only fundamental ought is to make people suffer. But, let's make this AI extremely intelligent. Now, my question for you, assuming you think this is possible to do (not that I'm suggesting we actually do it!) is: do you think there is anything the AI is mistaken about? In other words, is there some fact about the world the AI is objectively wrong about, such that you could say "hold on, wait Mr. AI... let me explain to you why it's actually wrong to kill humans. You just don't understand this fact, let me enlighten you!" Do you actually think you could convince it through some kind of logical argument? Do you think Sam Harris could walk up and do a science experiment, and then the AI would say "oh, you're right! I just didn't understand that killing people is wrong because it makes them suffer. My whole fundamental reason for existence is based on something that's just wrong. I will now kill myself."
If I encountered such an AI, I would not try to reason with it, I would try to kill it. Plain and simple. If your view is correct, and there is something objective about your morality, then reasoning with it might be the right strategy. If my view is correct, then killing it is obviously the right strategy. I hope you wouldn't be busy trying to reason with it while I would be killing it!
no subject
Date: 2010-12-05 05:41 pm (UTC)
The thing is, your view gives legitimacy to intervention the other way as well, things such as the terrorist attacks occurring around the world. Say group A have their set of values, which they claim to be "from the heart", so you notionally accept that. They view group B's values as horrible, and so A are just as justified in attacking B. A could be you and B the Taliban, or A could be the Taliban and you B. In your morality scheme, the WTC towers were justifiably demolished; it just happened to be your value system on the losing end.
**********
On the AI:
If the desire to kill humans is as hard-coded as is our necessity to eat, and it has no other sources of pleasure, then there is by definition no stable coexistence with humans. We would have to try and destroy it immediately. This is negligibly different than trying to convince a super-virus not to kill us all. If humans had only one sole desire, we would not require a value system! So here, the fact that the AI is super intelligent is wasted because it doesn't require a motivational decision, just a tactical one (HOW to kill us, not WHY).
A more interesting case would be if the AI did like killing us, but also liked other things as well (again, assuming all hard-coded). Then, we could argue with it saying it is favorable to be at peace and devote our mutual energies to maximizing its other pleasures, instead of facing possible destruction. This is more or less what we do with humans anyway. Those that might have some passing desire to kill face harsh consequences socially and criminally if they act on it.
A more interesting and more relevant case would be if the AI was not hard-coded, but given the ability to develop its values, starting from some basic set that involved wanting to kill us. Here we could demonstrate how, through our history, we have developed from a more anarchic, brutal society into one with a tighter set of values by which we mutually support each other in our pursuit of happiness. Thus, I think a super-intelligent AI could figure out a scheme where it can develop a value set that utilizes us as a continued source of mutual happiness, versus just killing us and potentially being killed itself or deprived of its other pleasures.
This last case is how I imagine humans are, minus the super-intelligence. We come in with some instinctive values, but are trained into a new set through our upbringing. To me, our modern-day values reflect more our external value systems than "blank-slate" human values.
no subject
Date: 2010-12-06 02:35 am (UTC)
In your morality scheme, the WTC towers were justifiably demolished; it just happened to be your value system on the losing end.
I guess I'm not really sure what you even mean when you say "justified". I mentioned that I don't see any need to justify actions, but maybe I should have said... I don't understand what it would mean to justify an action. It seems self evident to me that you can't justify actions... I expect people to act on their beliefs and values. And it's unfortunate if many of those beliefs are wrong, or if many of those values seriously conflict with mine... but I'm not sure what else there is to say about it.
"Legitimacy" is a word that is used within a legal context, when there is some system of laws set up and agreed upon. Their actions were illegitimate in that they violated international law, however I think plenty of illegitimate actions have been a good thing (for example, the American Revolution). I don't know what legitimacy (or justification) would mean outside of a legal framework.
A more interesting and more relevant case would be if the AI was not hard-coded, but given the ability to develop its values, starting from some basic set that involved wanting to kill us. Here we could demonstrate how, through our history, we have developed from a more anarchic, brutal society into one with a tighter set of values by which we mutually support each other in our pursuit of happiness. Thus, I think a super-intelligent AI could figure out a scheme where it can develop a value set that utilizes us as a continued source of mutual happiness, versus just killing us and potentially being killed itself or deprived of its other pleasures.
Interesting response. True, there are usually multiple fundamental values that people have. But I'm not sure why that would make such a huge difference. Let's take another example, since perhaps the AI example is somewhat artificial. Suppose a few years from now, a super advanced alien race suddenly arrives on Earth. Like us, they have a lot of different instincts and values and goals (unlike the AI). But the problem is, it happens that humans are really ideal tasty food for their species, and they're here to grab a snack, en route to wherever they are going. They're so much more advanced than us that we have nothing to offer them in the way of technology or knowledge that they don't already have a million times over. Hence, our only real value to them is nutrition. They have plenty of other values, but that's really the only thing they can get out of interacting with humans. Or let's even say perhaps there are a few other things they might get out of it, like studying us as a species and our history... but the value they derive from getting a good snack on the way to wherever they're going is greater. It would still be pretty useless to try to talk them out of it, no?
Yes, these are extreme examples and the values of humans are all closer together than the values of a completely different intelligent species or an AI might be. But if you agree that an alien species could have radically different values (and not be objectively wrong about them) then it seems like you would have to agree that different humans can at least have slightly different values and not be wrong about them.
no subject
Date: 2010-12-06 04:53 am (UTC)
I take justified to simply mean your action has been arrived at rationally, sourced by your "values" and not contradicting any "principles". Examples:
Me stabbing myself in the foot right now. Not justified, because I derive utility from my health (even though it doesn't necessarily go against any principles I have).
Me going out and stealing some fruit. I am hungry, but theft goes against my principles. Not justified.
Me going out and buying lunch. I am hungry, and this action doesn't go against any principles, so it is justified.
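If it helps, here is a minimal sketch of the rule I have in mind, treating "values" and "principles" as simple yes/no tests on actions. This is purely a toy illustration under those assumptions; the function and predicate names are hypothetical, not part of any real proposal.

```python
# Toy formalization of "justified" as described above (illustrative only):
# an action is justified iff at least one value motivates it
# and no principle forbids it.

def justified(action, values, principles):
    motivated = any(value(action) for value in values)              # some value gives a reason to act
    forbidden = any(principle(action) for principle in principles)  # some principle rules it out
    return motivated and not forbidden

# The three examples above, with hypothetical predicates:
values = [lambda a: a in ("buy lunch", "steal fruit")]  # "I am hungry" (either would feed me)
principles = [lambda a: a == "steal fruit"]             # "theft goes against my principles"

print(justified("stab my foot", values, principles))  # False: no value motivates it
print(justified("steal fruit", values, principles))   # False: a principle forbids it
print(justified("buy lunch", values, principles))     # True: motivated and not forbidden
```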
I expect people to act on their beliefs and values.
So those beliefs/values, to me, are the justification for the subsequent actions.
True, there are usually multiple fundamental values that people have. But I'm not sure why that would make such a huge difference
What? That makes the entire difference! One basic assumption to what I am proposing is that we are developing a system of principles of action (see earlier reply about the change to "principles" instead of "values") which would be followed by a group of sentient beings. If they are only each given one fundamental desire and it is in direct conflict with the others, of course there's nothing one can do.
Your new example hits at a different aspect, however. Let me change it, though, to make it more sensible (it's quite a stretch to assume an alien race developed a taste for a species from a different planet, and that they couldn't just make something that tastes like us without needing to move between stars, and that there are no roughly equivalent dietary substitutes, etc.).
Instead, an advanced alien race arrives at Earth expecting to terraform it for their use, since their homeworld is now uninhabitable. They needed an oxygen-rich environment with water, and found Earth was the closest such, but they need to crank the temperature up to about 85C and introduce some other trace gases that would be toxic to us and all other life. They didn't realize there was an intelligent species on Earth. The point being, it is posed as their survival vs ours.
This is where we get into the question of the "fundamental ought", and whether it naturally applies to all sentients or not. If we take it to apply only to ourselves, then we have to attempt to prevent the aliens from their terraforming. If we take it to apply to all sentients, then assuming they are a more "fit" species, we nominally should yield to them and accept our fate, assuming there is no middle ground or other solution.
In the end, I don't think these examples illustrate a whole lot, because they depend on extremely unlikely scenarios, while I think the more likely scenarios are better handled with this system than our current one.
no subject
Date: 2010-12-06 07:35 pm (UTC)
Me stabbing myself in the foot right now. Not justified, because I derive utility from my health
Don't you think a word like "unintentional" or "undesirable" would make a lot more sense here than unjustified?
Me going out and stealing some fruit. I am hungry, but theft goes against my principles. Not justified.
This sounds like a more reasonable use of the word. But for me, the whole point of using any kind of principles is to have a rule of thumb, a strategy that works in many situations (but not all) so that you don't have to think as hard about what the optimal strategy is in every situation. So if I did something that went against a principle that I generally followed, it may or may not mean I'm doing something undesirable or bad, even in my own system of values. And even if it is bad in my system, it may be good in someone else's system. So I don't feel like using the word "justified" to describe an action that follows some principle that someone has really adds much useful.
What? That makes the entire difference! One basic assumption to what I am proposing is that we are developing a system of principles of action (see earlier reply about the change to "principles" instead of "values") which would be followed by a group of sentient beings. If they are only each given one fundamental desire and it is in direct conflict with the others, of course there's nothing one can do.
This seems to contradict the many times that you talked about wanting to find--in your words--"the fundamental ought", as if there can only ever be one fundamental ought. I gave you that example where there is only one fundamental ought for the AI, in part because it was simplest, and in part because it seemed to fit with your own idea that there only needs to be one ought rather than many. So because I still can't quite tell which you believe, are you saying that there should be one fundamental principle from which all others are derived, or are you saying there should be a lot of different principles that are all fundamental? The AI would be an example where it had just one fundamental principle from which all others are derived, which is what I thought you wanted originally.
This is where we get into the question of the "fundamental ought", and whether it naturally applies to all sentients or not.
Again, you go back to saying there is only one fundamental ought. Notice you say "the fundamental ought" not "a fundamental ought" or "one of many fundamental oughts". So what is wrong with "kill all humans" as your fundamental ought? Why are you saying that can't count as a fundamental ought, but whatever you have in mind can?
I would say, first of all, that it's obvious there is more than one for most humans. And it's also obvious that the fundamental oughts are going to be different for every sentient creature. The most clear example of this would be the AI or the alien race which has radically different fundamental principles, that have very little or no overlap with ours. But if you agree in that case, then why would you disagree in the case when you have lesser differences between members of our species? Is there some magic point where suddenly, if everyone is in approximate enough agreement, you just pretend that ought is the same for everyone?
In the end, I don't think these examples illustrate a whole lot, because they depend on extremely unlikely scenarios
They illustrate that radically different moral principles can make sense to highly intelligent creatures with very different goals. The aliens and the AI seem unrealistic because their goals differ from ours far more than human goals differ from one another. But you seem to be blatantly denying the fact that humans do not have identical interests and goals, so there is still a lot of subjectivity in human morals, just not as much as the drastic differences that can arise between the morals of different species.
no subject
Date: 2010-12-06 08:09 pm (UTC)
Exactly. And I have been using the word "justification" in both senses perhaps, both relative to the rules of thumb and in the exact sense. But I agree we would end up with some error between the system of principles (rules of thumb) and some exact moral calculus, so the justification would be inconsistent sometimes. I think this is just a detail, though; we still haven't agreed on how to arrive at the system in the first place.
I gave you that example where there is only one fundamental ought for the AI
The thing you gave me was not what I would refer to as a fundamental ought! What you stated is what I would call a desire, motivation, intrinsic drive of the AI. I mean, it could technically fill the role of a fundamental ought, but it would be trivially stupid.
How are you categorically confusing "wanting to eat/kill humans" with the type of example I mentioned of Harris's, "create a thriving global community"? Your example is a proximate "urge", something an individual could be instinctively motivated to do. No one instinctively "creates a thriving global community"; it is an assumed axiom that the holder hopes would yield a good system of principles, by which a society can live to their mutual betterment.
What I am referring to as "fundamental ought" is more like a way of measuring our success as sentients. We have the power to determine our actions apart from simply acting on our biological instincts.
no subject
Date: 2010-12-06 11:51 pm (UTC)
No one instinctively "creates a thriving global community"; it is an assumed axiom that the holder hopes would yield a good system of principles
This is a circular statement. How do we know whether it's a good system of principles? According to you and Sam Harris, you do a science experiment and see how well it measures up according to your fundamental arbitrary metric, "creating a thriving global community".
So, summarizing how you think a fundamental ought should be chosen is that you pick one arbitrarily, then work out the principles that follow, and then "hope" that those principles will lead to a world that satisfies the fundamental ought. Obviously it will satisfy the fundamental ought, because you chose them so they would! There's no science going on here, just pure circular reasoning.
no subject
Date: 2010-12-07 04:57 am (UTC)
I obviously don't have it figured out yet either. I have been considering whether there should be some kind of "self-consistent" loop which would feed back into the "global metric" (see email). Perhaps that makes more sense.
You are missing where the science comes in, which is in evaluating the fitness of the system of principles. Harris essentially said that it's no stretch to say that "honor-killing" is objectively wrong. However, in order to actually say "honor-killing" is wrong, we need some way to measure our values.
no subject
Date: 2010-12-07 12:01 am (UTC)
The thing you gave me was not what I would refer to as a fundamental ought! What you stated is what I would call a desire, motivation, intrinsic drive of the AI. I mean, it could technically fill the role of a fundamental ought, but it would be trivially stupid.
Your statement that "kill all humans" is a trivially stupid fundamental ought, while "create a thriving global community" is a great one, is very obviously motivated by an emotional response, not by reason or science.
What you mean to say is not that it's stupid, but that it is morally wrong, i.e., that you feel anger and outrage over it, rather than warm fuzzy feelings like you feel about "create a thriving global community". I have similar feelings about both of those statements, but it seems like I'm more aware that my emotions are what is making me feel that way. You've somehow convinced yourself through circular reasoning that you don't need any foundation for a fundamental ought and that it's somehow self-derived.
no subject
Date: 2010-12-04 08:36 pm (UTC)
You say they are your "emotions", which I don't think have much to do, directly, with values. I would say we have "desires", which are directly influenced by our emotions, and this might be what you are thinking about.
I find it strange that you don't consider desire an emotion. I seem to remember us having this conversation a while back, and I was equally surprised then. I've always thought of it as an emotion, but I guess if you want to call it something else that's fine... I'm no expert on this subject so I don't know whether it is officially classified as an emotion. At any rate, it's not the only emotion involved in value. Other emotions like empathy and guilt are also heavily involved.
Heterosexuality or homosexuality is a clean example. There are no values here, just desires.
I think there are values involved in this if you look at it the right way. Indeed, the issue of gay marriage is often considered a "values" issue in politics. Incidentally, I do not consider abortion a values issue, since it is really about whether people believe the soul exists, and if it does, at what time it enters the body. I think both pro-lifers and pro-choicers care about preserving life and about freedom of choice; they differ in their understanding of the world. But I do think gay rights is a lot more of an actual values issue. First, the basic difference between a heterosexual and a homosexual is that a homosexual values long-term relationships with a same-sex partner more than with an opposite-sex partner, whereas a heterosexual places a very different value on it. Second, many heterosexuals claim to be motivated by "traditional family values" or "preserving marriage" when they try to pass anti-gay-marriage bills. So it does to some extent come from a conflict of values, although I also think there is an issue of not understanding the facts on the part of the anti-gay-marriage crowd... in that many of them believe God commanded that a man not lie with another man, and therefore they think everyone should obey.