Of course, you could just go my way and say that "likely" here is a Bayesian probability, and that to say that an observation makes something more likely for someone is to say that that person should become more confident in that thing.
I do think of "likely" as meaning a Bayesian probability, although I would not have used the words "should become more confident" there. Those who accept Bayes' assumptions do become more confident in whatever can be inferred through Bayes' theorem.
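To make that concrete, here is a toy update of my own (the numbers are made up purely for illustration): a hypothesis starts at probability 0.5, an observation is twice as likely if the hypothesis is true as if it is false, and Bayes' theorem forces the confidence up.

    # Toy Bayes update: confidence in H after observing E.
    # All numbers are hypothetical, chosen only to show the mechanics.
    prior_H = 0.5          # P(H)
    p_E_given_H = 0.8      # P(E | H)
    p_E_given_not_H = 0.4  # P(E | not H)

    p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
    posterior_H = p_E_given_H * prior_H / p_E   # Bayes' theorem

    print(posterior_H)  # 0.666..., up from the prior of 0.5

So "the observation makes H more likely for you" just is "your posterior for H is higher than your prior", given those assumptions.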
So the normative question really boils down to: should one be a Bayesian? I don't know if he still has it there, but for a long time gustavolacerda had his Religion on Facebook set to "Bayesian". I got a real kick out of that, and I do think there is sometimes something faith-based about it.
But I also think there is something objectively true about Bayesianism. Namely, if you ran an experiment and put a Bayesian side by side with an agent using some other epistemology, each in the same environment, the Bayesian would make more accurate predictions and end up with a more accurate model of the environment at the end of the day. I think this is testable and positively verifiable (of course you could never test all possible environments, but you can at least get a good idea of how well Bayesians do compared to other epistemologies across a wide variety of environments).
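Here is a rough sketch of the kind of test I mean (my own toy setup, with a made-up coin-flip environment and a made-up "stubborn" rival agent, not anyone's actual experiment): the Bayesian updates a Beta prior on the coin's bias, the rival never updates at all, and both are scored by average log loss on their predictions.

    import math
    import random

    random.seed(0)
    TRUE_P = 0.7     # hidden bias of the environment's "coin"
    N_STEPS = 1000

    # Bayesian agent: Beta(1, 1) prior over the bias, updated every step.
    alpha, beta = 1.0, 1.0
    # Non-Bayesian agent: sticks with its initial guess of 0.5 forever.
    stubborn_p = 0.5

    bayes_loss, stubborn_loss = 0.0, 0.0
    for _ in range(N_STEPS):
        # Each agent predicts P(next observation = 1) before seeing it.
        bayes_p = alpha / (alpha + beta)
        obs = 1 if random.random() < TRUE_P else 0

        # Log loss: lower means more accurate predictions.
        bayes_loss += -math.log(bayes_p if obs else 1 - bayes_p)
        stubborn_loss += -math.log(stubborn_p if obs else 1 - stubborn_p)

        # Only the Bayesian updates its model of the environment.
        alpha += obs
        beta += 1 - obs

    print("Bayesian avg log loss:    ", bayes_loss / N_STEPS)
    print("Non-updating avg log loss:", stubborn_loss / N_STEPS)
    # The Bayesian's loss approaches the entropy of the true coin (~0.61),
    # while the non-updater stays stuck at log(2) ~ 0.69.

Swap in whatever rival epistemology you like for the stubborn agent; the point is just that the comparison can be run and scored, environment by environment.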
But Bayesian epistemology is aimed at a specific goal--namely, being able to make accurate future predictions based on past observations. In principle, you could program an agent to not have this goal--it could have other goals. And perhaps for its goals, making predictions doesn't matter. Maybe its goal is simply to kill itself after running for 5 minutes, regardless of what its environment is. For that agent, that represents a good life, and is the ultimate good. Who is to say that the agent is "wrong"? That's what it was programmed to do.