Aug. 23rd, 2015

I've continued to think about a lot of interconnected topics in physics this past month. Lots of explaining and debating, on a mailing list I joined recently that's filled mostly with biologists and systems theorists (with one astrophysicist and one philosopher of science who each became pretty central to the discussion), about how quantum mechanics works and how the reversible microphysics of quantum mechanics gives rise to the irreversible macrophysics of thermodynamics. I was disappointed by how little they understood about the subject. There was really only one person on the list who fully understood how thermodynamics works, a biophysicist. The rest of them had lots of half-baked ideas and notions about it which took us in all different directions. But having to explain from first principles how all of this works, and to correct their various mistakes one at a time, was great practice for me in making sure that I understood it fully myself. It's the most I've thought about the Arrow of Time in a while. While we do fully understand most of the things they were confused about (things that were considered deeply mysterious in the 1800s and gradually became better understood later), I admitted that there are two things remaining about the Arrow of Time which we really don't yet understand (or at least, I personally don't yet understand). One is why the universe started in such a special initial state. The other is what specific properties of the human brain cause us to remember the past rather than the future. I think we know the broad outlines of the second one (erasure of information is connected to learning and memory), but a lot of the details seem fuzzy to me, and it's something I want to make a mental note to read, think, and write about more at some point.

Meanwhile I've discovered a few more goodies, such as this video of Sidney Coleman's "Quantum Mechanics in Your Face" lecture:

Quantum Mechanics In Your Face

I had always heard about this famous lecture on quantum mechanics given by Sidney Coleman, but never watched it myself. I knew most everything in it, but was both surprised and pleased by the way he presents it. I was especially interested to hear him say at the end of the lecture that the view of quantum mechanics he considers most correct follows the spirit of Hugh Everett. By invoking Everett's name rather than Bohr's, he seems to be aligning pretty strongly with the Many Worlds Interpretation, although in all fairness he does say that "many people have taken Everett's ideas and run in different directions with them," which possibly implies that he thinks people like Bryce DeWitt (who coined the term "many worlds") or David Deutsch distorted Everett's original ideas.

The list of high-profile physicists I've seen now willing to come out in favor of Everett is pretty impressive. So let's see, we've got at least... Leonard Susskind, Raphael Bousso, Sean Carroll, Sidney Coleman, Max Tegmark, David Deutsch. They've all made comments that, to me, put them more in Everett's camp than Bohr's. And yet interestingly, Lubos Motl and Tom Banks both consistently identify with the Copenhagen camp and invoke Bohr's name over Everett's when asked to explain quantum mechanics. But today I happened to run across Lubos linking to this very Sidney Coleman lecture, saying it was a great explanation of quantum mechanics. So if Coleman invokes Everett and Motl invokes Bohr, but they both think they're on the same page... there must be a lot less difference between them than people realize.

Last week I read a really fascinating paper by Sean Carroll, where he presents a new derivation of the Born rule (something that's considered necessary for Many Worlds to make sense, but not for Copenhagen):

Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics

This is, no question, my favorite derivation so far. I had always had the strange combination of feelings that the derivation ought to be obvious, and that surely it wasn't as difficult or obscure as Deutsch, Zurek, and others made it sound. Carroll's derivation agrees much more with my intuition that the Born rule is epistemic (not something that requires decision theory or a notion of value to derive), and is implied by some pretty basic, obvious assumptions (specifically, the fact that changing the environment shouldn't change where you think you're located in a system, a principle he calls ESP, the Epistemic Separability Principle).
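To give a flavor of how the derivation works (this is my own paraphrase of the Carroll-Sebens argument, so don't take the details as gospel), consider a post-measurement state like

sqrt(2/3) |up>|E_up> + sqrt(1/3) |down>|E_down>

A unitary acting only on the environment can split the first branch into two equal-amplitude pieces:

sqrt(1/3) |up>|E_1> + sqrt(1/3) |up>|E_2> + sqrt(1/3) |down>|E_3>

ESP says that messing with the environment alone can't change my credence about which branch I'm on, so the three equal-amplitude branches each get credence 1/3, and adding the two spin-up branches back together gives 2/3 for spin-up: exactly what the Born rule says.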

This fit in very nicely with the ongoing conversation I've been having with a coworker about the Everett-interpretation-based movie we've been talking about making for the past year or so (no idea if that will ever happen, but every week he says we'll probably start filming within the next week). It spawned a whole side discussion on observer selection effects, the anthropic principle, quantum suicide experiments, and all kinds of related stuff.

The main thing to come out of it was a new thought experiment my coworker devised, which, I must say, is totally brilliant. He was convinced that it undermined my view of quantum suicide experiments, and meant that you can't reason about them the way most people who believe in the anthropic principle (like myself) think you should reason about them. It did manage to confuse me a lot, and it took a couple of days of thinking before I eventually figured out how to make sense of it within my usual framework. But I did eventually feel like I resolved the paradox. I don't think he *quite* accepted my resolution though, and even I would admit that, despite feeling confident about how it's supposed to work, there is still something about it that seems pretty surprising, spooky, or counter-intuitive.

I mentioned that possibly, if nobody has proposed this particular thought experiment before, we ought to write an academic paper on it. Tentatively, I'm calling it the "coin operated quantum suicide booth". I can't think of a shorter name, but that basically describes exactly what goes on in it.

I think I'll get to describing the actual thought experiment in part 6, but for now I just want to note the relevance to this whole instrumentalism/realism series of posts I'm writing. What it boils down to, I think, is that Copenhagen and Many Worlds have both evolved over time (especially Copenhagen), and today they stand incredibly close to each other, such that it's sometimes hard to tell them apart. As I've mentioned many times, the only real difference is that Many Worlders consider the wave function "real" while Copenhagenists do not. But I think there is a specific reason Copenhagenists take the point of view they do rather than fully accepting Many Worlds. It's because they are cautious about making metaphysical commitments, yes, but they are cautious in one particular way. It turns out that if you really take Many Worlds seriously, then you have to use a lot of reasoning that's very deeply connected to what's known as the "anthropic principle". Nick Bostrom, one of my personal heroes, whom I was delighted to meet and talk with briefly at the Stanford Singularity Summit years ago, is a philosopher at Oxford who wrote the influential book Anthropic Bias: Observer Selection Effects in Science and Philosophy. (Yes, he owns the domain anthropic-principle.com, and yes, he's also known for the Simulation Argument, his reviews and analysis of the Doomsday Argument, his other popular book Superintelligence, and for the organization he co-founded as the World Transhumanist Association, now known as Humanity+.) I read most of Anthropic Bias a long time ago (around 2002 I think? Maybe earlier?), and have since always thought about any kind of observer selection effects in the way he suggests (through what he calls the Self-Sampling Assumption).

Anyway, in physics there tends to be a big divide between people who take anthropic reasoning seriously and those who see it as specious. I've always been one to take it very seriously. And what I've found is that the people who don't like the anthropic principle tend to be the Copenhagenists, while those who take it seriously tend to be the Many Worlders. Why? Because if you don't view the other branches of the wave function as merely mathematical (the way the Copenhagenists do), then you have to think about the splitting of one observer into many. And reasoning about such splitting necessarily involves a lot of observer selection effects, otherwise known as "anthropic bias".

I'll explain the actual thought experiment we came up with in the next part.
I realized after writing part 5 that by continuing on to the anthropic principle and observer selection effects, I've skipped over a different issue I planned to write more about, which was how statistical mechanics and quantum mechanics are actually the same thing. I think I actually covered most of what I'd wanted to cover in part 4, but then forgot to finish the rest in part 5. However, thinking more about it has led to lots more thoughts which make all of this more complicated, and which might change my perspective somewhat from what I said earlier in this series. So let me just briefly note some of the things I was going to talk about there, and what complications have arisen. Later, we'll get to the quantum suicide booth stuff.

The first time I used Feynman diagrams in a physics class, believe it or not, was not in Quantum Field Theory, where they are used most frequently, but in graduate Statistical Mechanics, which I took the year before. We weren't doing anything quantum, just regular classical statistical mechanics. But we used Feynman diagrams for it! How is this possible? Because the path integral formulation of quantum mechanics looks nearly identical mathematically to the way in which classical statistical mechanics is done. In both cases, you have to integrate an exponential function over a set of possible states to obtain an expression called the "partition function". Then you take derivatives of that to find correlation functions, expectation values of random variables (known as "operators" in quantum mechanics), and to compute the probability of transitions between initial and final states. This might even be the same reason why the Schrodinger Equation is sometimes used by Wall Street quants to predict the stock market, although I'm not sure about that.
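To spell out the derivative trick a little (schematically, in my own notation rather than anything from a specific textbook): if you couple a source J to the quantity O you care about, then

Z[J] = \sum_n e^{-E_n/kT + J*O_n}

<O> = d(ln Z)/dJ evaluated at J=0, and <O^2> - <O>^2 = d^2(ln Z)/dJ^2 at J=0,

with higher derivatives giving higher correlation functions. The path integral version is the same game, just with e^{iS} and a source term added to the action.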

One difference between the two approaches is what function gets integrated. In classical statistical mechanics, it's the Boltzmann factor for each energy state, e^{-E/kT}. You sum this over all accessible states to get the partition function. In Feynman's path integral formalism for quantum mechanics, you usually integrate e^{iS}, where S is the action (the Lagrangian for a specific path, integrated over time), over all possible paths connecting an initial and final state. Another difference is what you get out. Instead of the partition function, in quantum mechanics you get out a probability amplitude, whose magnitude then has to be squared to be interpreted as a transition probability.
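Putting the two side by side in symbols (with hbar = 1, and being sloppy about normalization):

Z = \sum_n e^{-E_n/kT}    (classical stat mech partition function)

<x_f| e^{-iHt} |x_i> = \int D[x(t)] e^{iS[x]}    (quantum transition amplitude, summed over all paths from x_i to x_f)

Structurally it's the same move both times: add up an exponential weight over every configuration the system could be in.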

I was going to write about how these are very close to the same thing, but as I read more in anticipation of writing this, I got more confused about how they fit together. In the path integral for quantum mechanics, you can split it up into a series of tiny time intervals, integrating over each one separately, and then take the limit as the size of these time intervals approaches zero. When you look at one link in the chain, you find that you can split the factor e^{iS} into a product of 2 factors. One is e^{ip\Delta x}, which performs a Fourier transform, and the other is e^{-iHt}, which tells you how to time-evolve an energy eigenstate in quantum mechanics into the future. The latter factor can be viewed as the equivalent of the Schrodinger Equation, and this is how Schrodinger's Equation is derived from Feynman's path integral. (There's a slight part of this I don't quite understand, which is why energy eigenstates and momentum eigenstates seem to be conflated here. The Fourier transform converts the initial and final states from position into momentum eigenstates, but in order to use the e^{-iHt} factor it would seem you need an energy eigenstate. These are the same for a "free" particle, but not if there is some potential energy source affecting the particle! But let's not worry about that now.) So after this conversion is done, it looks even more like statistical mechanics, because instead of summing over the exponential of the Lagrangian, we're summing over the exponential of the Hamiltonian, whose eigenvalues are the energies being summed over in the stat mech approach.

However, there are still 2 key differences. First, there's the factor of "i": e^{-iEt} has an imaginary exponent, while e^{-E/kT} has a negative exponent. This makes a pretty big difference, although sometimes that difference is made to disappear by using the "imaginary time" formalism, where you replace t with it (this is also known as "analytic continuation to Euclidean time"). There's a whole mystery about where the i in quantum mechanics comes from, and this seems to be the initial source: it's right there in the path integral, where it's missing in regular classical statistical mechanics. It's what causes interference between paths, which you otherwise wouldn't get. The second remaining difference is that you have a t instead of 1/kT (time instead of inverse temperature).

I've never studied the subject known as Quantum Field Theory at Finite Temperature in depth, but I've been passed along some words of wisdom from it, including the insight that if you want to analyze a system of quantum fields at finite temperature, you can do so with almost the same techniques you use for zero temperature, so long as you pretend that time is a periodic variable that loops around every 1/kT seconds (in natural units), instead of continuing infinitely into the past and the future. This is very weird, and I'm not sure it has any physical interpretation; it may just be a mathematical trick. But nevertheless, it's something I want to think about more and understand better.
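For the record, here's the imaginary-time statement in symbols (standard textbook material, though don't trust my sign and factor conventions too far): substituting t -> -i*tau turns e^{iS} into e^{-S_E}, where S_E is the Euclidean action, so the path integral becomes \int D[x(tau)] e^{-S_E[x]}, which looks exactly like a Boltzmann weight. The finite-temperature statement is then

Z = Tr e^{-H/kT} = \int D[x(tau)] e^{-S_E[x]},

where the integral runs over paths that are periodic in tau with period 1/kT (again with hbar = 1). So the quantum partition function at temperature T literally *is* a Euclidean path integral on a time circle of circumference 1/kT.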

Another thing I'd like to think about more, in order to understand the connection here, is what happens when you completely discretize the path integral. That is, what if we pretend there's no such thing as continuous space, and just consider a quantum universe consisting solely of a finite number of qubits? Is there a path integral formulation of this universe? There's no relativity here, or any notion of space or spacetime. But as with any version of quantum mechanics, there is still a notion of time, so it should be possible. And the path integral usually used (due to Dirac and Feynman) should be the continuum limit of this. I feel like I would understand quantum mechanics a lot more if I knew what the discrete version looked like.
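Here's a minimal sketch of the kind of thing I have in mind, for a single qubit (just my own toy construction, not anything from the literature): pick some one-step unitary U, and compare evolving by matrix multiplication against summing amplitudes over every possible sequence of basis states the qubit could pass through. The two agree because summing over intermediate basis states is just inserting complete sets of states, which is the discrete analog of Feynman's sum over histories.

import numpy as np
from itertools import product

# A toy "discrete path integral" for a single qubit. Assume (my assumption,
# nothing canonical) that one time step of evolution is given by a fixed
# 2x2 unitary U. The amplitude to go from basis state |a> to |b> after N
# steps can be computed two ways:
#   (1) matrix multiplication: (U^N)[b, a]
#   (2) a sum over all "paths" through the computational basis, where each
#       path contributes the product of its one-step matrix elements.

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)  # sample unitary

N = 5          # number of time steps
a, b = 0, 1    # initial and final basis states

# (1) direct operator evolution
amp_matrix = np.linalg.matrix_power(U, N)[b, a]

# (2) sum over all intermediate basis-state paths
amp_paths = 0
for path in product([0, 1], repeat=N - 1):
    states = (a,) + path + (b,)
    term = 1
    for i in range(N):
        term *= U[states[i + 1], states[i]]
    amp_paths += term

print(amp_matrix, amp_paths)  # the two amplitudes agree

The interesting question is what plays the role of the action here: each path's amplitude is a product of matrix elements of U, rather than e^{iS} for anything that obviously looks like a Lagrangian.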

Oh, one more thing before we move on to the quantum suicide booth. While reading through some Wikipedia pages related to the path integral recently, I found something pretty interesting and shocking. Apparently, there is some kind of notion of non-commutativity even in the classical version of the path integral used to compute Brownian motion. In this version of the path integral, you use stochastic calculus (also known as Ito calculus, I think?) to find the probabilistic behavior of a random walk. (And here again, we find a connection with Wall Street: this is how the Black-Scholes formula for options pricing is derived!) I had stated in a previous part of this series that non-commutativity was the one thing that makes quantum mechanics special, and that there is no classical analog of it. But apparently I'm wrong, because some kind of non-commutativity of differential operators does show up in stochastic calculus. But I've tried to read how it works, and I must confess I don't understand it much. They say that you get a commutation relationship like [x, k] = 1 in the classical version of the path integral. And then in the quantum version, where there's an imaginary i in the exponent instead of a negative sign, this becomes [x, k] = i, or equivalently, [x, p] = ih. So apparently both non-commutativity and the uncertainty principle are directly derivable from stochastic calculus, whether it's the quantum or the classical version. This would indicate that really the *only* difference between classical and quantum is the factor of i. But I'm not sure that's true if looked at from the Koopman-von Neumann formalism. Clearly I have a lot more reading and thinking to do on this!
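As far as I can tell (and this is just my own way of making sense of it, not how the Wikipedia article phrases it), the non-commutativity in question boils down to the fact that "multiply by x" and "differentiate with respect to x" don't commute, and their commutator is the identity; the k in [x, k] = 1 is essentially playing the role of a derivative operator, up to sign conventions. A quick sanity check in sympy:

import sympy as sp

# Sanity check that [d/dx, x] acts as the identity on an arbitrary function.
# (My own illustration of the classical commutator mentioned above; the
# quantum version just picks up the factor of i, giving [x, p] = i*hbar.)
x = sp.symbols('x')
f = sp.Function('f')(x)

d_of_xf = sp.diff(x * f, x)   # (d/dx)(x f) = f + x f'
x_of_df = x * sp.diff(f, x)   # x (d/dx) f  = x f'

print(sp.simplify(d_of_xf - x_of_df))   # prints f(x): the difference is the identity operator

Whether that's really all there is to the stochastic-calculus version, I'm honestly not sure; that's part of the reading I still need to do.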
