spoonless: (Default)
So here we come finally to the quantum suicide booth. I'm getting rid of the "instrumentalism vs realism" title now because I think we've strayed far enough from the original topic that this is pretty tangential. Truth be told, that whole series ended up being a lot of tangential stuff and not a whole lot directly related to the debate between instrumentalism and realism. That's the way things go I guess. But everything is connected, and in surprisingly intricate ways.

So there's this classic thought experiment called the "quantum suicide" experiment, originated by AI researcher Hans Moravec, and developed further by cosmologist Max Tegmark. It's a derivative of the Schrodinger's Cat and Wigner's Friend thought experiments, but not quite as ancient or well known.

The Quantum Suicide experiment is the closest thing we have to an experimental test that distinguishes between Copenhagen and Many Worlds. However, it is very one-sided: if Many Worlds is correct, you can become certain that it is (though only the person performing the experiment can appreciate that certainty; nobody else in the world can). But if Copenhagen is correct, you will die, and so you will never know which is correct.

The experiment goes something like this: Construct a device which connects the outcome of a truly random quantum observable, similar to the one used in the Schrodinger's Cat experiment, to a lethal device that kills the experimenter. Have the experimenter (you, let's say) repeat this many times in a row. Assuming Many Worlds is true, what should you expect to happen? Well, in most of the worlds you will die, but there is guaranteed to be at least one world where you survive. You won't experience anything in the worlds where you die, so the rational thing to do is to expect that your next experience will be seeing yourself miraculously survive the experiment many times in a row. If you perform the experiment and witness this, as expected, then you can be pretty sure Many Worlds is true and Copenhagen is false. Unfortunately, if anyone else witnesses this, without being in the machine themselves, you will be unable to convince them that it wasn't just crazy luck or some kind of divine intervention, since neither of the 2 theories predicts that they should see you survive that many times in a row. (Many Worlds only predicts that you should see yourself survive, from your own perspective.)

There's a variant of this which leads to something called Quantum Immortality, but that's considered much more controversial, and depends on other assumptions, so we won't get into that. (My personal feelings are that quantum suicide makes sense and would work assuming Many Worlds is correct, but that Quantum Immortality is an exaggeration/distortion of the thought experiment which leads to nonsensical conclusions.)

At work, we've talked about a lot of different versions of the quantum suicide experiment, because it's pretty relevant to themes we want to explore in the movie we're making (or rather, discussing making). And the most interesting version that's come up is one I've decided to call the "coin operated quantum suicide booth". It was proposed by my coworker as a way of attacking some of the assumptions I hold about observer selection (mostly stemming from Nick Bostrom's Self Sampling Assumption). We argued about it for a while, and at some point, he had succeeded in convincing me that I was probably wrong. But then after a lot more thinking about it, I changed my mind and decided I had been right all along, and do not have to give up the Self Sampling Assumption or any of my views on the anthropic principle.

In this version, there is a booth which contains the above-described quantum suicide device. But there is also a coin that gets automatically flipped once you step into the booth. If it comes up heads, then after 10 minutes of waiting in the booth, the quantum suicide experiment is performed on you 1 million times in a row in rapid succession. If it comes up tails, nothing else happens. Either way, after the 10 minutes the door opens and you're allowed to walk out (if you're still alive).

Now imagine that your friend bets you that the coin will land on heads. What odds should you be willing to take on that bet? This hinges on what you think the chances are that you'll win the bet, assuming you survive and walk out of the booth. A related way of asking it is: what do you expect to see happen after you walk out of the booth? There are strong arguments to support several things being true here: 1) you should be extremely certain that if you end up surviving, you will remember the coin coming up tails and you will win the bet; 2) you should expect that when you walk into the booth, if you look at the coin, you will see it come up heads with 50% chance and tails with 50% chance; 3) whether you look at the coin should not affect in any way the chances that it will come up tails, or that you will remember it having come up tails; and 4) if you do look at the coin while you're still in the booth, and you see heads, you should expect with 100% probability that after you walk out of the booth, you'll remember seeing heads and lose the bet.
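To put a number on claim 1, here's a minimal Monte Carlo sketch in Python (my own toy model, with the million rounds scaled down to 10 so that heads-survivors actually show up in a finite sample). It counts branches by brute force and asks: of the experimenters who walk out alive, what fraction remember tails?

    import random

    def run_booth_trials(n_trials=1_000_000, n_rounds=10, seed=42):
        """Brute-force branch counting for the coin-operated booth.

        Heads: survive only if all n_rounds independent 50/50 quantum
        outcomes come up safe. Tails: nothing happens; always survive.
        """
        rng = random.Random(seed)
        survivors = tails_survivors = 0
        for _ in range(n_trials):
            if rng.random() < 0.5:  # heads
                if all(rng.random() < 0.5 for _ in range(n_rounds)):
                    survivors += 1
            else:  # tails
                survivors += 1
                tails_survivors += 1
        return survivors, tails_survivors

    survivors, tails = run_booth_trials()
    print(f"P(remember tails | survive) ~ {tails / survivors:.5f}")
    # Analytically: 1 / (1 + 2**-10) ~ 0.99902, and it approaches 1
    # as n_rounds grows toward the million of the thought experiment.

With the full million rounds, the surviving-heads branch is so thin that, conditional on surviving, winning the bet is essentially certain, which is claim 1.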

The interesting thing about this thought experiment is that the 4 claims above do not seem consistent with each other at first glance. And yet, I do think they are consistent, and if Many Worlds is true and there is any consistent notion of "what to expect to see next", then it implies they are all true. Before you walk into the booth, you should expect to win the bet. This should be true regardless of whether you intend to look at the coin. If you don't look at the coin, there's nothing more to be said: in nearly all of the worlds where you survive, you win the bet.

If you do choose to look at the coin while you're in the booth, there's a 50% chance you'll see heads and a 50% chance you'll see tails. If you see tails, then you know you have won the bet. If you see heads, then you're in a really weird situation. You should now expect that you will *lose* the bet (even though before you looked at the coin, you thought you would win it). A more nuanced way to put it is that you should expect to be killed in nearly all of the branches, and in the only one where you survive, you'll lose the bet. So it's useful to be prepared for that one case where you do survive... be prepared to pay up!

It seems as though, if you were really in this situation, you might be a bit afraid to look at the coin, as if looking would ruin your chances of winning the bet. Because then there's a 50% chance you'll be in this weird situation, thinking you're either going to die or lose the bet; whereas if you don't look at the coin, you'll never be in that situation. But the illusion here is that looking at the coin has somehow influenced what's going to happen. The only thing it really affects is whether you know you're about to die and/or lose the bet. If you don't know, then you don't have to deal with as many weird feelings, although you still know that a lot of your future selves (roughly half of them) will die, and that in some very obscure branch of the multiverse, you will survive but lose the bet. (For simplicity, we can assume that the outcome of the coin itself is chosen based on some other quantum random variable, which could have been determined far ahead of time. If it's a purely classical coin, then I don't think it changes anything about this analysis, but it makes the whole thing a bit more difficult to think about, since the outcome is pseudorandom rather than purely random.)

I'd like to write a bit more about the mind/body problem and the anthropic principle. Clearly things like cloning and the quantum suicide experiments described here tie in heavily to that, and call into question how to tie consciousness (in the sense of subjective personal experience or identity) to physical copies of our bodies. If nobody has written up a paper on this coin operated quantum suicide booth, I'm thinking it might be worthwhile.
spoonless: (Default)
I realized after writing part 5 that by continuing on to the anthropic principle and observer selection effects, I had skipped over a different issue I planned to write more about: how statistical mechanics and quantum mechanics are actually the same thing. I think I covered most of what I'd wanted to cover in part 4, but then forgot to finish the rest in part 5. However, thinking more about it has led to lots more thoughts, which make all of this more complicated and might change my perspective somewhat from what I said earlier in this series. So let me just briefly note some of the things I was going to talk about there, and what complications have arisen. Later, we'll get to the quantum suicide booth stuff.

The first time I used Feynman diagrams in a physics class was, believe it or not, not in Quantum Field Theory, where they are used most frequently, but in graduate Statistical Mechanics, which I took the year before. We weren't doing anything quantum, just regular classical statistical mechanics. But we used Feynman diagrams for it! How is this possible? Because the path integral formulation of quantum mechanics looks nearly identical mathematically to the way in which classical statistical mechanics is done. In both cases, you have to integrate an exponential function over a set of possible states to obtain an expression called the "partition function". Then you take derivatives of that to find correlation functions and expectation values of random variables (known as "operators" in quantum mechanics), and to compute the probability of transitions between initial and final states. This might even be the same reason why the Schrodinger Equation is sometimes used by Wall Street quants to predict the stock market, although I'm not sure about that.

One difference between the two approaches is what function gets integrated. In classical statistical mechanics, it's the Boltzmann factor e^{-E/kT} for each energy state. You sum this over all accessible states to get the partition function. In Feynman's path integral formalism for quantum mechanics, you instead integrate e^{iS}, where S is the action (the Lagrangian for a specific path, integrated over time), over all possible paths connecting an initial and final state. Another difference is what you get out. Instead of the partition function, in quantum mechanics you get a probability amplitude, whose magnitude then has to be squared to be interpreted as a transition probability.
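To make the parallel explicit, here is the standard dictionary (a sketch in the same loose notation as above; the substitution is the usual "Wick rotation"):

    quantum:        A = ∫ D[x(t)] e^{iS},      S = ∫ L dt
    let t -> -iτ:   A = ∫ D[x(τ)] e^{-S_E},    S_E = ∫ L_E dτ
    stat mech:      Z = Σ e^{-E/kT}   (sum over states)

After the rotation to imaginary time, the oscillating weight e^{iS} becomes a damping weight e^{-S_E}, and the path integral is literally a partition function in which each path plays the role of a microstate and the Euclidean action plays the role of E/kT.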

I was going to write about how these are very close to the same thing, but as I read more in anticipation of writing this, I got more confused about how they fit together. In the path integral for quantum mechanics, you can split the integral up into a series of tiny time intervals, integrating over each one separately, then take the limit as the size of these time intervals approaches zero. When you look at one link in the chain, you find that you can split the factor e^{iS} into a product of 2 factors. One is e^{ip \delta x}, which performs a Fourier transform, and the other is e^{-iHt}, which tells you how to time-evolve an energy eigenstate in quantum mechanics into the future. The latter factor can be viewed as the equivalent of the Schrodinger Equation, and this is how Schrodinger's Equation is derived from Feynman's path integral. (There's a slight part of this I don't quite understand, which is why energy eigenstates and momentum eigenstates seem to be conflated here. The Fourier transform converts the initial and final states from position into momentum eigenstates, but in order to use the e^{-iHt} factor it would seem you need an energy eigenstate. These are the same for a "free" particle, but not if there is some potential energy source affecting the particle! But let's not worry about that now.)

So after this conversion is done, it looks even more like statistical mechanics. Because instead of summing over the exponential of the Lagrangian, we're summing over the exponential of the Hamiltonian, whose eigenvalues are the energies being summed over in the stat mech approach. However, there are still 2 key differences.

First, there's the factor of "i". e^{-iEt} has an imaginary exponent, while e^{-E/kT} has a negative real exponent. This makes a pretty big difference, although sometimes that difference is made to disappear by using the "imaginary time" formalism, where you replace t with an imaginary time variable (this is also known as "analytic continuation to Euclidean time"). There's a whole mystery about where the i in quantum mechanics comes from, and this seems to be the initial source--it's right there in the path integral, where it's missing in regular classical statistical mechanics. This is what causes interference between paths, which you otherwise wouldn't get.

The second remaining difference is that you have a t instead of 1/kT (time instead of inverse temperature). I've never studied the subject known as Quantum Field Theory at Finite Temperature in depth, but I've been passed along some words of wisdom from it, including the insight that if you want to analyze a system of quantum fields at finite temperature, you can do so with almost the same techniques you use for zero temperature, so long as you pretend that time is a periodic variable that loops around every 1/kT seconds, instead of continuing infinitely into the past and the future. This is very weird, and I'm not sure it has any physical interpretation; it may just be a mathematical trick. But nevertheless, it's something I want to think about more and understand better.
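Incidentally, the periodic-time trick looks a little less mysterious when written as a trace (a standard sketch, not tied to any particular textbook): the thermal partition function is

    Z = Tr e^{-βH} = Σ_n <n| e^{-βH} |n>,    with β = 1/kT

and each term <n| e^{-βH} |n> is the amplitude to start in state |n>, evolve for imaginary time β, and return to the same state |n>. Expressed as a path integral, that's a sum over paths that come back to their starting point after Euclidean time β; in other words, paths that are periodic in imaginary time with period 1/kT. That's at least where the periodicity comes from mathematically, whatever its physical interpretation.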

Another thing I'd like to think about more, in order to understand the connection here, is what happens when you completely discretize the path integral. That is, what if we pretend there's no such thing as continuous space, and we just want to consider a quantum universe consisting solely of a finite number of qubits. Is there a path integral formulation of this universe? There's no relativity here, nor any notion of space or spacetime. But as with any version of quantum mechanics, there is still a notion of time. So it should be possible. And the path integral usually used (due to Dirac and Feynman) should be the continuum limit of this. I feel like I would understand quantum mechanics a lot more if I knew what the discrete version looked like.
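Here's a toy version of what I have in mind (my own construction, in Python): for a single qubit, chop the evolution e^{-iHT} into n small steps and insert a complete set of basis states between each step. The "sum over all paths" is then a sum over all 2^(n-1) sequences of basis states, and it reproduces ordinary matrix multiplication exactly:

    import itertools
    import numpy as np

    # Toy single-qubit Hamiltonian (assumption: H = Pauli-X), total time T.
    H = np.array([[0, 1], [1, 0]], dtype=complex)
    T, n_steps = 1.0, 8
    dt = T / n_steps

    # One-step propagator U = e^{-iH dt}, built by diagonalizing H.
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T

    # "Path integral": sum, over every discrete path of basis states from
    # |0> at t=0 to |0> at t=T, the product of one-step hop amplitudes.
    start = end = 0
    amplitude = 0j
    for path in itertools.product([0, 1], repeat=n_steps - 1):
        states = (start,) + path + (end,)
        amp = 1 + 0j
        for a, b in zip(states, states[1:]):
            amp *= U[b, a]  # <b| e^{-iH dt} |a>
        amplitude += amp

    direct = np.linalg.matrix_power(U, n_steps)[end, start]
    print(amplitude, direct)  # agree: the path sum IS matrix multiplication

The continuum path integral is just this with position eigenstates in place of the qubit basis states, and the limit of infinitely many steps taken.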

Oh, one more thing before we move on to the quantum suicide booth. While reading through some Wikipedia pages related to the path integral recently, I found something pretty interesting and shocking. Apparently, there is some kind of notion of non-commutativity even in the classical version of the path integral used to compute Brownian motion. In this version of the path integral, you use stochastic calculus (also known as Ito calculus, I think?) to find the probabilistic behavior of a random walk. (And here again, we find a connection with Wall Street--this is how the Black-Scholes formula for options pricing is derived!) I had stated in a previous part of this series that non-commutativity was the one thing that makes quantum mechanics special, and that there is no classical analog of it. But apparently, I'm wrong, because some kind of non-commutativity of differential operators does show up in stochastic calculus. I've tried to read how it works, and I must confess I don't understand it much. They say that you get a commutation relation like [x, k] = 1 in the classical version of the path integral. And then in the quantum version, where there's an imaginary i in the exponent instead of a negative sign, this becomes [x, k] = i or, equivalently, [x, p] = ih. So apparently both non-commutativity and the uncertainty principle are directly derivable from stochastic calculus, whether it's the quantum or the classical version. This would indicate that really the *only* difference between classical and quantum is the factor of i. But I'm not sure that's true if looked at from the Koopman-von Neumann formalism. Clearly I have a lot more reading and thinking to do on this!
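For whatever it's worth, the classical commutator is less exotic than it sounds if k is read as essentially the derivative operator d/dx: acting on any test function f,

    (d/dx)(x f) - x (d/dx)(f) = f + x f' - x f' = f,    i.e.  [d/dx, x] = 1

which is just the product rule. Setting p = -ih d/dx then gives [x, p] = ih. (That's my own gloss, not the Wikipedia derivation; the stochastic-calculus version presumably dresses this same algebra up in Ito language.)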
spoonless: (Default)
I wrote two posts in January of this year ("instrumentalism vs realism" and "instrumentalism vs realism part 2") and wrote "to be continued..." at the end of part 2. The main point of both of them was really to voice my thoughts on the Copenhagen Interpretation of quantum mechanics, to see if I could make any sense of it, and to compare it to the Many Worlds Interpretation, which has always seemed easier for me to understand.

Several new things have happened since then. I started thinking along these lines to try to come up with some material for the microtalk I gave at FreezingWoman in March 2015. I ended up deciding to avoid the dicey subject of realism vs instrumentalism, and for the most part avoided the entire topic of quantum mechanics, focusing instead on the question of "what is the universe made of?" and keeping to things I feel I understand well, such as relativity and some aspects of quantum field theory. By the time I finished putting it together, I realized that I had a pretty good case that something more like neutral monism is the right way to look at metaphysics, rather than materialism. The idea that metaphysics is even meaningful sort of presumes realism over instrumentalism. And yet because I defined "neutral monism" in my talk as "none of the above" (metaphysical theories), I felt like I left it a little bit open that perhaps instrumentalism is true after all and we just need to give up metaphysics entirely.

After returning from Freezing Woman, I spent a month and a half expanding my 5 minute microtalk into a 15-minute video presentation, which I released on Vimeo and linked to on Facebook and Google+. A handful of my friends viewed it and gave me positive feedback, some of them resharing it, but overall it didn't get a lot of attention. Then later, I found out that someone on Youtube had downloaded the Vimeo video and uploaded it to their Youtube channel, where it did get a lot of attention. (13,467 views, 164 upvotes, and only 5 downvotes... with lots of positive comments from people, many asking if there will be a sequel!):

Materialism and Beyond: What is Our Universe Made Of?

This weekend I uploaded it to my own Youtube channel, which I had been meaning to do (apparently, hardly anyone is on Vimeo; I originally chose it primarily because I don't like the idea of ads being inserted in the middle of my video). So far not much action there either, but we'll see I guess.

I can't remember when it was, but at some point this year (maybe around May?) I ran across a *really* interesting post that my adviser in graduate school, Tom Banks, made defending the Copenhagen Interpretation of quantum mechanics:

http://www.preposterousuniverse.com/blog/2011/11/16/guest-post-tom-banks-on-probability-and-quantum-mechanics/

(There's another version of it hosted by Discover magazine, but the mathematical equations don't show up right there.) I was shocked that this had been online since 2011 and I somehow managed to not find it until 2015. Not only because it is written by someone I knew personally and hold in high regard, but because it basically explains almost everything I've ever wanted to understand about quantum mechanics in one shot. I often wanted to ask him about this subject, but I was always too shy to do it. I guess I felt like to him, it might be considered a waste of time. But if he could have summed it up this well in one sitting, I would surely have asked and gotten a lot of benefit out of it. Sadly, I've only found it now, long after quitting physics.

So, it took me a while to understand everything he says there. He does make a lot of simple mistakes in his explanation, which confuses things. (For example, he uses the term "independent" several times to mean "mutually exclusive", which anyone--including him--who knows anything about probability knows are two very different things.) Nevertheless, there is a core of what he's saying that turns out to be very important. At first when I read it I sensed that, but it hadn't fully sunk in. Since then, I have read a lot more, gotten into some discussions and debates with people coming from different perspectives on this (one venue being a mailing list I got invited to as a consequence of people liking my video), and mulled it over in my head. And gradually, it sunk in, and I feel like I have now absorbed the message. And it's a really important message that I had sort of suspected before but hadn't really understood.

This week I was thinking through this stuff again and went back to reread the Wikipedia page on the Koopman-von Neumann (KvN) formulation of classical mechanics (for about the third time since reading my advisor's post, which is where I first heard of KvN). In connection with the mailing list I'm on, I had also just been reading some more about Quantum Darwinism and Zurek's existential interpretation of QM. And suddenly, halfway through the week, I felt like everything clicked. After all of these years, I finally understand Copenhagen. And it's a lot more coherent than I had imagined.

This doesn't mean I have converted now to a Copenhagenist. I'm still not sure whether I prefer Copenhagen, Many Worlds, or something in between. (And almost certainly, the right answer is somewhere in between, at least compared to what Bohr's and Everett's original ideas were.) And while I call my advisor a Copenhagenist, I'm not even sure he uses that term. I think his view is a modern version of Copenhagen, one which includes all of the insights that have been gleaned since the time of Heisenberg and Bohr. (Although I think he denies that those new insights have significantly changed anything about the interpretation.) I've also read a bit more about consistent histories lately and decided that there are slight differences between it and Copenhagen; it's not just a clarification of Copenhagen, because in some ways it does away with the idea of quantum measurement (or at least makes it less central to the theory). I still think QBism is a form of Copenhagen, although some of its advocates seem to think it has features which distinguish it from Copenhagen.

At any rate, using the broad definition of Copenhagen which I have always used (to include modern versions of it rather than a more narrow one focusing strictly on Bohr and Heisenberg's writings), I'd like to try to sum up the new insights I've absorbed. This was my intention in writing this post, but since I've only introduced that intention and not gotten there yet, I'll start my summary in part 4.

To be continued...
spoonless: (orangegray)
When I wrote part 9, I was feeling pretty confused and wasn't sure I had really made any progress on answering the basic question I had set out to answer with this series. But I'm pleased to announce that within 24 hours after writing it, I started thinking, and piece by piece, it all came together. I think I have pretty much solved the mystery, aside from some minor threads that might still need to be wrapped up. (I just didn't get a chance to write it down until today.) That was way faster than I'd imagined it would take.

The main thing I wasn't seeing is how mixing (whether the ordinary process of two gasses mixing in a box, or the more esoteric quantum measurement process) relates to wandering sets. And the lynchpin that was missing, that holds everything together, and explains how mixing relates to wandering sets, is "what is the identity of the attractor?"

I realized that if I could pinpoint what the attractor was in the case of mixing, then I would see why mixing is a wandering set (and hence, a dissipative process). Soon after I asked myself that question, the answer became pretty obvious. The attractor in the case of mixing--and indeed, in any case where you're transitioning from a non-equilibrium state to thermodynamic equilibrium--is the macrostate with maximal entropy. In other words, the macrostate that corresponds to "thermodynamic equilibrium".

I think the reason I wasn't seeing this is because I was thinking too much about the microstates. But from the point of view of a microscopic description of physics, any closed system is always conservative--all of the physics is completely reversible. You can only have dissipation in two ways. One is fairly trivial and uninteresting, and that's if the system is open and energy is being sucked out of it. Sucking out energy from a system reduces its state space, so from within that open system, ignoring the outside, you start in any corner of a higher dimensional space and then you get pulled into an attractor that represents the states which have lower total energy. If energy keeps getting sucked out, it will eventually all leave and you'll just be left in the ground state (which would in that case be the attractor).

But there's a much more interesting kind of dissipation, and that's when you coarse grain a system. If you don't care about some of the details of the microscopic state, but only about the big picture, then you can use an approximate description of the physics; you can just keep track of the macrostate. And that's where the concept of entropy comes into play, and that's when even closed systems can involve dissipation. There's no energy escaping anywhere, but if you start in a state that's not in thermodynamic equilibrium, such as two gasses that aren't mixed at all, or that are only halfway mixed, or only partially mixed anywhere in between... from the point of view of the macrostate space, you'll gradually get attracted toward the state of maximal entropy. So it's the macrostate phase space where the wandering sets come in, in this case. Not the microstates! The evolution of the macrostate involves a dissipative action, meaning it contains wandering sets; and it is an irreversible process because you don't have the microstate information that would be required in order to know how to reverse the process.
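Here's a quick numerical sketch of that attractor in action (Python, my own toy model, essentially the Ehrenfest urn model): N particles hop independently between the two halves of a box. The microscopic hopping rule is symmetric, but the macrostate, the count of particles on the right, gets pulled toward N/2, the maximum-entropy value, and then just jitters around it:

    import random

    def mix(n_particles=1000, n_steps=20000, seed=1):
        """Start with every particle on the left; hop one at random per step."""
        rng = random.Random(seed)
        n_right = 0  # macrostate: number of particles on the right side
        snapshots = []
        for step in range(n_steps):
            # A randomly chosen particle is on the right with prob n_right/N;
            # whichever side it's on, it moves to the other side.
            if rng.random() < n_right / n_particles:
                n_right -= 1
            else:
                n_right += 1
            if step % 2000 == 0:
                snapshots.append(n_right)
        return snapshots

    print(mix())  # climbs from 0 toward ~500 and then fluctuates there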

So how does this work in the case of a quantum measurement? It's really the same thing, just another kind of mixing process. Let's say you have a quantum system that is just a single spin (a "qubit") interacting with a huge array of spins comprising the "environment". Before this spin interacts, it's in a superposition of spin-up and spin-down. It is in a pure state, similar to the state where two gasses are separated by a partition. Then you pull out the partition (in the quantum case, you allow the qubit to interact with its environment, suddenly becoming entangled with all of the other spins). In either case, this opens up a much larger space, increasing the dimensionality of the microstate space. Now in order to describe the qubit, you need a giant matrix of correlations between it and all of the other spins. As with the mixing case I described earlier, you could use a giant multidimensional Rubik's cube to represent this. The only difference is that classically, each dimension would be a single bit, "1" or "0", while this quantum mechanical mixing process involves a continuous space of phases (sort of ironic that quantization in this case turns something discrete into something continuous). If this is confusing, just remember that a qubit can be in any superposition of 1 and 0, and therefore it takes more information to describe its state than a classical bit requires.

But after the interaction, we just want to know what state the qubit is in; we don't really care about all of these extra correlations with the environment, and they are random anyway. They are the equivalent of thermal noise, non-useful energy. So we shift from our fine-grained description to a more coarse-grained one. We define the macrostate as just the state of the single qubit, averaged over all of the possibilities for the environmental spins. Each one involves a sum over its up and its down state. Summing over all of those different spins is accomplished by taking a partial trace of the density matrix (tracing over the environment's degrees of freedom), which I mentioned in part 9. Tracing over the environment is how you coarse grain the system, averaging over its effects. As with the classical mixing case, putting this qubit in contact with the environment suddenly puts it in a non-equilibrium state. But if you let it settle down for a while, it will quickly reach equilibrium. And the equilibrium state, the one with the highest entropy, is one where all of the phases introduced are essentially random, i.e. there are no special extra correlations between them. So the microstate space is a lot larger, but there is one macrostate that the whole system is attracted to. And in that macrostate, when you trace over the spins in the environment, you wind up with a single unique state for the qubit that was measured. And that state is a "mixed state": it's no longer a coherent superposition of "0" and "1" but a classical probability distribution over "0" and "1". The off-diagonal elements of the density matrix have gone to zero. So while the microstate space has increased in dimensionality, the macrostate space has actually *decreased*! This is why I was running into so much confusion. There's both an increase in dimensionality AND a decrease in dimensionality; it just depends on whether you're asking about the space of microstates or the space of macrostates.
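This tracing step is easy to watch numerically (a Python/numpy sketch of my own toy model: the qubit's |1> branch imprints a random phase on each of 12 environment spins, and then we trace the environment out):

    import numpy as np
    from functools import reduce

    rng = np.random.default_rng(0)
    n_env = 12

    # Environment as seen by the |0> branch: every spin in (|0>+|1>)/sqrt(2).
    plus = np.array([1, 1]) / np.sqrt(2)
    env0 = reduce(np.kron, [plus] * n_env)
    # As seen by the |1> branch: each spin picks up a random phase.
    env1 = reduce(np.kron, [np.array([1, np.exp(1j * th)]) / np.sqrt(2)
                            for th in rng.uniform(0, 2 * np.pi, n_env)])

    # Joint state (|0>|env0> + |1>|env1>)/sqrt(2), as a 2 x 2^n_env array.
    psi = np.stack([env0, env1]) / np.sqrt(2)

    # Tracing over the environment leaves the qubit's 2x2 density matrix.
    rho = psi @ psi.conj().T
    print(np.round(np.abs(rho), 4))
    # The diagonal stays at 0.5, but the off-diagonal coherence is
    # |<env1|env0>|/2, which shrinks exponentially as n_env grows.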

Mystery solved!

I'm very pleased with this. While I sort of got the idea a long time ago listening to Nima Arkani-Hamed's lecture on this, and I got an even better idea from reading Leonard Susskind's book, it really is all clear to me now. And I have to thank wandering sets for this insight (although in hindsight, I should have been able to figure it out without that).

I would like to say "The End" here, but I must admit there is one thread from the beginning--Maxwell's Demon--which I never actually wrapped up. I suspect that my confusion there, about why erasure of information corresponds to entropy increase, and exactly how it corresponds, is directly related to my confusion between macrostate and microstate spaces. So I will write a tenative "The End" here, but may add some remarks about that in another post if I think of anything more interesting to say. Hope you enjoyed reading this series as much as I enjoyed writing it!

The End
spoonless: (orangegray)
I have to admit, in trying to tie all of this together, I have realized that there still seems to be something big that I don't understand about the whole thing. And there is at least one minor mistake I should correct in part 8. So from here on out we're treading on thin ice; I'm doing something more akin to explaining what I don't understand than describing a solution.

It seemed that if I could understand wandering sets, then all of the pieces would fit together. And it still seems that way, although the big thing I still don't get about wandering sets is how they relate to mixing. And that seems crucial.

The minor mistake I should correct in part 8 is my proposed example of a completely dissipative action. I said you could take the entire space minus the attractor as your initial starting set, and then watch it evolve into the attractor. But this wouldn't work because the initial set would include points that are in the neighborhood of the attractor. However, a minor modification of this works--you would just need to start with a set that excludes not only the attractor but also the neighborhood around it.

In thinking about this minor problem, however, I realized there are also some more subtle problems with how I presented things. First, I may have overstated the importance of dimensionality. In order to have a completely dissipative action, you could really just use any space which has an attractor that is some subset of that space, one which pulls any points outside of it into the attractor basin. The subset wouldn't necessarily have to have a lower dimension; my intuition is that in thermodynamics the lower-dimensional case is the usual one, although I must admit I'm not sure, and I don't want to leave out any possibilities.

This leads to a more general point here that the real issue with irreversibility need not be stated in terms of dimension going up or down--a process is irreversible any time there is a 1-to-many mapping or a many-to-1 mapping. So a much simpler way of putting the higher/lower dimensionality confusion on my part is that I often am not sure whether irreversible processes are supposed to time evolve things from 1-to-many or from many-to-1. Going from a higher to lower dimensional space is one type of many-to-1 mapping, and going from lower to higher is one type of 1-to-many mapping. But these are not the only types, just types that arise as typical cases in thermodynamics, because of the large number of independent degrees of freedom involved in macroscopic systems.

Then there's the issue of mixing. I still haven't figured out how mixing relates to wandering sets at all. Mixing very clearly seems like an irreversible process of the 1-to-many variety. But the wandering sets wiki page seems to be describing something of the many-to-1 variety. However, they say at the top of the page that wandering sets describe mixing! I still have no idea how this could be the case. But now let's move on to quantum mechanics...

In quantum mechanics, one can think of the measurement process in terms of a quantum Hilbert space (sort of the analog of state space in classical mechanics) where different subspaces (called "superselection sectors") "decohere" from each other upon measurement. That is, they split off from each other, leading to the Many Worlds terminology of one world splitting into many. Thinking about it this way, one would immediately guess that the quantum measurement process therefore is a 1-to-many process. 1 initial world splits into many different worlds. However, if you think of it more in terms of a "collapse" of a wavefunction, you start out with many possibilities before a measurement, and they all collapse into 1 after the measurement. So thinking about it that way, you might think that quantum physics involves the many-to-1 type of irreversibility. But which is it? Well, this part I understand, mostly... and the answer is that it's both.

The 1-to-many and many-to-1 perspectives can be synthesized by looking at quantum mechanics in terms of what's called the "density matrix". Indeed, you need the density matrix formulation in order to really see how the quantum version of Liouville's theorem works. In the density matrix formulation of QM, instead of tracking the state of the system using a wavefunction--which is a vector whose components can represent all of the different positions of a particle (or field, or string) in a superposition--you use a matrix, which is sort of like the 2-dimensional version of a vector. By using a density matrix instead of just a vector to keep track of the state of the system, you can distinguish between two kinds of states: pure states and mixed states. A pure state is a coherent quantum superposition of many different possibilities. A mixed state is more like a classical probability distribution over many different pure states. A measurement process in the density matrix formalism, then, is described by a mixing process that evolves a pure state into a mixed state. This happens due to entanglement between the original coherent state of the system and the environment. When a pure state becomes entangled in a random way with a large number of degrees of freedom, this is called "decoherence". What was originally a coherent state (nice and pure, all the same phases) is now a mixed state (decoherent, lots of random phases, too difficult to disentangle from the environment).
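For a single qubit this distinction is completely explicit (my example, in the 0/1 basis). The pure superposition (|0> + |1>)/sqrt(2) has density matrix

    rho_pure  =  1/2 [ 1  1 ]
                     [ 1  1 ]

while a classical 50/50 coin flip between |0> and |1> has

    rho_mixed =  1/2 [ 1  0 ]
                     [ 0  1 ]

The diagonals agree, so both give 50/50 statistics if you measure in the 0/1 basis. The difference is entirely in the off-diagonal entries, which carry the relative phase responsible for interference; decoherence is precisely the decay of those off-diagonal entries.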

What happens is that you originally represent the system plus the environment by a single large density matrix. And then, once the system becomes entangled with the environment, the matrix decomposes into different superselection sectors. These are different sub-matrices, each of which represents a different pure state. The entire matrix is then seen as a classical distribution over the various pure states. As I began writing this, I was going to say that because it was a mixing process, it went from 1-to-many. But now that I think of it, because the off-diagonal elements between the different sectors end up being zero after the measurement, the final space is actually smaller than the initial space. And I think that's even before you decide to ignore all but one of the sectors (which is where the "collapse" part comes in, in collapse-based interpretations). From what I recall, the off-diagonal elements wind up being exactly zero--or so close to zero that you could never tell the difference--because you assume the way in which the environment gets entangled is random. As long as each phase is random (or more specifically, as long as they are uncorrelated with each other), when you sum over a whole lot of them at once, they add up to zero--although I'd have to look this up to remember the details of how that works.
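That last claim, at least, is easy to sanity-check numerically (a 4-line Python check, my addition): the average of N random unit phases has magnitude around 1/sqrt(N), which goes to zero as the number of environmental degrees of freedom grows.

    import numpy as np

    rng = np.random.default_rng(0)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, 1_000_000))
    print(abs(phases.mean()))  # ~0.001, i.e. ~1/sqrt(N)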

I was originally going to say that mixed states are more general and involve more possibilities than pure states, so therefore evolving from a pure state to a mixed state goes from 1-to-many, and then when you choose to ignore all but one of the final sectors, you go back from many-to-1, both of these being irreversible processes. However, as I write it out, I remember 2 things. The first is what I mentioned above: even before you pick one sector out, you've already gone from many-to-1! Then you go from many-to-1 again if you throw away the other sectors. And the second thing I remember is that, mathematically, pure states never really do evolve into mixed states. As long as you are applying the standard unitary time evolution operator, a pure state always evolves into another pure state, and entropy always remains constant. However, if there is an obvious place where you can split system from environment, it's traditional to "trace over the degrees of freedom of the environment" at the moment of measurement. And it's this act of tracing that actually takes things from pure to mixed, and from many to 1.

I think you can prove that, from the point of view of inside the system, whether you trace over the degrees of freedom in the environment or not is irrelevant. You'll wind up with the same physics either way, the same predictions for all future properties of the system. It's just a way of simplifying the calculation. But when you do get this kind of massive random entanglement, you wind up with a situation where tracing can be used to simplify the description of the system from that point on. You're basically going from a fine-grained description of the system+environment to a more coarse-grained one. So it's no wonder that this involves a change in entropy. Although whether entropy goes up or down in the system or in the environment+system, before or after the tracing, or before or after you decide to consider only one superselection sector--I'll have to think about that and answer in the next part.

This is getting into the issues I thought I sorted out from reading Leonard Susskind's book. But I see that after a few years away from it, I'm already having trouble remembering exactly how it works again. I will think about this some more and pick this up again in part 10. Till next time...
spoonless: (orangegray)
In parts 2 through 5 I explained a bit about how wandering sets and thermodynamics work. And in part 6 I explained a bit about how quantum mechanics works. Now we can begin to bridge the gap and see how the two different angles from which I've been approaching this question intersect.

One of the biggest confusions I've had in trying to piece this together over the years is in mixing up whether the process of dissipation involves a transition from a higher dimensional space to a lower, or from a lower to a higher. I think it is both depending on how you look at it, but you have to keep straight what space you're talking about and what you mean.

If you look at it from the point of view of quantum mechanics, dissipation comes from the measurement process, which involves projection matrices (or projection "operators" more generally) that take many possibilities and collapse them down to one. It's common to hear people use the word "reduction" in phrases like "the reduction of the state vector" to mean measurement in quantum mechanics. And measurement is the only time something irreversible happens; the rest of the laws of quantum mechanics are entirely reversible. So you would think, intuitively, that a reduction or a collapse involves going from a higher dimensional space to a lower dimensional space. That's what a projection is mathematically. For example, when you walk outside in the sun and it's not directly overhead, you are followed around by a shadow. Your shadow is a 2-dimensional projection of your 3-dimensional self on the ground. A shadow is one of the simplest kinds of projections, but mathematically a projection refers to anything that reduces a higher dimensional object to a lower dimensional image. That's what the measurement operators used in quantum mechanics do, but because they are acting within the quantum Hilbert space, they project a space of ridiculously large dimensionality down to one of slightly lower dimensionality (but usually, still infinite).
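In matrix form, the shadow example is just about the simplest projection there is (my illustration):

    P = [ 1  0 ]
        [ 0  0 ]

Acting on any vector (x, y), P returns (x, 0): infinitely many different inputs get mapped to the same output, so P has no inverse, which is the irreversibility in a nutshell. Note also that P^2 = P; projecting a second time changes nothing beyond the first projection, which parallels the fact that immediately repeating a quantum measurement gives the same result.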

On the other hand, if you look at it from the point of view of thermodynamics, dissipation happens only when entropy increases. The microscopic laws of physics, even in classical mechanics, are completely reversible and non-dissipative. The only irreversibility that comes into play is when the available phase space of a system increases. Let's walk through a concrete example of a mixing process step by step and see why it is irreversible and why it increases entropy.

First, imagine that you just have 3 classical particles in a box. They just bounce around in the box according to Newton's laws of physics. They move in straight lines unless they bounce off of a wall, in which case their angle of reflection equals their angle of incidence, just as a billiard ball bounces off of the wall of a pool table. It's easy to see that these laws are reversible, and that if you applied them backwards, you'd see basically the same thing happening; it's just that all 3 particles would be moving backward along their original paths instead of forward. Nothing weird or spooky or irreversible about it. But now let's conceptually divide that box into a left side and a right side, and keep track of which side each of the 3 particles is on. If the microstate of this system is the exact positions of all 3 particles plus the exact direction each of them is moving in, then let's call the "macrostate" a single number between 0 and 3 that equals how many particles are on the right side of the box. To get at this number, we can construct a simplified microstate space which is a list of 3 booleans specifying which side of the box each particle is on. For example, if particles A and B are on the left side of the box, and particle C is on the right side, our list would be (left,left,right). If they were all on the right side, it would be (right,right,right). The macrostate can be deduced from the microstate (by summing up the number of right's in our list), but the reverse is not true, since some of the macrostates correspond to more than one microstate. For example, the macrostate "2" could be (left,right,right), (right,left,right), or (right,right,left).

The full microstate phase space is what we talked about earlier--it's an 18-dimensional space: 3 times 3 coordinates for position, and 3 times 3 coordinates for momentum. But in order to understand mixing, we only really have to visualize the simplified microstate space based on our list of 3 right/left booleans. In order to do so, you need to picture something that looks like a mini Rubik's cube. A regular Rubik's cube consists of 3x3x3 = 27 cubes (if you include the center cube, which doesn't actually have any colors painted on it, and which you can't actually see). But they also sell "mini" Rubik's cubes that are only 2x2x2 = 8 cubes. They are much easier to solve, but not completely trivial, if I recall. Each of the 8 cubes in a mini Rubik's cube corresponds to one of the 8 microstates of our system: (left,left,left), (left,left,right), (left,right,left), ... etc., ... (right,right,right). Each of the particles can be in one of 2 possible states, but there are 3 particles, so the space is 3-dimensional. But because we're only concerned with left-versus-right, the space is discrete rather than continuous.
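This little state space is small enough to enumerate exhaustively (a short Python sketch, my addition):

    from collections import Counter
    from itertools import product
    from math import log

    # All 8 microstates of 3 particles, each on the 'left' or 'right'.
    microstates = list(product(['left', 'right'], repeat=3))
    macrostates = Counter(state.count('right') for state in microstates)
    for n_right, multiplicity in sorted(macrostates.items()):
        print(f"macrostate {n_right}: {multiplicity} microstates, "
              f"entropy ln({multiplicity}) = {log(multiplicity):.3f}")

    # macrostate 0: 1, macrostate 1: 3, macrostate 2: 3, macrostate 3: 1

The middle macrostates ("1" and "2") have three times the multiplicity, and hence higher entropy, than the all-left and all-right states; with a mole of particles instead of 3, that lopsidedness becomes astronomical, which is the next point.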

Now imagine that instead of 3 particles, we had an entire mole of particles--that is, Avogadro's number of particles, 6.02x10^23 of them. Any quantity of gas that you could fit in an actual box and hold in your hand would realistically have to have at least this number of particles, and probably a lot more! So what happens to our space of states? Now, instead of being 3-dimensional it is 6.02x10^23-dimensional--quite a bit larger. But still, each dimension can only have 2 possible states. Our 3-particle system had only 2^3 = 8 microstates. But this system has an unimaginably large number of microstates: 2^(6.02x10^23) states! Equilibrium for this system means that you have allowed the particles to bounce around long enough that they are nice and randomly mixed (roughly equal numbers on the left and the right). The entropy of any system is simply the natural log of the number of accessible microstates it has. If the system is in equilibrium, then nearly all states are accessible, so the entropy is ln(2^(6.02x10^23)) = 6.02x10^23 x ln(2), which is about 4x10^23. A very small number of those states correspond to most of the particles being on the left, or most being on the right, but these are such a negligible fraction out of the number above that it doesn't change the answer.

But what if we started the system in a state where all of the particles just happened to be on one side of the box? In other words, in the state (left,left,left,left,...,left,left) where there are 6.02x10^23 left's? This is a very special initial condition, similar to the special state the universe started in which I mentioned earlier. Because there is only 1 microstate where all of the particles are on the left, this state is extremely unlikely compared to the state where roughly equal numbers are on the left and on the right. The entropy of this state is just ln(1) = 0. The true entropy of a gas like this is of course more than 0, but for our purposes here, we only care about the entropy associated with its mixing equally on either side of the box. The rest of the entropy is locked up in the much larger microstate phase space mentioned earlier, before we simplified it down to only the part that cares about which half of the box the particles are in.

The main point of all of this, which I'm getting to is... if all of the particles start out on one side of the box, and then later they are allowed to fill the whole box, you've drastically increased the entropy, because there are a lot more possible places for the particles to be. A slightly more complicated version of what we just went through is if you had two different kinds of particles, let's call them blue and red. Imagine all of the red particles started on the left side, and all of the blue particles started on the right side, perhaps because there is initially a divider in between. Then when you lift the divider, the two of them mix with each other, and after it reaches equilibrium, roughly equal numbers of red and blue will be on both sides. This is what is meant by "mixing" in thermodynamics. There are many many more ways in which they could be mixed than the one way in which they could all be on their own sides, so there is a lot more entropy in the mixed state at the end than there was in the separated state in the beginning. Unlike the version of this where only 3 particles were involved, this version is irreversible in the sense that: it's extremely unlikely, and pretty much inconceivable, that you would ever see a mixed box of these particles naturally and spontaneously sort themselves out to all blue on one side and all red on the other, whereas you would find nothing surprising whatsoever if initially unmixed red and blue gases gradually mixed with each other and wound up in a perfectly homogeneously purple mixture at the end.

In this example, the available state space before the particles are allowed to mix involves only 1 state. Afterwards, it involves something with a size on the order of 2 raised to the power of Avogadro's number. So the entropy should increase by something on the order of Avogadro's number. It seems like what has happened is that the state space at the beginning was very small and low dimensional, and then at the end it is very large with high dimensionality.

Naively, this appears to be exactly the opposite of what happens during a measurement in quantum mechanics. But somehow, that's not the case--what's going on really is exactly the same. I think what's happening is that we're just confusing two different spaces here. And it's a confusion that I've often made in thinking about this. Where we'll have to go from here to resolve this paradox is to discuss open vs closed systems, and to explore a little bit the many worlds interpretation of quantum mechanics, and what happens to entropy in different parts of the multiverse as different branches of the universal wavefunction evolve forward in time.

I'll leave you with one final piece of the paradox, which seems directly related to this high-low dimensionality confusion: if the total entropy remains exactly the same in all branches of the multiverse, then you would think that every time a quantum measurement is performed and different branches split off from each other, the entropy would get divided among them and hence be less and less in each branch over time. (Because surely, the dimensionality of the accessible states in one branch is less than the dimensionality of the accessible states in the combined trunk before the split?) And yet, exactly the opposite happens: while the total entropy remains exactly constant, the entropy in each single branch increases more and more every time there is a split!

To be continued...
spoonless: (orangegray)
First, a quick comment about something pretty basic that I completely forgot to mention in part 4, on the "Boltzmann's brain" paradox: why do they call it Boltzmann's brain? It seems obvious I should have explained this. It's because the type of dilemma raised by it is similar to the "brain in a vat" thought experiment that philosophers of metaphysics like to argue about. The basic question is depicted explicitly in the movie The Matrix: how do you know that you aren't just a brain in a vat somewhere, with wires plugged into you, feeding this brain electrical impulses that it interprets as sensory experiences? I think for the most part, the answer is: you don't. I mentioned that the Boltzmann's brain paradox could involve the vacuum fluctuation of a spontaneously generated galaxy, or solar system, or even just a single room. But the ultimate limit of this would be just a single brain in a vat, with no actual world around it at all. Anyway, I just wanted to add that to make part 4 more clear, because you may have been wondering why it was called "Boltzmann's brain". If I ever decide to make these notes into a book, I guess this would be one of the chapters. Now on to the matter at hand...

One of the most puzzling things about quantum mechanics, especially when you first learn it, is why there appear to be two completely different types of rules for how physics works. One is the microscopic set of rules which physicists usually refer to as "unitary time evolution of the wavefunction". In quantum mechanics, what's called the wavefunction is similar to a probability distribution, either in regular space or momentum space (not phase space, the combination of the two), for which state a particle (or field, or string) could be in. (Actually, it's more like the square root of a probability, but no matter.) While it is similar to a probability distribution in regular space or momentum space, it can be much more simply represented as a single vector in a much larger space called a Hilbert space. The Hilbert space in quantum mechanics is usually infinite dimensional, so much, much bigger than even the 6N-dimensional phase space we talked about in part 5. As the system progresses further into the future, this state vector traces out a single path through the Hilbert space, and that path is always exactly reversible according to these microscopic rules.

The key word here is "unitary". The vector is moved around in the Hilbert space mathematically by applying a unitary matrix to it. (Heisenberg's formulation of quantum mechanics was originally called "matrix mechanics" because of this.) Unitary matrices are matrices whose inverse is equal to their conjugate transpose--that is, you invert one by flipping it across its diagonal and flipping the sign of every imaginary component (where by imaginary I'm talking about the mathematical notion of i, the square root of -1). If this doesn't make any sense, don't worry about it; the only important thing to understand is that this property of unitarity guarantees many nice things about time evolution in quantum mechanics. It makes things nice and smooth, so that a single state always moves to a single state, and if the total probability of the particle being anywhere is initially 100%, then it will stay 100% in the future. But most importantly, it guarantees reversibility. Because the time evolution matrix in quantum mechanics is unitary, Liouville's theorem holds and you can always reverse time and go backwards exactly to where the system came from.
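Here's a small numerical illustration of those claims (Python/numpy, with a random 4-state toy Hamiltonian of my own choosing):

    import numpy as np

    rng = np.random.default_rng(0)

    # Random Hermitian Hamiltonian H for a 4-state system.
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (A + A.conj().T) / 2

    # Unitary time evolution U = e^{-iHt}, built by diagonalizing H.
    t = 1.3
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

    # A random normalized state vector.
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)

    psi_later = U @ psi
    print(np.linalg.norm(psi_later))                 # 1.0: probability conserved
    print(np.allclose(U.conj().T @ psi_later, psi))  # True: exactly reversible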

But wait--if this is the case, then that also means that all quantum mechanical systems are conservative, i.e. non-dissipative. Does dissipation not happen in quantum mechanics at all? This brings us to the second type of rule in quantum mechanics: the process of measurement. Originally, this rule was called the "collapse of the wavefunction", because when looked at through Schrodinger's wave mechanics, it appears that when a macroscopic observer makes a measurement of a property of a microscopic system (for instance, if he asks the question "where is this particle actually located?"), what used to be a probability distribution over many different states suddenly gets reduced to, or collapses to, a single state. Before the measurement, we mathematically describe the particle as being in a "superposition" of many different positions at once, but after the measurement it is only found at one of those positions. Mathematically, this is achieved with a projection matrix. A projection matrix is very different from a unitary matrix. Its action in the Hilbert space is to map many different vectors onto the same vector, instead of mapping each vector onto its own unique image. Because of this, the action is irreversible, and does not satisfy Liouville's theorem. The measurement process is therefore dissipative rather than conservative. In other words, the rule for how an observer of a microscopic system makes measurements in quantum mechanics seems completely opposite to the rule for how microscopic systems evolve in time when they are not being observed. One is nice and smooth and reversible; the other is a sudden reduction, an irreversible collapse. Dissipation only seems to happen during observation.

But the way I've explained this highlights the paradox known as the "measurement problem" in quantum mechanics. This paradox is what has spawned the many different interpretations of quantum mechanics, which philosophers still argue fiercely about today. But the real question is how to reconcile reversibility with irreversibility, and the answer lies in thermodynamics. Once you understand it, you realize that it doesn't really make a whole lot of sense to talk about measurement in this way, as a sudden "collapse" of the wavefunction. And it leads to deeper questions about whether the wavefunction is real or just a mathematical abstraction--and whether there is anything in quantum mechanics which can be said to be real at all. To be continued...
spoonless: (orangegray)
Our universe has 3 large spatial dimensions (plus one temporal dimension, and possibly another 6 or 7 microscopic dimensions if string theory is right, but those won't be of any importance here).

Given 3 numbers (say, longitude, latitude, and altitude), you can uniquely identify where a particle is located in space. But the state of a system depends not only on what the particles' positions are, but also on what their momenta are, ie how fast they are moving (the momentum of a particle in classical mechanics is simply its mass times its velocity--when relativistic and quantum effects are taken into account, this relationship becomes much more complicated). This requires another 3 numbers in order to fully specify what the state of a particle is.

If you were to specify all 6 of these numbers for every particle in a given system, you would have completely described the state of that system. (I'm ignoring spin and charge here, which you'd also need to keep track of in order to fully specify the state.) In order to categorize all possible states of such a system, you therefore need a space with 6N dimensions, where N is the number of particles. It is this 6N dimensional space which is called "phase space" and it is in this space where wandering sets are defined. The state of the system is represented by a single point in phase space, and as it changes dynamically over time, this point moves around.
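(As a quick illustrative sketch, with made-up numbers: the state of an N-particle system really is just one long vector of 6N coordinates.)

```python
import numpy as np

N = 5                             # number of particles (arbitrary for illustration)
positions = np.random.rand(N, 3)  # 3 numbers per particle: where it is
momenta = np.random.rand(N, 3)    # 3 more per particle: how fast it's moving

# The full state of the system is a single point in 6N-dimensional phase space.
state = np.concatenate([positions.ravel(), momenta.ravel()])
print(state.shape)                # (30,), i.e. (6 * N,)
```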

In statistical mechanics and in quantum mechanics, you often deal with probability distributions rather than a single state. So you might start out with some distribution of points in phase space, some fuzzy cloudy region near some neighborhood of a point for instance. And as the system evolves, this cloud can move around and change its shape. But one really central and important theorem in physics is Liouville's theorem... it says that as this cloud of probability moves around, the volume it takes up in phase space always remains constant. This theorem can be derived from the equations of motion in classical or quantum mechanics. But it really follows as a consequence of energy conservation, which in turn is a consequence of the invariance of the laws of physics under time translations. At any given moment in time, the basic laws of physics appear to be the same, they do not depend explicitly on time, so therefore energy is conserved and so is the volume of this cloud in phase space. It can morph into whatever different shape it wants as it wanders around, but its total volume in phase space must remain constant.
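(Here's a tiny sketch of Liouville's theorem for the simplest case I know: a single particle in one dimension in a harmonic oscillator potential, so phase space is just a 2D plane. The flow is a rigid rotation of that plane, and the determinant of its Jacobian is exactly 1, which is Liouville's statement that phase-space area is conserved. For messier systems the cloud would shear and morph instead of rotating rigidly, but the determinant would still come out to 1.)

```python
import numpy as np

t = 1.7  # an arbitrary evolution time

def evolve(q, p):
    # Exact flow of a harmonic oscillator (m = omega = 1):
    # a rigid rotation of the (q, p) phase plane.
    return q * np.cos(t) + p * np.sin(t), -q * np.sin(t) + p * np.cos(t)

q0, p0 = np.random.rand(1000), np.random.rand(1000)  # a cloud filling the unit square
q1, p1 = evolve(q0, p0)                              # the cloud moves around, but...

# The flow is linear, so its Jacobian is the same rotation matrix everywhere:
J = np.array([[np.cos(t),  np.sin(t)],
              [-np.sin(t), np.cos(t)]])
print(np.linalg.det(J))  # 1.0: phase-space area is exactly conserved (Liouville)
```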

Systems that obey Liouville's theorem are called conservative systems, and they don't have wandering sets. Systems that do not obey Liouville's theorem are called dissipative systems, and they *do* contain wandering sets.
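(Contrast the block above with a damped harmonic oscillator, the textbook dissipative system. The determinant of its time-t flow map is exp(-gamma*t) < 1, so every phase-space region shrinks as it spirals toward the origin--and a small region that starts away from the origin eventually never overlaps itself again, which is roughly what makes it a wandering set. A sketch, with arbitrary numbers:)

```python
import numpy as np
from scipy.linalg import expm

gamma, t = 0.5, 1.7
# Damped harmonic oscillator: dq/dt = p, dp/dt = -q - gamma*p
A = np.array([[0.0, 1.0],
              [-1.0, -gamma]])
M = expm(A * t)  # the time-t flow map of this linear system

# det(M) = exp(trace(A) * t) = exp(-gamma * t) < 1:
# every region of phase space shrinks, violating Liouville's theorem.
print(np.linalg.det(M), np.exp(-gamma * t))  # both ~0.427
```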

But wait-- I just said that Liouville's theorem follows from some pretty basic principles of physics, like the fact that the laws of physics are the same at all times. So doesn't that mean that all physical systems in our universe are conservative--in other words, that there really is no such thing as dissipation? And does that in turn mean that entropy never really increases, it just remains constant?

This is one of the most frustrating paradoxes for me whenever I start to think about dissipation. It's very easy to convince yourself that dissipation doesn't really exist, but it's equally easy to convince yourself that all real world systems are dissipative, and that this pervasive tendency physicists have for treating all systems as conservative is no better than approximating a cow as a perfect sphere (a running joke about physicists).

I'll let this sink in for now, but end this part by hinting at where things are going next and what the answers to the above paradox involve. In order to understand the real distinction between conservative and dissipative systems, we have to talk about the difference between open and closed systems, about the measurement problem in quantum mechanics, what it means to make a "measurement", and how to separate a system from its environment, and when such distinctions are important and what they mean. We need to talk about the role of the observer in physics. One of the popular myths that you will find all over in pop physics books is that this role for an observer, and this problem of separating a system from its environment, was something that came out of quantum mechanics. But in fact, this problem has been around much longer than quantum mechanics, and originates in thermodynamics / statistical mechanics. It's something that people like Boltzmann and Maxwell spent a long time thinking about and puzzling over. (But it's certainly true that quantum mechanics has made the problem seem deeper and weirder, and raised the stakes somewhat.) Philosophically, it's loosely connected to the problem of the self vs the other, and how we reconcile subjective and objective descriptions of the world. In short, this is probably the most important and interesting question in the philosophy of physics, and it seems to involve all areas of physics equally, and it all revolves somehow around dissipation and entropy. To be continued...
spoonless: (orangegray)
There's a connected set of issues that rests at the very heart of physics, which I've always thought was not very clearly explained in any of my classes. I had a very vague understanding of it while I was an undergrad, and hoped that I'd be able to sharpen it up in graduate school. To some extent, I did, but I found that even after 6 years of graduate school in physics, these issues were still never quite cleared up in my classes, and I never had enough time while working on other things to read enough on my own about them to fill in all the gaps in my understanding. I have always had the impression that these issues *are* well understood, at least by somebody, but that not many physics professors do understand them fully and they tend to just avoid talking about them in actual classes, or talk about them in a very superficial way. I do think my adviser was among those who understood them, but I always felt a bit shy about wasting his time on questions that were purely for my own curiosity and unrelated to any research we were working on. Especially ones that would take a long time to sort out.

Recently, I've started thinking about them again, but through an unexpected route. A couple months ago, OkCupid added support for bitcoin, so as we got closer to the release of it, several of the people I work with would tend to get into idle conversations about bitcoin during the day. A few of them I got involved in, and in one of them we got on to the subject of bitcoin mining. One guy asked the group if it would be worth it to dedicate his server at home to bitcoin mining. This is where your computer races through difficult computations--repeatedly hashing candidate blocks until it finds one whose hash falls below a difficulty target, which extends the block chain--and as a reward you collect newly minted bitcoins that are conjured into existence. One guy commented immediately that it would never be worth it with a standard PC, you'd have to buy very expensive specialized hardware to do it. The original asker of the question objected that he wasn't using his server for anything else anyway, but the cynic replied that with a standard PC, the cost of running the hardware in an increased electric bill was greater than the payoff. (An idle server draws less electricity than one engaged in difficult mathematical computation.) This got us onto a tangent, about whether you could cool the computer down to a point where it didn't cost as much to run because it wasn't dissipating as much heat. Naturally, we went from there to discussing why computing necessarily dissipates heat, and what the ultimate physical limits of that process are. He mentioned that only quantum computers were reversible and didn't dissipate heat. I was first made aware of this fact in 1998 during a graduate quantum computing class that I took as an undergrad at Georgia Tech, where we read some relevant papers on the subject. But while I have long had a superficial understanding of why this was true, and I can repeat the standard things that people say about it, I admitted to him that I never quite understood it fully. He then started talking rapidly about the many worlds interpretation of quantum mechanics, and how the so-called "measurement problem" is really just decoherence and thermodynamics. He said he thought it was fairly straightforward, and this is the point where I mentioned that I had a PhD in theoretical physics and had spent many years working on related topics, but felt that there was just always a missing piece for me conceptually surrounding entropy, Maxwell's demon, reversibility, and dissipation. He shrugged and said "well, it seems straightforward, but I guess if you've studied this more then there must be more to the story than I'm aware of." I told him I didn't want to get into discussing interpretations of quantum mechanics, because it was too complex a subject, and I probably couldn't say anything more on reversibility because I didn't know how exactly to articulate what was missing from my understanding of things.
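(For what it's worth, the standard quantitative handle on why computing must dissipate heat is Landauer's principle: erasing one bit of information costs at least k_B*T*ln(2) of heat, and reversible computation--classical or quantum--can in principle dodge that floor by never erasing anything. A quick back-of-the-envelope sketch at room temperature:)

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, in J/K
T = 300.0           # roughly room temperature, in K

# Landauer's bound: erasing one bit of information must dissipate
# at least k_B * T * ln(2) of heat.
E_bit = k_B * T * math.log(2)
print(E_bit)        # ~2.87e-21 joules per erased bit

# Even erasing a billion bits per second costs only ~3 nanowatts at this floor;
# real hardware dissipates many orders of magnitude more per logical operation.
print(E_bit * 1e9)  # ~2.87e-9 W
```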

Every few years I pick up this topic, or one of a related set of topics that ties into it, and try to understand it, and I always make a bit more progress. I think the last bit of big progress I made was in reading Leonard Susskind's book on black holes and information, which helped me understand both black hole complementarity and more about the density matrix and how entropy and information work in quantum mechanics. But this conversation renewed my interest again, so for the past month or so my mind has occasionally drifted back to it, and a couple days ago I managed to stumble onto the Wikipedia page for "Wandering Sets", which I think is a huge part of the missing piece for me. For some reason, wandering sets were never mentioned in any of my undergrad or graduate classes, and yet they seem absolutely crucial to understanding what the word "dissipation" means. Until yesterday, I had honestly never even heard of them. It's no wonder that I went through undergrad and graduate school always feeling frustrated when professors would use the word "dissipation" without giving any definition for it and just assuming that we all knew what it meant. Unfortunately, I feel like the Wikipedia page is poorly written, and there are two facts they mention which seem obviously contradictory to me.
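(As far as I can tell so far, the definition goes something like this: a set U is wandering if, once the dynamics pushes it forward enough steps, its images never overlap U again. A toy sketch with a made-up dissipative map f(x) = x/3, which sucks everything toward the fixed point at 0:)

```python
# Toy dissipative map: f(x) = x/3 contracts everything toward the fixed point 0.
# The interval U = [1, 2] is a wandering set: all of its forward images
# f^n(U) = [3**-n, 2*3**-n] lie strictly below 1, so they never meet U again.
U = (1.0, 2.0)
for n in range(1, 5):
    lo, hi = U[0] / 3**n, U[1] / 3**n
    disjoint = hi <= U[0] or lo >= U[1]
    print(n, (lo, hi), "disjoint from U" if disjoint else "overlaps U")
```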

I will explain in part 2 what I've learned so far, and how wandering sets are related to the ergodic theorem, Liouville's theorem, and pretty much every foundational area of physics. And then I'll go into what I still don't understand about it--perhaps after this I need to find the right book that covers this stuff. It seems very weird to me that they always gloss over it in physics classes.
spoonless: (Default)
Man, I really wish I had time to read this; it looks very interesting. But so ballsy and seemingly "too good to be true" that it's probably wrong. They're basically arguing that the String Theory multiverse (also known as the Cosmic Landscape, or the megaverse, or the eternal inflation multiverse, or simply the multiverse to string theorists) is the same thing as the regular old quantum multiverse that's been around for so many years!

Leaving for Madrid in 3 days. Then Barcelona, Paris, Rome. Lots of packing to do!

The Multiverse Interpretation of Quantum Mechanics
Authors: Raphael Bousso, Leonard Susskind
(Submitted on 19 May 2011)

http://arxiv.org/abs/1105.3796

"We argue that the many-worlds of quantum mechanics and the many worlds of the multiverse are the same thing, and that the multiverse is necessary to give exact operational meaning to probabilistic predictions from quantum mechanics. Decoherence - the modern version of wave-function collapse - is subjective in that it depends on the choice of a set of unmonitored degrees of freedom, the "environment". In fact decoherence is absent in the complete description of any region larger than the future light-cone of a measurement event. However, if one restricts to the causal diamond - the largest region that can be causally probed - then the boundary of the diamond acts as a one-way membrane and thus provides a preferred choice of environment. We argue that the global multiverse is a representation of the many-worlds (all possible decoherent causal diamond histories) in a single geometry. We propose that it must be possible in principle to verify quantum-mechanical predictions exactly. This requires not only the existence of exact observables but two additional postulates: a single observer within the universe can access infinitely many identical experiments; and the outcome of each experiment must be completely definite. In causal diamonds with finite surface area, holographic entropy bounds imply that no exact observables exist, and both postulates fail: experiments cannot be repeated infinitely many times; and decoherence is not completely irreversible, so outcomes are not definite. We argue that our postulates can be satisfied in "hats" (supersymmetric multiverse regions with vanishing cosmological constant). We propose a complementarity principle that relates the approximate observables associated with finite causal diamonds to exact observables in the hat."
spoonless: (Default)
I just stumbled upon a really neat little tidbit of history, that actually ties together a lot of different people whom I've had thoughts about over the years, some because I knew them personally, some because I heard things about them that were interesting (some good and bad), some whom I've only met once or twice. But the weird thing is, I never realized how closely connected all these different people are, and as it slowly dawned on me, I realized I had to make a post about all of these interconnections.

Where to start? Well, earlier this year, I went to a wedding in Atlanta, where a friend I hadn't seen in many years started asking me some things about physics. He mentioned that he was reading "The Quantum Enigma" and I immediately let out a groan, and mentally did a "face-palm" kind of thing. I then explained that this was written by two guys, Bruce and Fred, whom I knew well, but they were both idiots and I'd told them to their face that I didn't agree with the basic premise of their book, and my advisor had given a whole lecture retaliating against their accusations that the physics community is hiding "skeletons in the closet" regarding the role of consciousness in physics (accusations they had launched at the weekly seminar one week before his lecture). The two other professors at UC Santa Cruz who spoke out against Bruce and Fred before and after my advisor on that same day were Anthony Aguirre (one of the founders of the FQXi institute mentioned in the How the Hippies Saved Physics video below) and Michael Nauenberg (who is mentioned by name several times in the video below, and was apparently the person who suggested to Fritjof Capra, while Capra was a postdoc at my school, that he write a book mixing quantum mechanics with Eastern mysticism--one of the many things I had no idea about until seeing this video). I know Nauenberg mostly as a crotchety old man whom we call "The Santa Cruz Heckler" because he heckles any string theorists or cosmologists who ever give talks, telling them they aren't real scientists and repeatedly asking "what's the physical meaning of this?". I think the one and only occasion where I've ever agreed with anything that's come out of Nauenberg's mouth was when he shot down Bruce and Fred--although I think everyone present would agree that my advisor did a much better job of that. Wikipedia's entry on "quantum mysticism" says that it was primarily Fritjof Capra's "The Tao of Physics" that started the whole quantum mysticism movement, and got the new age community interested in quantum mechanics--that much I was vaguely aware of.

Anyway, I explained to my friend that Fred is just this obnoxious lab manager who wishes he were a real physicist, and Bruce is this kind but really senile old guy who at some point earlier in his career was an atomic physicist, but doesn't understand quantum mechanics (let alone consciousness) any better than Fred. I once watched Bruce debate an undergrad philosophy major about the nature of consciousness in front of an audience of myself and a bunch of undergrad SPS club members, and it was really sad to watch because the undergrad utterly destroyed him, as Bruce didn't know the first thing about consciousness and had all of these silly naive ideas about free will.

After explaining this, he naturally asked me "ok, well yeah... I kind of thought some of the book sounded a little kooky, but I wasn't sure. So who *should* I read if I want to read a really deep book about physics? Who would you recommend?" I thought for only a moment and then said "this is going to sound strange, because I haven't read it, but I think the book I would recommend the most is Leonard Susskind's book The Black Hole War : My Battle With Stephen Hawking to Make the World Safe for Quantum Mechanics." It's funny, I somehow *knew* this was the best popular book on physics out there, even though I hadn't read it. That intuition came from a combination of things--one, Leonard Susskind is one of the deepest and most intelligent thinkers in physics, and always explains things in a very interesting way that exposes the philosophical importance of the ideas, not just the math behind them. He has a razor-sharp intellect as well as wit. He also explains things in a very down to earth way that makes complicated things just make simple sense--he cuts right to the important stuff. Also, around the time he published that book, he wrote another book called An Introduction to Black Holes, Information, and the String Theory Revolution : The Holographic Universe on the same topic, but including all the math and intended for physicists, and that one I did read and found outstanding. My impression was that the content was similar, but The Black Hole War was written at a level that anyone should be able to understand, whereas the one I read has a lot of stuff that only a physicist would be able to make sense of.

So, I started listening to The Black Hole War on books on tape yesterday. I've only listened to the prologue and Chapter 1, but to my great delight it appears to be everything I'd hoped it would be and more--the ultimate popular book on physics, that both gets things right and exposes what's interesting and deep about physics without watering it down too much. But the best thing about the book that I hadn't realized is that it tells a lot more of the human story behind it than his other book. So it's not just redundant with what I've already read (so far at least).

The First Chapter of the book is about a series of secret meetings that he attended with Hawking (where the famous battle over black holes known as "The Information Paradox" all began) in the upstairs of the house of a guy named Werner Erhard, who ran an organization known as EST (Erhard Seminar Training) and was filthy rich and loved to invite high profile physicists over to his house to have deep conversations. He mentions 3 or 4 regular attendees including Hawking and himself, Savas Dimopoulos (whom I've met a few times, my advisor having introduced us) and--to my surprise--David Finkelstein, whom he mentions twice. I knew that Lenny was friends with my advisor (Tom Banks) and figured he would surely mention him in the book, and I wasn't surprised at all that Dimopoulos was in there too, since he works at Stanford with Susskind. But I was not at ALL expecting him to start the book off by mentioning (twice in Chapter 1) David Finkelstein, someone who had an enormous personal impact on my life. It may or may not be an exaggeration to say that taking David Finkelstein's Quantum Relativity class with [livejournal.com profile] ikioi at Georgia Tech in 1998 was what convinced me to major in physics. But it's not at all an exaggeration to say that he's the man who convinced me that Ayn Rand was wrong and to officially renounce my faith in Objectivism, after [livejournal.com profile] ikioi and I invited him to give a talk called "Quantum Objectivism" for our Students of Objectivism club and had dinner with him afterwards. Also, the reason I bought the domain name "spoonless.net" in 1999 (which I've owned for the past 12 years) and adopted "spoonless" as my username is about half related to Finkelstein (in particular his antirealist views on quantum mechanics, which I was fascinated by at the time, later departed from, and now have drifted somewhat back towards) and half related to other things.

So after getting home and googling for this self-help guru's name, Werner Erhard, I found more and more additional connections between people that I had never realized were there. Also, before I get to that, let me mention that Landmark Education (which a lot of people I've met have been involved with, and which was in part the inspiration for a circle of friends I spend a lot of time with, called FreedomCommunity) was apparently a direct spinoff of Erhard Seminar Training. And furthermore, the Church of Scientology launched a campaign against him after he allegedly stole a lot of their methods and incorporated them into his own (http://en.wikipedia.org/wiki/Scientology_and_Werner_Erhard).

After reeling from that a bit, then the real fun started once I found Jack Sarfatti's final blog entry at his blog "Destiny Matrix":

http://destinymatrix.blogspot.com/

I had never heard of Jack Sarfatti either, but in his blog post he ties together a ton of people I've known or heard about, which totally blows my mind. Here are some excerpts from it:

"Both Fred [Alan Wolf] and I got divorced about same time ~ 1971 and we were room mates. I was too young for that job and was bored and wanted adventure which came soon enough from the CIA with the strange events in 1973 at SRI Remote Viewing Project described in my book" [Fred Alan Wolf is one of the main crazy guys interviewed in What the Bleep Do We Know, along with John Hagelin (also crazy) and David Albert (not crazy at all, but sued them for distorting his words). (On a side note, I found a picture of my advisor somewhere online arm in arm with David Albert at a conference they were at together on the Arrow of Time.)]

"My encounter with Dennis Bardens of British Intelligence in 1974: 'Dr Sarfatti, it is my duty to inform you of a psychic war raging across the continents between the Soviet Union and your country and you are to be in the thick of it.'

"The main thing we did was the Esalen Month in Jan 1976 I think that Gary Zukav writes about in Dancing Wu Li Masters. I brought David Finkelstein there and that's how he met Werner Erhard leading to the big est physics conferences described by Lenny Susskind with Feynman, Gell-Mann, Wheeler, Hawking, Coleman, I think Kip Thorne et-al. I had met David at Yeshiva visiting Lenny Susskind. Finkelstein also worked with Ken Shoulders and Hal Puthoff at a company set up by the Fried Chicken guy William Church as a result of the Esalen month.
We had seminars at the facility on Nob Hill with the Rockefeller-Lanier money."

"I asked Werner in the lobby of the Ritz, he in a silly inappropriate casual outfit, with a woman adorer, what he did. He said "I make people happy." I wanted to run and I said in a strong Brooklyn accent, "I think you're an asshole." Werner got up from his chair a big smile, embraced me warmly and said "I am going to give you money." I had no idea about the message of the est-Training being "You're an asshole." Werner thought I was some kind of Guru I guess."

"Yes. I gave Fritjof $1500 that he needed to pay his lawyer for a Green Card. I also brought my then room-mate Gary Zukav to Esalen and wrote all of the rough draft of the physics parts of Wu Li Masters for him and helped him with the editing in later drafts."

"George was a "spook" who managed Tim Leary when Nixon let him out of prison."

"Indeed Nick Herbert's FLASH paper led to the no-cloning theorem so important in quantum computing today."

"Fred Wolf and I were edged out probably because they thought we were too crazy? Finkelstein sort of took over and I was the guy who brought him there in the first place. It was the usual academic shark cut-throat back-stabbing both Fred & I left SDSU for."

"I am cc'ing this to some of the participants who I am still in touch with. Fred Wolf recently spoke to Werner who now lives in London. I also ran into Stan Klein only a few days ago who is doing very interesting brain research with Stapp. I think Fritjof Capra is still in Berkeley and Stan Klein is in touch with him. Unfortunately Tim Leary, George Koopman and Robert Anton Wilson have died. I think Gary Zukav lives on Mount Shasta. You should also talk to David Finkelstein."


It turns out, this whole post is an interview he did a couple years ago with a science historian at MIT named David Kaiser, who was writing a book called "How the Hippies Saved Physics". Here is a lecture he gave at MIT on the book; it sounds really good! And he mentions even more people I know...

http://forum-network.org/lecture/how-hippies-saved-physics

After watching this video, lots more became clear to me. He talks about FQXi, which Anthony Aguirre helped found; Garrett Lisi, a personal friend of mine who used to have an active lj here, was one of the main initial recipients of their funding. Both Jack Sarfatti in his blog and this guy in the video also mention Garrett.

Nick Herbert, whom they both talk about a lot, happens to be another guy whom I have met personally. That's actually a funny story: I met him in Robert Anton Wilson's apartment--I think we shook hands just as we were walking in together.

Sarfatti, it seems, is clearly one of the crazy guys, similar to the What the Bleep people, but Nick Herbert is sort of borderline. Like Capra, he's kind of halfway crazy, and exaggerates a lot of things, but at least seems like he understands some physics, like Bruce and Fred. But I don't think he has that great of a grasp on it, personally. One thing that annoys me in the video, that I disagree with, is the premise. The main premise seems to be that since Nick Herbert (one of the "Hippies") got a paper published claiming he could send communications faster than light, and since that prompted a real physicist to respond by proving that you couldn't, the hippies contributed greatly to physics. While that may be true in a sense, the way I interpret it is more that everyone knew you couldn't do that, but the fact that a bunch of annoying new agers went to the trouble of claiming that you could made it worth proving that you couldn't. But it did attract more attention to Quantum Information, which overall is a good thing. He sort of glosses over the fact that what the hippies contributed was just that they were totally wrong.

I found this paper online, which was written a couple years ago by the referee who approved Nick Herbert's paper for publication... defending his decision to do it, since many physicists think it was never worthy of publication...
http://arxiv.org/abs/quant-ph/0205076
"Abstract: I was the referee who approved the publication of Nick Herbert's FLASH paper, knowing perfectly well that it was wrong. I explain why my decision was the correct one, and I briefly review the progress to which it led."

"Early in 1981, the editor of Foundations of Physics asked me to be
a referee for a manuscript by Nick Herbert, with title “FLASH —A superluminal communicator based
upon a new kind of measurement.” It was obvious to me that the paper could not be correct, because it
violated the special theory of relativity. However I was sure this was also obvious to the author. Anyway,
nothing in the argument had any relation to relativity, so that the error had to be elsewhere."

(Incidentally, I do think Nick is a very entertaining guy; he's a real fun person to hang out with. I just wouldn't expect him to come up with any interesting ideas in physics. Nor would I ever expect Jack Sarfatti or Fred Alan Wolf to.)

Oh, and the How the Hippies Saved Physics guy also talks a lot about Esalen, which is a place various friends of mine have visited, and where some have worked. Apparently lots of these people used to hang out there, and they eventually got Richard Feynman to come. I also think I remember Susskind mentioning that Feynman came to some of those secret meetings at Werner Erhard's house, because of a story he tells about ordering a Feynman sandwich: he asks Feynman what a Feynman sandwich would be like, and Feynman says "it's like a Susskind sandwich but with less ham." And then Susskind replies, "but at least a Susskind sandwich has less baloney." He mentions that that was the only time he remembers one-upping Feynman in terms of wit. I can't remember for sure if this was at Erhard's house but I think so.

At the end of Susskind's first chapter in The Black Hole War, he goes home from the meeting with Hawking where Hawking first told him that he thought information would be completely erased in black holes, and even that information was being erased all of the time in empty space due to microscopic virtual black holes. And he says "as soon as I got back, I went to my friend Tom Banks and we talked about it, and eventually figured out why it bothered us so much... erasing information means increasing entropy, which means you end up producing a lot of heat!" Soon after that, they published a paper arguing that Hawking's proposal violated the laws of thermodynamics, which was the first shot fired in the great war (which Hawking eventually conceded, although not until several decades later--in fact, he conceded it while I was in graduate school, just before I started working with Tom Banks, a couple years before Lenny wrote the book).
