new blog!

Apr. 13th, 2016 08:42 am
spoonless: (Default)
Exciting news. I've started a new blog about physics on Medium. It's called Physics as a Foreign Language:

https://medium.com/physics-as-a-foreign-language

This is in preparation for a book I'm starting to write this month on the same topic (and likely with the same name).

I'll continue to update lj occasionally with personal stuff, and possibly even thoughts on physics. But my more carefully thought-out posts on physics (as opposed to my own rambling and sometimes confused thoughts) will go on the Medium blog now.

If you're interested, please point your RSS reader at it, or do whatever you normally do when you want to follow a blog.

Excited this book is finally getting rolling! On a personal note, I quit my job a little over a week ago, and will be moving back from the East coast to the West in 2 months. Yay!
spoonless: (Default)
The sci-fi film we've been talking about making at work has to do with the many worlds interpretation of quantum mechanics. I can't go into details (because they have involved many hours of conversations), but basically it starts from the premise that somehow, a group of hackers tinkering with various electrical equipment accidentally stumble upon a way to communicate with other decoherent branches of the multiverse, and this opens Pandora's box in a lot of ways.

I was asked to evaluate how this could be made realistic from a physics perspective, and we had many conversations about it. What my coworker pointed out, which sounds right, is that it seems like communicating between different branches of the multiverse would have to involve some kind of non-linear modification of quantum mechanics. This led us to read up a little on how realistic such modifications would be. Most physicists assume that quantum mechanics is an exact description of the world, but many are open to the possibility that there are slight modifications to it at the scale of quantum gravity which haven't been detected yet. Unfortunately, every time someone has explored this possibility theoretically, they have been led to the conclusion that any non-linear modification, no matter how slight, creates problems that make the whole theory inconsistent and incompatible with other more sacred laws of physics, such as thermodynamics.
My advisor's advisor was one who went down this path and eventually concluded it was probably a blind alley.

It appears that there are a lot of connected things that happen when you monkey with the laws of physics as we know them. For example, if you add non-linear modifications to quantum mechanics, you tend to violate the 2nd law of thermodynamics. In creating a lot of negative energy, you also tend to violate the 2nd law of thermodynamics. And in creating a wormhole, you tend to create the possibility for closed timelike loops. David Deutsch and others have analyzed what might happen theoretically if closed timelike curves (CTC's) were possible, and the conclusion is that a computer with access to one could solve NP-complete problems in polynomial time; for all practical purposes, it would be as if P=NP. Closed timelike loops also violate the 2nd law of thermodynamics, because entropy cannot always increase within a closed loop of time. Either entropy would have to remain constant throughout the whole loop, or increase for a while and then suddenly decrease. It's like one of those staircases from an Escher painting: a staircase that always goes up cannot connect to itself in a circle. Either it doesn't go up, or it goes up and then comes back down. The same with entropy.
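As an aside, Deutsch's analysis can be summarized in one equation. In his model, the state of the system traversing the CTC must be a fixed point of its joint evolution with the ordinary "chronology-respecting" system (this is my schematic paraphrase of his consistency condition, not a quote from his paper):

```latex
\rho_{\mathrm{CTC}} \;=\; \operatorname{Tr}_{\mathrm{CR}}\!\left[\, U \,(\sigma_{\mathrm{CR}} \otimes \rho_{\mathrm{CTC}})\, U^{\dagger} \right]
```

Here U is the unitary evolution, σ is the state of the ordinary system, and the trace is over the chronology-respecting part. Deutsch showed that such a fixed point always exists, and the computational power comes from the fact that nature has to "find" it for you: Aaronson and Watrous later showed that classical or quantum computers with access to Deutsch-style CTC's can solve exactly the problems in PSPACE, a class that contains NP.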

So many things are connected here. Non-linear modifications of quantum mechanics, negative energy, perpetual motion, antigravity, time travel, traversable wormholes, and P=NP. The more I read about these (and especially when I read Scott Aaronson's stuff) the more it seems like either you have to accept all of them or none of them.

There are actually multiple connections between computational complexity and wormholes I've found, not just via the connection between CTC's and P=NP. For example, there is the ER=EPR conjecture, a very exciting proposal by two of the world's greatest living theoretical physicists, Leonard Susskind and Juan Maldacena. They have found a possible way in which wormholes are the same thing as quantum entanglement. Again, I don't have time to delve into the details, but this has to do with the black hole firewall paradox. Many physicists have been worried that if there are no modifications to quantum mechanics (ie, information is never lost in black holes) then this would imply the existence of "black hole firewalls", where an infalling observer would hit a flaming wall of fire at the event horizon. This violates a central principle of general relativity known as "the equivalence principle", which implies that a freely falling observer should notice nothing special when crossing the horizon.

But what Susskind says is that maybe firewalls don't actually form at the horizon of a black hole, and aren't needed to resolve the information paradox. Instead, the explanation for how things like the no-cloning theorem are preserved in the context of quantum mechanics in a black hole is that the interior of the black hole is protected by an "armor of computational complexity" (as Scott Aaronson puts it). You could try to send messages from the outside to the interior non-locally via quantum entanglement (or equivalently, through a traversable wormhole), but it would require you to solve a computational problem believed to be intractable: one at least as hard as the hardest problems in the complexity class known as QSZK (Quantum Statistical Zero Knowledge). If I understand correctly, the only reason you cannot send such a message is that even quantum computers are not believed to be powerful enough to solve such problems.

I mentioned in my previous post that while it seems crystal clear that there's no way an advanced civilization could ever build a macroscopic wormhole that something the size of a human could pass through, it's a lot less clear why they couldn't build a microscopic traversable wormhole and use it to send information faster than light. If they could do that, then they could also create closed timelike loops and hence solve NP-complete problems in polynomial time. So maybe the only reason why they couldn't do it is also related to computational complexity. Maybe it's the same general reason Susskind suggests in the context of black hole physics: somehow, computational complexity prohibits the transmission of meaningful information through such a wormhole, even though it would otherwise appear to be possible to build one.

Shortly after writing my last entry on this, I discovered an interesting recent paper from May 2014 on traversable wormholes from a physicist at Cambridge. He wrote the paper in an attempt to construct a stable wormhole geometry using the throat of the wormhole itself to generate the required negative energy to stabilize it. He ended up finding that it was not possible to completely stabilize it, for the particular parameters he was using. But he argues that even though it isn't stable, it would collapse slowly enough that a beam of light would still have a chance to pass through it before it completely collapsed. So if you could somehow construct that geometry (a daunting task), maybe it could be considered "traversable" in the sense that light could temporarily pass through it. He also speculates that maybe you could find other geometries with less symmetry where the whole thing could be stabilized, but this seems like wishful thinking to me. I find the possibility of having some kind of temporary closed timelike loop while an unstable microscopic wormhole is collapsing very intriguing, and unlike the large wormholes of Interstellar, it's not something I would completely rule out as a possibility. However, again... if you accept that, then it seems like you'd have to accept all of the above weird violations of physics, including P=NP. And to me, it seems more likely that somehow, this armor of computational complexity Susskind and Aaronson are talking about comes into play and stops the beam of light from making it all the way through. Or at least, stops there from being any meaningful information encoded in the light. It occurs to me that this may be exactly what Hawking had in mind with his Chronology Protection Conjecture.

The only somewhat serious possibility I've left out here is if somehow, Kip Thorne's original conclusion that traversable wormholes necessarily imply the ability to build time machines is flawed. If that's the case, then I will admit there is a real possibility for faster-than-light communication in the future; and then the armor of computational complexity, or the chronology protection conjecture, or whatever you want to call it, would only come into play when you tried to make the wormhole into a time machine. This seems far-fetched to me, but less far-fetched than the idea that P=NP, perpetual motion machines are possible, and the 2nd law of thermodynamics is wrong, all of which I think would need to be true in order to have an actual microscopic wormhole that could be made into a closed timelike curve.
spoonless: (Default)
Since my last post on this topic, several new interesting things related to this have come to light. And some of the things I mentioned last time that I wanted to get to now seem less important, so I'm not sure whether I will get to everything. And this whole series may be longer than I thought, or veer off in another direction.

Recently, some friends at work and I have been discussing the possibility of making a low budget sci-fi film related to the Many Worlds Interpretation of quantum mechanics. And this seemed like a pretty independent topic, but somehow in discussing the physics issues behind what we envision the plot of our film to be, there have been some crossovers. So I have had some new and interesting thoughts about why wormholes should be impossible from thinking about that. But I've also learned some new things in the course of reading some more papers on wormholes while trying to get the details right for this part V post.

First, there's a pretty good popular-level reference online which summarizes most of what I've already discussed plus a few important other things about wormholes. It comes from a Scientific American article written by Ford and Roman in 2000:
http://www.bibliotecapleyades.net/ciencia/negativeenergy/negativeenergy.htm

(A good bit of what I wrote in my previous posts was based loosely on what I found in Thomas Roman's 2004 review of this subject http://arxiv.org/abs/gr-qc/0409090, but the popular link above is more readable for a general audience.)

I said that I wanted to try and give some examples of the kinds of restrictions that the QEI's, together with what we know about the Casimir Effect, place on the construction of wormholes. Rather than doing the work myself, I'll just quote from the SciAm article above, since they provide some numbers:

When applied to wormholes and warp drives, the quantum inequalities typically imply that such structures must either be limited to submicroscopic sizes, or if they are macroscopic the negative energy must be confined to incredibly thin bands. In 1996 we showed that a submicroscopic wormhole would have a throat radius of no more than about 10⁻³² meter.

This is only slightly larger than the Planck length, 10⁻³⁵ meter, the smallest distance that has definite meaning. We found that it is possible to have models of wormholes of macroscopic size but only at the price of confining the negative energy to an extremely thin band around the throat. For example, in one model a throat radius of 1 meter requires the negative energy to be a band no thicker than 10⁻²¹ meter, a millionth the size of a proton.

Visser has estimated that the negative energy required for this size of wormhole has a magnitude equivalent to the total energy generated by 10 billion stars in one year. The situation does not improve much for larger wormholes. For the same model, the maximum allowed thickness of the negative energy band is proportional to the cube root of the throat radius. Even if the throat radius is increased to a size of one light-year, the negative energy must still be confined to a region smaller than a proton radius, and the total amount required increases linearly with the throat size.

It seems that wormhole engineers face daunting problems.
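A quick sanity check on the cube-root scaling in that quote: calibrating with the figures they give (a 1-meter throat needs a band about 10⁻²¹ meters thick), even a light-year-sized throat only relaxes the band to a few times 10⁻¹⁶ meters, still thinner than a proton. Here's a back-of-the-envelope sketch; the calibration numbers come straight from the quote, and the proton radius is the standard ~0.84 femtometers:

```python
# Back-of-the-envelope check of the cube-root scaling quoted above.
# Calibrated so that a 1 m throat requires a band ~1e-21 m thick
# (the figure quoted from Ford & Roman's model).

LIGHT_YEAR_M = 9.4607e15      # one light-year in meters
PROTON_RADIUS_M = 0.84e-15    # charge radius of the proton, roughly

def band_thickness(throat_radius_m: float) -> float:
    """Max negative-energy band thickness for a given throat radius,
    assuming thickness scales as the cube root of the throat radius."""
    t0, r0 = 1e-21, 1.0       # calibration point: 1e-21 m band at 1 m throat
    return t0 * (throat_radius_m / r0) ** (1.0 / 3.0)

one_ly_band = band_thickness(LIGHT_YEAR_M)
print(f"band at 1 light-year throat: {one_ly_band:.2e} m")   # ~2.1e-16 m
print(f"proton radius:               {PROTON_RADIUS_M:.2e} m")
print("band still thinner than a proton:", one_ly_band < PROTON_RADIUS_M)
```

So the quote checks out: blowing the throat up by sixteen orders of magnitude only buys you about five orders of magnitude in band thickness.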


So hopefully this gives you a sense for what an advanced civilization would need to do in order to make a wormhole such as that depicted in Interstellar, assuming it were even possible. They would need to be able to harness amounts of negative energy on par with the total (positive) energy output of billions of stars (like, the energy of an entire galaxy). But on top of that, they would need to find a way to concentrate all of that energy into an extremely tiny space, much smaller than the size of a single proton. This is obviously something that, if it worked, would require a very "post singularity" civilization. But there's a catch-22 here: it's hard to imagine a civilization which could harness all of the energy in an entire galaxy (even if we were talking about positive energy rather than a kind of energy not known to exist in such quantities) without imagining that they had first been able to colonize a galaxy. But if they haven't been able to build a wormhole yet (the most plausible way anyone has come up with for traveling faster than light), then how would they be able to colonize a galaxy? Even if they had lifespans long enough to live for the hundreds of thousands of years it would take to make a roundtrip journey like that, they wouldn't be able to communicate with their home planet, or with other pioneers exploring other regions of the galaxy, while they were traveling. But all of this of course is pure science fiction, since we're not talking about positive energy; we're talking about negative energy which, as I've explained in previous posts, can only exist momentarily in very tiny quantities microscopically.

This leads into my next important point: why couldn't an advanced civilization figure out a way to somehow mine negative energy, picking up tiny quantities of it here and there from different microscopic effects, and store or concentrate it somehow, building up a vast resource of negative energy which they could use to build wormholes?

There are a couple reasons they can't do that. One of course is that in doing so, it would violate the quantum energy inequalities. But even so, given that the full extent of these inequalities is still being worked out and we don't know exactly where or when they apply (for example, it has been proven for flat space and for various curved spaces, but if you add extra dimensions or other weird modifications of gravity, there are still some cases where it remains unproven), is there any more solid reason to think this couldn't be done? The answer is yes, there's a big reason which I had left out of previous posts but which is highlighted in this SciAm article and I've seen reference to in a few other places. The reason is, violating the QEI's would also allow you to violate the 2nd law of thermodynamics, one of the most sacrosanct laws of physics ever discovered, even more sacred (I dare say) than the absoluteness of the speed of light.

One of the unique things about gravity as opposed to the other forces is that, as far as we know, it only acts attractively, not repulsively. This is because the charge associated with this force (mass/energy) is believed to be always positive. (With other forces, such as electromagnetism, the charge--electrical charge--can be positive or negative, and therefore you can get either attraction or repulsion.) A repulsive force (or "antigravity") is what's needed in order to stabilize a wormhole. That's why you need negative energy. And because of quantum uncertainty, you do have a combination of positive and negative energy fluctuations in the vacuum, which average out to zero (or very slightly above zero) over the long run or over large regions of space. But by "long run" and "large regions of space" here we mean compared to the Planck length or the Planck time, which are both very, very tiny.

Because the average has to be zero, if you could take away the negative energy and beam it off into deep space, you would be left with a bunch of positive energy. In other words, you would have extracted positive energy out of the vacuum that you could then use to do useful work. This would be a free and infinite energy source, which is the holy grail for many crackpots who have made it their life's work to try and build perpetual motion machines (and often claim, falsely, to have succeeded).

Negative energy, antigravity, perpetual motion, and breaking the 2nd law of thermodynamics (which implies that you have to expend energy to do useful work; you can't get it for free) are all directly connected to each other. The physics community's firm disbelief in this is why all of the people who claim to have harnessed "zero point energy" are ignored. The 2nd law of thermodynamics is something which should almost not even be regarded as a law of physics, but as a law of mathematics/statistics. It's pretty much a direct consequence of statistics, with only very minimal mathematical assumptions going in (such as ergodicity). If it were broken, it wouldn't just mean the laws of physics don't work; it would mean statistics and basic mathematics don't work.

In the next part, I'd like to connect up some of the issues here with the issues we've been discussing in the development of our low budget sci-fi film. As a teaser, I will say that a big thing I realized after writing all of this is that I've been thinking of "traversable wormhole" the whole time as mostly meaning "something big enough that a human being could pass through it". This is the type of wormhole portrayed in Interstellar. And as I hope you'll agree after reading this far, it seems like one can say with a very high degree of certainty that it would be impossible, even for an infinitely advanced post singularity civilization.

However, there is another class of traversable wormholes I wasn't thinking about much when I started writing this series. And that's microscopic wormholes that could allow a single particle or some other small piece of matter or information to pass through, from one point in space to a very distant point in space. If this were possible, then you would have faster-than-light communication but not faster-than-light travel (unless you could scan every atom of the body, convert it to pure information, beam it through, and reconstruct the body--similar to Star Trek teleportation). It would still give rise to all of the same paradoxes of time travel, but it seems much more difficult to rule out just based on the physical restrictions on negative energy densities. I think it's pretty likely that this type of traversable wormhole is also impossible to build, although in focusing on the big kind of wormhole featured in Interstellar, I was missing what is surely the more interesting question (of how or why we can't build a microscopic traversable wormhole that could be used for communication). This will get us into issues of computational complexity, revisiting Hawking's chronology protection conjecture, and seeing the 2nd law of thermodynamics come up again in a different way.
spoonless: (Default)
In part 3, I mentioned there was a difference between the standard local energy conditions which were originally proposed in classical General Relativity and the "averaged" conditions. But I went off on a tangent about quantum inequalities and quantum interest, and never got around to connecting this back with the averaged conditions or defining what they are.

The local energy conditions originally proposed in GR apply to every point in spacetime. Since general relativity is a theory about the large-scale structure of the universe, the definition of a "point" in spacetime can be rather loose. For the purposes of cosmology, thinking of a point as being a ball of 1km radius is plenty accurate enough. You won't find any significant curvature of spacetime on scales smaller than that, so whether it's exactly 0 in size or 1km in size doesn't matter. But for quantum mechanics, it matters a lot, because it's a theory of the very small scale structure of the universe. There, the difference between 0 and 1km is huge--in fact so huge that even anything the size of a millimeter is already considered macroscopic.

So if you're going to ask whether quantum field theory respects the energy conditions proposed in general relativity, you have to get more precise with your definitions of these energy conditions. The question isn't "can energy be negative at a single point in spacetime?" but "can the average energy be negative in some macroscopic region of space over some period of time long enough for anyone to notice?" The actual definition of the AWEC (averaged weak energy condition) is: the energy density averaged along any timelike trajectory through spacetime is always zero or positive. A timelike trajectory basically means a path that a real observer in space, traveling at less than the speed of light, could follow. From the reference frame of this observer, this just means the energy density at a single point averaged over all time. The ANEC (averaged null energy condition) is similar but for "null" trajectories through spacetime. Null trajectories are the paths that photons and other massless particles follow--all particles that move at the speed of light. A real observer could not follow this trajectory, but you can still ask what the energy density averaged over this path would be.

From what I understand, the quantum energy inequalities are actually a bit stronger than these averaged energy conditions. The AWEC basically says that if there is a negative energy spike somewhere, then eventually there has to be a positive energy spike that cancels it out. The QEI's say that not only does this have to be true, but the positive spike has to come very soon after the negative spike--the larger the spikes are, the sooner.
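To make that trade-off concrete: the original Ford-Roman bound (for a massless scalar field measured along an inertial worldline in flat spacetime, sampled with a Lorentzian window of width t₀, in units where ħ = c = 1) looks schematically like:

```latex
\frac{t_0}{\pi} \int_{-\infty}^{\infty} \frac{\langle T_{00}(t) \rangle}{t^2 + t_0^2}\, dt \;\geq\; -\frac{3}{32\pi^2 t_0^4}
```

The longer you sample (larger t₀), the closer to zero the allowed average energy density; shrink the sampling time and deeper negative dips are permitted, but only briefly. The averaged energy conditions correspond to the t₀ → ∞ limit of statements like this, which is the sense in which the QEI's are stronger.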

However, you may notice that the QEI's (and the averaged energy conditions) just refer to averaging over time. What about space? Personally, I don't fully understand why Kip Thorne and others focused on whether the average over time is violated but didn't seem to care about the average over space. Because the average over space seems important for constructing wormholes too--if you can't generate negative energy more than a few Planck lengths in width, then how would you ever expect to get enough macroscopic negative energy to support and stabilize a wormhole that someone could actually travel through?

I haven't mentioned the Casimir Effect yet, which is a big omission as it's one of the first things people will cite as soon as you ask them how they think someone could possibly build a traversable wormhole. Do the quantum inequalities apply to the Casimir Effect? Yes and no.

As I understand them, the quantum inequalities don't actually limit the absolute energy density; they limit the difference between the energy density and the vacuum energy density. Ordinarily, the vacuum energy density is zero or very close to it. (It's actually very slightly positive because of dark energy, also known as the cosmological constant, but this is so small it doesn't really matter for our purposes.) The vacuum energy is pretty much the same everywhere in the universe on macroscopic scales. So ordinarily, if a quantum energy inequality tells you that you can't have an energy density less than minus (some extremely small number), then this also places a limit on the absolute energy density. But this is not true in the case of the Casimir Effect, because the Casimir Effect lowers the vacuum energy in a very thin region of space below what it normally is. This lowered value of the energy (which is slightly negative) can persist for as long as you want in time. But energy fluctuations below that slightly lowered value are still limited by the QEI's.

This seems like really good news for anyone hoping to build a traversable wormhole--it's a way of getting around the quantum energy inequalities, as they are usually formulated. However, if you look at how the Casimir Effect actually works you see a very similar limitation on the negative energy density--it's just that it is limited in space instead of limited in time.

The Casimir Effect is something that happens when you place 2 parallel plates extremely close to each other. It produces a very thin negative vacuum energy density in the region of space between these plates. To get any decent amount of negative energy, the plates have to be enormous but extremely close together. It's worth mentioning that this effect has also been explained without any reference to quantum field theory (just as the relativistic version of the van der Waals force). As far as I understand, both explanations are valid; they are just two different ways of looking at the same effect. The fact that there is a valid description that doesn't make any reference to quantum field theory lends weight to the conclusion that, despite it being a little weird, there is no way to use it to do very weird things that you couldn't do classically, like build wormholes. However, I admit that I'm not sure what happens to the energy density in the relativistic van der Waals description--I'm not sure there is even a notion of vacuum energy in that way of looking at it, as vacuum energy itself is a concept that exists only in quantum field theory (it's the energy of the ground state of the quantum fields).
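For a sense of scale, the standard quantum field theory result for the Casimir energy density between two ideal, perfectly conducting parallel plates a distance d apart is u = -π²ħc/(720 d⁴). A quick sketch plugging in an aggressive 10-nanometer plate separation (the choice of d here is just for illustration):

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def casimir_energy_density(d_m: float) -> float:
    """Casimir vacuum energy density (J/m^3) between ideal parallel
    plates separated by d_m meters: u = -pi^2 * hbar * c / (720 * d^4)."""
    return -math.pi**2 * HBAR * C / (720.0 * d_m**4)

u = casimir_energy_density(10e-9)    # plates 10 nanometers apart
print(f"energy density: {u:.3e} J/m^3")   # about -4.3e4 J/m^3
```

Even at that separation, the negative energy density is only tens of kilojoules per cubic meter, roughly fifteen orders of magnitude smaller in magnitude than the mass-energy density of ordinary water, and it falls off as the fourth power of the plate separation.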

Most of what I've read on quantum inequalities has come from Ford and Roman. They seem very opposed to the idea that traversable wormholes would be possible. I've also read a bit by Matt Visser, who seems more open to the possibility. The three of them, as well as Thorne, Morris, and Hawking seem to be the most important people who have written papers on this subject. Most other people writing on it write just a few papers here or there, citing one of them. Visser, Ford, and Roman seem to have all dedicated most of their careers to understanding what the limits on negative energy densities are and what their implications are for potentially building wormholes, time machines, or other strange things (like naked singularities--"black holes" that don't have an event horizon).

There are a few more things I'd like to wrap up in the next (and I think--final) part. One is to give some examples of the known limitations on how small and how short lived these negative energy densities can be, and what size of wormhole that would allow you to build. Another is to mention Alcubierre drives (a concept very similar to a wormhole that has very similar limitations). Another is to try to enumerate which averaged energy conditions are known for sure to hold in quantum field theory and in which situations, comparing this with which conditions would need to be violated to make various kinds of wormholes. And finally, to try to come up with any remotely realistic scenario for how this might be possible and give a sense for the extremely ridiculous nature of things that an infinitely advanced civilization would need to be able to do in order for that to happen practically, from a technological perspective.
spoonless: (Default)
So what is this thing called negative energy (also called "exotic matter")? Could it exist somewhere, or if it doesn't exist naturally, is there a way we could somehow generate it?

The two main theories of fundamental physics today are General Relativity and Quantum Field Theory. General Relativity was developed as a way to understand the large scale structure of the universe (cosmology, astrophysics, etc), while quantum field theory was developed as a way to understand the small scale structure (quantum mechanics, subatomic particles, etc.) Putting the two together is still a work in progress and string theory so far seems to be the only promising candidate, but it is far from complete.

General Relativity by itself is usually referred to as a "classical" theory of physics, since it doesn't involve any quantum mechanics. But there has been a lot of work using a "semi-classical" theory called Quantum Field Theory in Curved Spacetime. This is basically quantum field theory but where the space the quantum fields live in is allowed to be slightly curved as opposed to perfectly flat. Because this doesn't work once the curvature becomes too strong, it's not a full theory of quantum gravity, and is only regarded as an approximation. But it has been good enough to get various interesting results (for example, the discovery of Hawking radiation).

In General Relativity by itself (usually referred to by string theorists as "classical GR"), there are a number of "energy conditions" which were conjectured early on, specifying what kinds of energy are allowed to exist. The main ones are the strong energy condition, the dominant energy condition, the weak energy condition, and the null energy condition. As I understand it, all of these are satisfied by classical physics. If there were no quantum mechanics or quantum field theory, then it would be easy to say that wormholes are impossible, since negative energy is not even a thing. But in quantum field theory, the situation is much more subtle. In Kip Thorne's 1988 paper he finds that a variant of the weak energy condition (AWEC = averaged weak energy condition) is the one which would need to be violated in order to construct his wormhole. I've seen more recent papers which focus more on ANEC (averaged null energy condition) though, so perhaps there have been wormhole geometries discovered since which only require violation of the null energy condition.

I'm not going to explain what the difference is between all of these different energy conditions. But I should explain the difference between the "averaged" conditions and the regular ("local") conditions. The weak energy condition says that the energy density measured by every ordinary observer at a particular location in space must be zero or positive. The surprising thing about quantum field theory is that this, as well as all of the other local conditions (local means at a particular point) are violated. In other words, in quantum field theory, negative energy is very much "a thing".

But hold your horses for a second there! Because the thing about quantum field theory is that there are loads of different examples of weird things that can happen on short time scales and at short distances that cannot happen macroscopically. For example, virtual particles exist that travel faster than the speed of light, masses can be imaginary, and energy is not even strictly conserved (there are random fluctuations due to quantum uncertainty). There are particles and antiparticles being created out of the vacuum and then annihilated all the time (quantum foam). There are bizarre things called "ghosts" that can have negative probability (which I won't go into). But when you look at the macroscopic scale, none of these weird effects show up--through very delicate mathematics, they all cancel out and you end up having very normal looking physics on the large scale. It's as if, at the microscopic level, everything is doing something completely weird and bizarre. But if you take an average of what's happening, it all gets smoothed out and you have very solid, reliable macroscopic properties: energy is conserved, probabilities are positive, everything moves at less than the speed of light, etc. These things have been proven and are well understood. So given everything I know about how quantum field theory works, my intuition would be that something similar happens for negative energy: it's the kind of thing that could happen momentarily on the microscopic scale, but would never be the kind of thing one would expect to see on the macroscopic scale. And that's the main reason I've always told people I don't think wormholes are possible, despite not having reviewed most of the relevant literature until this month.

After reviewing the literature, I have seen that over the past 20 years, the case that negative energy cannot exist macroscopically in our universe has grown stronger. Since the mid 90's the focus has shifted from energy conditions to what are known as "quantum energy inequalities" or QEI's. I read a couple of review papers on QEI's, and will try to summarize them in my next part. The gist is that while negative energy can happen locally, there are limits on how negative that energy can be, and the limits depend on what timescale you're looking at. If you want a very negative energy, you will only find it on a very short timescale; a little bit of negative energy might persist on a longer timescale. But once you get to timescales of a second or more, the amount of negative energy you can have at a point is indistinguishable from zero. There is a related idea called "quantum interest": given any negative energy spike, there will be a compensating positive energy spike in the near future that makes everything average out to zero or above. And the time you have to wait for this "payback" in the energy balance is shorter the larger the initial spike.
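To give a flavor of what these inequalities look like, here is the schematic form of a QEI for a free quantum field. I'm writing this from memory of the Ford-Roman style bounds, so treat the constant and the sampling function as placeholders for the structure, not the exact result:

\[ \int_{-\infty}^{\infty} \langle \rho(t) \rangle \, f_{t_0}(t) \, dt \;\ge\; -\frac{C}{t_0^{4}} \]

where \(\rho(t)\) is the energy density along an observer's worldline, \(f_{t_0}\) is a sampling function peaked on a timescale \(t_0\), and \(C\) is a small dimensionless constant. The \(1/t_0^4\) is the whole point: watch for four times as long, and the permitted negativity shrinks by a factor of 256.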

Gotta run for now, but I still have more to summarize on this. To be continued in part IV!
spoonless: (Default)
I started my review of wormholes by reading Kip Thorne's famous paper on them from 1989. Thorne is the T in "MTW": Misner, Thorne, and Wheeler, the authors of Gravitation, written in 1973 and still one of the most widely used textbooks on general relativity.

I'm not actually sure whether Kip Thorne believes that wormholes are possible--I assume he would at least lean towards "no", but I have no idea. You might think that because he has written important papers on them, and because he consulted on a movie that depicts one, he believes they are. But that doesn't follow: theoretical physicists often explore ideas they don't expect to work out, to see where they lead, to find the limits of existing theories, and to uncover new questions or problems with them. I didn't search for comments from him, so I don't know what his present take on them is or whether it has changed. But in his 1989 paper he doesn't say they are possible; he just outlines what the conditions would have to be in order for them to be possible.

In his paper, he does two main things. The first is to construct a simple example of a stable traversable wormhole geometrically. In other words, he describes what the shape of space and time would have to be, and what distribution of energy would be needed in order to create this shape. (Remember, the basic idea of general relativity is that matter and energy warp space and time; given any distribution of energy you get a well defined shape of space and time.) Unfortunately, he finds that the distribution would have to be quite "exotic", meaning it would require a lot of negative energy, a substance very different from ordinary matter and energy. The main question is: could such a substance exist or be created somehow, and if so, could it exist in large enough quantities to make a wormhole? At the time, little was known about the answer, but a lot more work has been done since, and that later work is where I focused most of the rest of my reading.

The second important thing he does in his 1989 paper is to show that if it is possible to create even the simplest kind of wormhole, one that just connects point A to point B in space, then it is also possible to build a time machine out of that wormhole, which could be used for traveling backwards in time.

So while the fact that a prominent, very respectable physicist was even discussing the possibility of wormholes must have been very exciting to the sci-fi community, what they may not have realized is that both of these results make wormholes less likely, not more likely. The first, because he demonstrated that they depend on a substance not known to exist. The second, because time travel brings with it a whole set of causality and consistency problems. If it were possible to build a wormhole that couldn't be made into a time machine, that would be much more believable than a wormhole that could. But sadly, that first scenario doesn't seem possible, at least according to Kip Thorne's 1989 results. However, there is some encouraging news here: in 1992 Stephen Hawking conjectured that there may be weird, as-yet-unknown effects in physics which act to protect against time travel. (He called this the "Chronology Protection Conjecture".) It seems like pure speculation to me, but if Hawking's suggestion is right then there might plausibly be some mechanism that prevents someone from traveling through a wormhole if they plan to travel backwards in time--maybe the wormhole suddenly closes up or becomes unstable. I don't think he has much reason to believe this beyond wishful thinking: it would be nice if some kind of wormhole were possible without having to face all of the obviously troublesome inconsistencies that time travel brings (the grandfather paradox, etc.), so he tried to think of any way in which it could be. Chronology protection is one way of avoiding that problem, but it seems unlikely, and it does nothing to solve the main problem, which is the lack of negative energy.
spoonless: (Default)
I saw Christopher Nolan's latest film Interstellar a couple weeks ago on opening night. I hadn't even heard about it before, my company just decided to buy us all tickets to see it in IMAX, one of the nice things they do for employees once a month or so. I loved it and it left a big impression on me, and I intend to go back and see it again as soon as I get a chance. (Don't worry, I'm not going to spoil anything about the plot here--the intention of this series of posts is not to talk about the movie but about wormholes.)

I mostly loved it for the story, but also because you see concepts from theoretical physics like time dilation, black holes, and wormholes being applied in a relatively mainstream Hollywood movie. Not all of it was very accurate scientifically, but it was still really exciting to see these things show up on the big screen in such a prominent, spectacular way.

Lots of people were asking me about the time dilation effects after the movie, and it took me a few days to remember exactly how everything works and do enough back-of-the-envelope calculations, but I eventually came to the conclusion that there is no realistic way the effects portrayed could have worked the way they did in the movie. Something similar might conceivably have happened if things had been a bit more complicated (for instance, if there were two black holes nearby instead of just one). Maybe Kip Thorne (who consulted on the movie) suggested something like this, but they decided they didn't care enough about the details to get everything exactly right.

Regarding the wormhole itself, I feel like a bit of a hypocrite for getting excited about it. I've always been annoyed at how big the disconnect is between what the sci-fi community thinks is plausible and what is actually plausible according to our latest and most up-to-date scientific knowledge. As I've always told people who ask me, traversable wormholes are most likely completely impossible--not something that any civilization, no matter how advanced, could create. But they show up in sci-fi all the time, as if all we need to do is gain enough technology and then we can figure out how to build one. Worse, the depictions of them in sci-fi are usually nothing like what a real wormhole would look like, in the unlikely case they somehow turned out to be possible. At least in Interstellar they got this right: a real wormhole would look like a sphere, not like a hoop as I've seen in most sci-fi.

After I watched the film, I started thinking about wormholes and the different conversations I've had with people, many of whom tend to be very enthusiastic about the possibility of using them for interstellar travel some day. I always have tried to emphasize that it's very unlikely that they are possible at all, even in principle. But I realized that the truth is--I have never looked into the science behind them deeply enough to know exactly what the reasons for this are, and what possible loopholes there could be that might allow some advanced civilization to build one. So for the past couple weeks I did my due diligence and looked through the current scientific literature to find out what the present state of knowledge is. What is the most solid argument against them being possible--are they almost certainly impossible, or just probably impossible, or is it that we really just don't know whether they are impossible? I think my answer to this is about the same as when I started looking through the literature--they're either very likely impossible or almost certainly impossible depending on who you believe, but as of yet nobody has succeeded in coming up with an absolutely rock solid proof that they are impossible. However, I now know much more of the details than I did a few weeks ago, so I'd like to share them and let you be the judge!
spoonless: (orangegray)
In particle physics, everything is based on symmetry. Symmetry magazine is the main industry newsletter for particle physics. All of the fundamental laws of physics can be expressed as symmetries. Other than symmetry, the only rule is basically "anything that can happen, will happen".

There are two general categories of symmetries in particle physics, internal symmetries and spacetime symmetries. I'm only going to discuss spacetime symmetries here.

Within the category of spacetime symmetries there are continuous symmetries like rotational symmetry (responsible for the conservation of angular momentum), translational symmetry (responsible for regular conservation of momentum), time translation (responsible for conservation of energy), and Lorentz boosts (responsible for Einstein's theory of relativity).

But then there is also another kind of spacetime symmetry--discrete symmetries. There are 2 important discrete spacetime symmetries, and they are pretty simple to explain. The first is called time reversal symmetry, usually denoted by the symbol T. As an operator, T represents the operation of flipping the direction of time from forwards to backwards--basically, hitting the rewind button. Parts of physics are symmetric with respect to T and other parts are not. The other important one is P (parity), which flips space instead of time. It's basically what you see when you look in the mirror, where left and right are reversed and everything is backwards. (Strictly speaking, P inverts all three spatial axes at once, which amounts to a mirror reflection combined with a rotation, but the mirror picture is the right intuition.)

Here is a video of me doing a cartwheel, an everyday process which by itself would appear to break both P and T. The animation shows the forward-in-time process first, which is a right-handed cartwheel, followed by the time reverse, which then looks like a left-handed cartwheel. Because applying T in this case accomplishes exactly the same thing as P (if you ignore the background), this process breaks both P symmetry and T symmetry, but it preserves the combination of the 2, PT:



And now for the front handspring. Unlike the cartwheel, this process respects P symmetry. If you flip left and right, it still looks the same. However, if you time reverse it, it looks like a back handspring instead of a front handspring! So the handspring respects P symmetry but not T symmetry.



Of the 4 fundamental forces of nature--gravity, electromagnetism, the strong force, and the weak force--the first 3 respect time-reversal symmetry while the fourth, the weak force, does not. Because the other 3 are symmetric, it was assumed for a long time (until the 1960's) that all laws of physics had to be symmetric under T. Only in 1964 did the first indirect evidence that the weak force does not respect T symmetry emerge, and more direct proof came in the late 90's and still more interesting examples have piled on within the past decade.
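Here's a toy numerical sketch of what T symmetry means dynamically (my own illustration, nothing to do with the weak force specifically): evolve a system forward, flip the sign of the velocity ("hit rewind"), and evolve forward again. For T-symmetric dynamics you come back exactly to where you started; add a friction term, which microscopically involves the environment, and the rewound motion no longer retraces itself.

```python
def verlet(x, v, dt, n, force):
    # drift-kick-drift (velocity Verlet); this update is exactly
    # time-reversible when the force depends only on position
    for _ in range(n):
        x += 0.5 * dt * v
        v += dt * force(x)
        x += 0.5 * dt * v
    return x, v

def euler(x, v, dt, n, force):
    # simple forward-Euler update for a velocity-dependent force
    for _ in range(n):
        a = force(x, v)
        x, v = x + dt * v, v + dt * a
    return x, v

x0, v0 = 1.0, 0.0

# T-symmetric case: a frictionless spring. Forward, flip v, forward again.
x1, v1 = verlet(x0, v0, 0.01, 1000, lambda x: -x)
xc, vc = verlet(x1, -v1, 0.01, 1000, lambda x: -x)
print(abs(xc - x0))   # essentially zero: the motion retraces itself

# add friction (-0.5*v) and the same trick fails: the "rewound" movie
# keeps losing energy instead of gaining it back
x1, v1 = euler(x0, v0, 0.01, 1000, lambda x, v: -x - 0.5 * v)
xd, vd = euler(x1, -v1, 0.01, 1000, lambda x, v: -x - 0.5 * v)
print(abs(xd - x0))   # order 1: T symmetry is broken by the damping
```

The handspring in the video is the frictionless spring; the damped case is a preview of where the later entries on dissipation are headed.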

spoonless: (orangegray)
Our universe has 3 large spatial dimensions (plus one temporal dimension, and possibly another 6 or 7 microscopic dimensions if string theory is right, but those won't be of any importance here).

Given 3 numbers (say, longitude, latitude, and altitude), you can uniquely identify where a particle is located in space. But the state of a system depends not only on what the particles' positions are, but also on what their momenta are, i.e. how fast they are moving. (The momentum of a particle in classical mechanics is simply its mass times its velocity; when relativistic and quantum effects are taken into account, this relationship becomes much more complicated.) This requires another 3 numbers in order to fully specify the state of a particle.

If you were to specify all 6 of these numbers for every particle in a given system, you would have completely described the state of that system. (I'm ignoring spin and charge here, which you'd also need to keep track of in order to fully specify the state.) In order to categorize all possible states of such a system, you therefore need a space with 6N dimensions, where N is the number of particles. It is this 6N dimensional space which is called "phase space" and it is in this space where wandering sets are defined. The state of the system is represented by a single point in phase space, and as it changes dynamically over time, this point moves around.

In statistical mechanics and in quantum mechanics, you often deal with probability distributions rather than a single state. So you might start out with some distribution of points in phase space--some fuzzy, cloudy region in the neighborhood of a point, for instance. And as the system evolves, this cloud can move around and change its shape. But one really central and important theorem in physics is Liouville's theorem... it says that as this cloud of probability moves around, the volume it takes up in phase space always remains constant. This theorem can be derived from the equations of motion in classical or quantum mechanics: the flow through phase space generated by Hamilton's equations is incompressible, so the cloud behaves like an ideal fluid that can change shape but can never be squeezed or expanded. It is closely tied to energy conservation, which in turn is a consequence of the invariance of the laws of physics under time translations. At any given moment in time, the basic laws of physics appear to be the same; they do not depend explicitly on time, so energy is conserved and so is the volume of this cloud in phase space. It can morph into whatever different shape it wants as it wanders around, but its total volume in phase space must remain constant.

Systems that obey Liouville's theorem are called conservative systems, and they don't have wandering sets. Systems that do not obey Liouville's theorem are called dissipative systems, and they *do* contain wandering sets.
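The contrast is easy to see numerically in the simplest possible phase space, the 2 dimensions (one x, one v) of a single oscillator. This is my own toy sketch: take a small triangle of initial conditions, evolve each corner, and measure the triangle's area. (Because the dynamics here is linear, the three corners determine the evolved triangle exactly.) The conservative update preserves the area; adding friction shrinks it, which is exactly the contraction of the cloud that signals dissipation.

```python
def shoelace(p, q, r):
    # area of the phase-space triangle with corners p, q, r
    (x1, v1), (x2, v2), (x3, v3) = p, q, r
    return abs((x2 - x1) * (v3 - v1) - (x3 - x1) * (v2 - v1)) / 2

def evolve(point, dt, n, gamma=0.0):
    # semi-implicit Euler for a unit-mass, unit-frequency oscillator,
    # with optional friction -2*gamma*v; for gamma=0 each step of this
    # update is exactly area-preserving (a discrete Liouville theorem)
    x, v = point
    for _ in range(n):
        v += (-x - 2 * gamma * v) * dt
        x += v * dt
    return x, v

corners = [(1.0, 0.0), (1.1, 0.0), (1.0, 0.1)]
area0 = shoelace(*corners)

# conservative system: the cloud moves and shears, but its area is unchanged
cons = [evolve(c, 0.001, 2000, gamma=0.0) for c in corners]
print(area0, shoelace(*cons))    # same area

# dissipative system: the cloud contracts toward the resting state
diss = [evolve(c, 0.001, 2000, gamma=0.5) for c in corners]
print(shoelace(*diss))           # noticeably smaller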

But wait-- I just said that Liouville's theorem follows from some pretty basic principles of physics, like the fact that the laws of physics are the same at all times. So doesn't that mean that all physical systems in our universe are conservative--in other words, that there really is no such thing as dissipation? And does that in turn mean that entropy never really increases, it just remains constant?

This is one of the most frustrating paradoxes for me whenever I start to think about dissipation. It's very easy to convince yourself that dissipation doesn't really exist, but it's equally easy to convince yourself that all real world systems are dissipative, and that this pervasive tendency physicists have for treating all systems as conservative is no better than approximating a cow as a perfect sphere (a running joke about physicists).

I'll let this sink in for now, but end this part by hinting at where things are going next and what the answers to the above paradox involve. In order to understand the real distinction between conservative and dissipative systems, we have to talk about the difference between open and closed systems, about the measurement problem in quantum mechanics, what it means to make a "measurement", and how to separate a system from its environment, and when such distinctions are important and what they mean. We need to talk about the role of the observer in physics. One of the popular myths that you will find all over in pop physics books is that this role for an observer, and this problem of separating a system from its environment, was something that came out of quantum mechanics. But in fact, this problem has been around much longer than quantum mechanics, and originates in thermodynamics / statistical mechanics. It's something that people like Boltzmann and Maxwell spent a long time thinking about and puzzling over. (But it's certainly true that quantum mechanics has made the problem seem deeper and weirder, and raised the stakes somewhat.) Philosophically, it's loosely connected to the problem of the self vs the other, and how we reconcile subjective and objective descriptions of the world. In short, this is probably the most important and interesting question in the philosophy of physics, and it seems to involve all areas of physics equally, and it all revolves somehow around dissipation and entropy. To be continued...
spoonless: (orangegray)
I gave 3 examples of things that dissipate in part 2: friction, electrical resistance, and hurricanes. I feel like I understand fairly well why we call these dissipative, although I've always felt, or hoped, that there is some unifying principle that sheds more light on the subject and explains why these things and not others. But there's a fourth example that is far more interesting, and for that example I still don't feel like I really understand why exactly it's dissipative: computation.

Now, you might first think--maybe computation is dissipative because it involves the flow of electricity through circuits (whether those circuits be wires or microchips), but that's beside the point. First, as I understand it, any kind of physically irreversible computational process must necessarily dissipate heat and increase entropy. So this applies not just to electrical circuits but to anything we could conceivably use to compute an answer to something, including, for example, an abacus (of course the amount of computation that can be performed by an abacus is presumably so tiny that you wouldn't notice). Second, it's not just the electrical resistance, because, supposedly, computers actually draw *more* electricity while they are engaged in some intense computation than when they are just idling. There are many circuits which are on while the computer is doing nothing, but it's not being on that creates the entropy I'm worried about... it's the entropy created specifically by irreversible computation, by switching those circuits on and off in just such a way that the machine computes a simple answer to a more complex question fed to it. Beforehand, there are many possible answers, but afterwards, there is only one... for example, 42. This reduces the available microstates of the system from many to one, and therefore represents a reduction of entropy (which, remember, counts the number of available microstates). Because of the 2nd law, this cannot happen by itself without producing heat... the heat is needed to cancel out that entropy loss with a compensating entropy gain, for exactly the same reason that the earth must dump heat into its environment if evolution is to result in more highly organized organisms. So even a perfectly efficient computer which caused no net entropy gain for the universe would still produce heat!
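The quantitative version of this is Landauer's principle: erasing one bit of information must dissipate at least kT·ln 2 of heat. A quick back-of-the-envelope calculation (my own numbers, just plugging into the formula) shows why nobody notices this floor in practice--it is many orders of magnitude below what real hardware dissipates:

```python
import math

k_B = 1.380649e-23   # Boltzmann's constant, in J/K
T = 300.0            # room temperature, in K

# Landauer limit: minimum heat dissipated per erased bit
landauer_joules = k_B * T * math.log(2)
print(landauer_joules)          # ~2.9e-21 J per bit

# even erasing a full gigabyte at the Landauer limit costs almost nothing
per_gigabyte = landauer_joules * 8e9
print(per_gigabyte)             # ~2.3e-11 J -- utterly negligible
```

Real chips dissipate vastly more than this per logical operation; the Landauer bound only tells you the part of the heat that is demanded by the logic itself being irreversible.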

The only exception to the above process is if, instead of taking a large set of inputs and reducing them to one output, all of the inputs and outputs correspond exactly in a 1-to-1 fashion--in other words, you use only reversible logic gates to build the computer. An example of an irreversible logic gate is an AND gate. It takes 2 inputs and has 1 output; it outputs "Yes" if both the inputs are on, and "No" if either one of them is off. Another example is an OR gate, which outputs "Yes" if either input is on, and "No" if both are off. To build a reversible gate, you need at least as many outputs as inputs, with the inputs and outputs matched up one-to-one, so that if you ran the computation backwards, you could recover the question from the answer. For example, if you put 42 into the computer, it should be able to spit out what the ultimate question is, just as easily as going the other direction. This is the meaning of reversibility.
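For concreteness, here is the standard trick for embedding an irreversible AND into reversible logic: the Toffoli gate, a 3-in/3-out gate that keeps the inputs around and writes the answer into a spare third bit. It is its own inverse, so "running the computation backwards" literally recovers the question. (A minimal sketch of the textbook construction.)

```python
def toffoli(a, b, c):
    # reversible 3-bit gate: flips c exactly when a and b are both 1;
    # with c=0 the third output is AND(a, b), and the inputs survive
    return a, b, c ^ (a & b)

# it computes AND without destroying any information...
assert toffoli(1, 1, 0) == (1, 1, 1)
assert toffoli(1, 0, 0) == (1, 0, 0)

# ...and it is its own inverse: applying it twice undoes it,
# for every one of the 8 possible input states
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)
```

The price of reversibility is the extra "garbage" bits you have to carry along; throwing them away is exactly the erasure step that Landauer's principle charges you for.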

Maxwell's demon is a thought experiment that James Clerk Maxwell came up with which illustrates how weird this connection between entropy and information is. Imagine a little demon watching the individual molecules in a box, with a switch that can slide a divider instantly into the middle of the box. He sits there and watches for the moment when a gas particle (normally bouncing around randomly in the box) is about to cross from the left half into the right. If he presses the switch at just the right time, he can bounce that particle back into the left side without expending any energy, while letting particles cross freely in the other direction. If he keeps doing this for hours and hours, eventually all of the gas particles will have randomly wandered into the left side of the box and gotten stuck there, because he puts in the partition just as they try to cross back over to the right. Because entropy is connected to volume (smaller volumes have a smaller # of microstates), the final state has less entropy than the initial state, due to having half the volume. And yet, no work was done and no heat was expended in the process! This seems to be a blatant violation of the 2nd law of thermodynamics. So what happened here?
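You can watch the demon do his work in a toy simulation (my own sketch--real kinetic theory lives in that 6N-dimensional phase space, whereas these are just random walkers on a line): the demon acts as a one-way valve at the midpoint, and eventually the whole gas is trapped in half the volume with no work done.

```python
import random

random.seed(1)
N, STEPS, DX = 50, 20000, 0.01
xs = [random.random() for _ in range(N)]    # gas spread through the box [0, 1]

for _ in range(STEPS):
    for i in range(N):
        trial = xs[i] + random.choice((-DX, DX))
        if trial < 0.0 or trial > 1.0:
            continue                        # bounce off the walls of the box
        if xs[i] < 0.5 <= trial:
            continue                        # demon blocks left-to-right crossings
        xs[i] = trial

frac_left = sum(1 for x in xs if x < 0.5) / N
print(frac_left)    # approaches 1.0: the gas ends up in half the volume
```

The simulation makes the puzzle vivid: the valve never pushes on anything, yet the entropy drops. The catch, of course, is the information the demon has to process in order to time the valve.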

Well, in the real world, demons don't exist. And humans do not have the supernatural powers that would enable them to see individual gas particles zipping around in a box. But what if we set up a computer to play the role of the demon? In principle, a computer could detect a gas particle much faster than a human, maybe even as fast as Maxwell's hypothetical demon. But if it does this, it has to either store the information about where each of the gas particles is, or temporarily watch each gas particle for a moment and then forget about it. If it stores this information, then it needs to fill up an enormous memory storage system. If it wants to keep the storage from getting out of hand, then it has to erase some of this information at some point... and erasure of information is an irreversible process. Because it is irreversible, it must dissipate heat. I mostly understand this part of Maxwell's demon. The other part I've always been a little fuzzy on, though... what happens if the computer chooses to just store more and more information in its memory? Then it will be filling up its memory with more and more information about the trajectories of the billions and billions of particles in the box. But does this in itself represent an increase in entropy? Or is it just the erasure of such information which increases entropy? It seems to me that writing a single value into a memory location which could previously have taken multiple values represents a decrease in entropy. It would seem that storing it decreases entropy, and then erasing it undoes that, increasing it again. But I must be thinking about things a bit wrong there. I admit, this is where my understanding has always grown a bit fuzzy.

In the next part, I hope to actually get to wandering sets, and by extension, the Boltzmann brain paradox, Poincaré recurrence, and Liouville's theorem. But maybe that's ambitious. To be continued...
spoonless: (orangegray)
In the summer of 2003, exactly one decade ago, I had a few months to spare before I started graduate school, with no full-time employment. So I spent that time reading and reviewing as many books on physics as I could--partly to prepare for graduate school, partly for fun, and partly because I knew the first thing I would have to do when I got to California was take the PhD qualifying exams. One of the most fun parts for me was rereading the interesting parts of my undergrad textbook on statistical mechanics, and making sure that I understood entropy and related concepts very well.

The 2nd law of thermodynamics says that entropy always increases as time progresses. One of the most common popular conceptions of entropy is that it's sort of like a measure of disorder. That's not exactly true--rigorously, it's the logarithm of the number of microstates in a system that correspond to a given macrostate--but if you're reading this and not familiar with it at all, disorder should be the first thing you think of when you think of entropy. The visual picture of the 2nd law you should have is that when eggs fall from a great height, they naturally break into little tiny pieces. But you never see the time-reversed process happening: little tiny splinters of eggshells gathering themselves up and coming together to form a whole egg. In this way, there is an "arrow" associated with time; it has a direction to it that progresses from low entropy states to high entropy states but not the reverse. Even though all of the microscopic laws of physics are 100% reversible, the macroscopic progression of states is irreversible.
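The microstate/macrostate distinction is easy to make concrete with coin flips (a standard toy model, and my own choice of numbers): the macrostate "n heads out of N coins" corresponds to C(N, n) microstates, and the entropy is the log of that count. The 50/50 macrostate has overwhelmingly more microstates than any ordered one, which is why a randomly shuffled system almost always looks disordered.

```python
from math import comb, log

N = 100

def S(n):
    # entropy of the macrostate "n heads out of N coins", in units of k_B:
    # the log of the number of microstates realizing that macrostate
    return log(comb(N, n))

print(S(50))   # ~66.8: the most disordered macrostate, most microstates
print(S(90))   # ~30.5: a highly ordered macrostate, far fewer microstates
print(S(0))    # 0.0: exactly one microstate -- all tails
```

An "egg breaking" is just the system drifting from an S(0)-like macrostate toward an S(50)-like one, simply because that's where almost all the microstates are.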

One argument that advocates of intelligent design often make is that if natural processes always go from low entropy states (less disorder) to high entropy states (more disorder), then there is no natural process that can explain the spontaneous emergence of order from nothing. Hence, they claim, there must have been some intelligent being who injected such order into the system, unnaturally. So one of my projects that summer was to understand what's wrong with this argument, and why it isn't the conclusion one draws if you actually understand what entropy is and how it works. For the most part, I succeeded in understanding this--to sum it up, it's mostly that they are forgetting that the earth radiates heat out into space, and heat is a form of disorder; it contributes to entropy. The earth absorbs energy from the sun, turns some of it into useful work, organizing things, and lets some of it messily spill out into the surrounding space. Entropy decreases locally here on earth, in that things look more organized every century than they did the last, but this comes at the cost of dumping all that messy heat into outer space. (And lately, it seems, not all of that heat has even been making it out into space; some of it has been getting trapped in the atmosphere, warming the planet and causing disorganized things to happen like freak hurricanes. Probably another part of science that intelligent design advocates don't believe in.)

I say I succeeded for the most part. But after I made my own notes about everything--what energy, temperature, entropy, pressure, and volume are, and how these macroscopic properties of our world arise from microscopic physical laws that determine the motion of individual particles--after I'd mapped out all that, I realized there was one missing piece. I didn't quite understand the definition of reversibility, and how to tell what kind of system would be reversible and what kind wouldn't. I knew that reversible processes were those where entropy remains constant, rather than increasing. My textbook said that a reversible process was one which was "slow enough to stay in equilibrium" and free of any "dissipative forces". I understood the first half of that definition. If you expand a box to twice its size instantly, you go out of equilibrium because the air inside expands very rapidly, and this has the effect of increasing the entropy (entropy grows with volume, for a fixed temperature). By contrast, if you double the size by slowly and carefully expanding it, then you cool the gas as you expand it and the whole thing stays in equilibrium the whole time... there is no entropy increase, because miraculously, the entropy that would have been added by the increase in volume is cancelled out exactly by the decrease in temperature. The former process is irreversible, while the latter is reversible. But then there was this pesky second qualification they added to the definition... not only does it have to be slow, but it also has to lack "dissipative forces". But what the heck is a dissipative force? I searched all over for a definition of this but didn't find a good one that made sense to me.
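The two ways of doubling the box can be checked directly from the textbook entropy of a monatomic ideal gas, S/(N k) = ln(V T^(3/2)) + const (my own numerical sketch of that standard result). In the sudden expansion the temperature stays put and the entropy jumps by ln 2 per particle; in the slow adiabatic expansion the gas cools by exactly the factor that cancels the volume term.

```python
from math import log

def entropy_per_particle(V, T):
    # monatomic ideal gas: S/(N k_B) = ln(V * T**1.5) + const
    # (the constant drops out of entropy *differences*)
    return log(V * T**1.5)

V1, T1 = 1.0, 300.0

# free (sudden) expansion: volume doubles, temperature unchanged
dS_free = entropy_per_particle(2 * V1, T1) - entropy_per_particle(V1, T1)
print(dS_free)        # ln 2 ~ 0.693 per particle, in units of k_B

# slow adiabatic expansion: T * V**(2/3) stays constant, so T drops
T2 = T1 * (V1 / (2 * V1)) ** (2 / 3)
dS_slow = entropy_per_particle(2 * V1, T2) - entropy_per_particle(V1, T1)
print(dS_slow)        # 0: the cooling exactly cancels the volume gain
```

The cancellation isn't a miracle once written out: the volume contributes +ln 2 and the cooling contributes (3/2)·ln(2^(-2/3)) = -ln 2.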

I knew various examples of forces that I thought of as being dissipative... friction, electrical resistance, hurricanes. And all of these things have certain features in common... they all seem like messy kinds of things that mess up the nice clean elegant laws of physics you usually learn in college, where you can count on things like conservation of energy and path independence. Electrical resistors radiate heat, and friction gives off heat. Hurricanes (or tornadoes, cyclones, or any kind of vortex in a fluid) exchange various stuff, including heat, with their surroundings. But if giving off heat was all there was to it, why didn't they just say that the system can't exchange heat with its surroundings? Couldn't they have just said it has to be a closed system? And most importantly, what is it really about these processes that makes them the kind of things that give off heat, while other types of processes don't? And what about mixing processes--when you pour cream into a coffee cup and let it sit for a while, it eventually mixes into the coffee. This is an irreversible process, as it won't naturally unmix, and therefore it does increase entropy. But in what sense is this "dissipative"? Surely it was a slow process. Although perhaps it couldn't have been said to be in equilibrium before it got done mixing.

The concept of a wandering set purports to explain what exactly a dissipative system is, so I feel like it is the missing part I was looking for here. To be continued in part 3...
spoonless: (orangegray)
There's a connected set of issues that rests at the very heart of physics, which I've always thought was not very clearly explained in any of my classes. I had a very vague understanding of it while I was an undergrad, and hoped that I'd be able to sharpen it up in graduate school. To some extent, I did, but I found that even after 6 years of graduate school in physics, these issues were still never quite cleared up in my classes, and I never had enough time while working on other things to read enough on my own about them to fill in all the gaps in my understanding. I have always had the impression that these issues *are* well understood, at least by somebody, but that not many physics professors do understand them fully and they tend to just avoid talking about them in actual classes, or talk about them in a very superficial way. I do think my adviser was among those who understood them, but I always felt a bit shy about wasting his time on questions that were purely for my own curiosity and unrelated to any research we were working on. Especially ones that would take a long time to sort out.

Recently, I've started thinking about them again, but through an unexpected route. A couple months ago, OkCupid added support for bitcoin, so as we got closer to the release of it, several of the people I work with would tend to get into idle conversations about bitcoin during the day. A few of them I got involved in, and in one of them we got onto the subject of bitcoin mining. This is where your computer performs an enormous brute-force search, repeatedly hashing candidate blocks until it finds one whose hash falls below a difficulty target, thereby extending the block chain, and as a reward you collect newly minted bitcoins that are conjured into existence. One guy asked the group if it would be worth it to dedicate his server at home to bitcoin mining. Another commented immediately that it would never be worth it with a standard PC; you'd have to buy very expensive specialized hardware to do it. The original asker of the question objected that he wasn't using his server for anything else anyway, but the cynic replied that with a standard PC, the cost of running the hardware in an increased electric bill was greater than the payoff. (An idle server draws less electricity than one engaged in difficult computation.) This got us onto a tangent, about whether you could cool the computer down to a point where it didn't cost as much to run because it wasn't dissipating as much heat. Naturally, we went from there to discussing why computing necessarily dissipates heat, and what the ultimate physical limits of that process are. He mentioned that only quantum computers were reversible and didn't dissipate heat. I was first made aware of this fact in 1998, during a graduate quantum computing class that I took as an undergrad at Georgia Tech, where we read some relevant papers on the subject. But while I have long had a superficial understanding of why this is true, and I can repeat the standard things that people say about it, I admitted to him that I never quite understood it fully.
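Incidentally, the "difficult mathematical task" in mining is nothing exotic--just grinding through hashes. A toy version of proof-of-work (illustrative only; real mining double-hashes a binary block header and compares against a full 256-bit target, and the string format here is my own invention):

```python
import hashlib

def mine(block_data, difficulty):
    # try nonces until the SHA-256 hash starts with `difficulty` zero hex digits
    nonce = 0
    while True:
        h = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

nonce, h = mine("some transactions", 4)   # ~65,000 attempts on average
print(nonce, h)
```

The electric bill my coworkers were debating is precisely the cost of churning through those attempts, which is what makes the thermodynamics-of-computation question feel so concrete.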
He then started talking rapidly about the many worlds interpretation of quantum mechanics, and how the so-called "measurement problem" is really just decoherence and thermodynamics. He said he thought it was fairly straightforward, and this is the point where I mentioned that I had a PhD in theoretical physics and had spent many years working on related topics, but felt that there was just always a missing piece for me conceptually surrounding entropy, Maxwell's demon, reversibility, and dissipation. He shrugged and said "well, it seems straightforward, but I guess if you've studied this more then there must be more to the story than I'm aware of." I told him I didn't want to get into discussing interpretations of quantum mechanics, because it was too complex a subject, and I probably couldn't say anything more on reversibility because I didn't know how exactly to articulate what was missing from my understanding of things.

Every few years I pick up this topic, or one of a related set of topics that ties into it, and try to understand it, and I always make a bit more progress. I think the last bit of big progress I made was in reading Leonard Susskind's book on black holes and information, which helped me understand both black hole complementarity and more about the density matrix and how entropy and information work in quantum mechanics. But this conversation renewed my interest again, so for the past month or so my mind has occasionally drifted back to it, and a couple days ago I managed to stumble onto the Wikipedia page for "Wandering Sets", which I think is a huge part of the missing piece for me. For some reason, wandering sets were never mentioned in any of my undergrad or graduate classes, and yet they seem absolutely crucial to understanding what the word "dissipation" means. Until yesterday, I had honestly never even heard of them. It's no wonder that I went through undergrad and graduate school always feeling frustrated when professors would use the word "dissipation" without giving any definition for it, just assuming that we all knew what it meant. Unfortunately, I feel like the Wikipedia page is poorly written, and there are two facts it mentions which seem obviously contradictory to me.

I will explain in part 2 what I've learned so far, and how wandering sets are related to the ergodic theorem, Liouville's theorem, and pretty much every foundational area of physics. And then I'll go into what I still don't understand about it--perhaps after this I need to find the right book that covers this stuff. It seems very weird to me that they always gloss over it in physics classes.
spoonless: (morpheus-far)
It seems that top-notch physicists have now discovered Nick Bostrom's ingenious "Doomsday Argument".

Eternal Inflation Predicts That Time Will End
http://arxiv.org/abs/1009.4698

"If you accept that the end of time is a real event that could happen to you, the change in odds is not surprising: although the coin is fair, some people who are put to sleep for a long time never wake up because they run into the end of time first. So upon waking up and discovering that the world has not ended, it is more likely that you have slept for a short time. You have obtained additional information upon waking—the information that time has not stopped—and that changes the probabilities. However, if you refuse to believe that time can end, there is a contradiction. The odds cannot change unless you obtain additional information. But if all sleepers wake, then the fact that you woke up does not supply you with new information."


Lending some weight to this theory is the fact that both Peter Woit and Lubos Motl think the paper is complete nonsense (Motl's rant on it is particularly entertaining and vacuous), since both of them are idiots (although usually in polar opposite ways)!

I've always thought of Raphael Bousso as a better physicist than ex-physicist Lubos Motl ever was, and certainly better than mathematics lecturer Peter Woit. I suppose that doesn't guarantee that it's right, though.

Normally I wouldn't pay much attention to a headline like this, but Bousso is actually someone I have a lot of respect for. And to add to that, I have found Bostrom's Doomsday Arguments in the past fairly persuasive (at least more convincing than not), which have a similar flavor... although Bostrom's arguments were far less technical in nature. This may give a more solid, physical basis to the idea that being a good Bayesian entails believing we are all doomed.
spoonless: (morpheus-far)
In parts 2 and 3, I discussed one example of a quantum anomaly, the chiral anomaly generated by the electroweak sphaleron that could explain the baryon asymmetry of the universe (how there got to be more matter than antimatter).

Now let's take a look at another kind of quantum anomaly: the conformal anomaly. Personally, I consider the conformal anomaly to be the most interesting kind of anomaly of them all (although perhaps if I'd stayed in physics longer I would have discovered even more interesting ones, you never know).

What is the conformal anomaly?

The conformal anomaly is an anomaly that shows up in a lot of different quantum field theories, in one way or another. It has to do with a particular kind of symmetry called conformal symmetry. Said in one sentence, conformal symmetry is the symmetry group of transformations in a vector space that preserve angles, but not scale. This would include any kind of rigid rotations or scaling, but not stretching or twisting. It would also include more complicated transformations, although I'm not sure how to describe them (maybe I will link to a picture in the next post). For example, in ordinary 3-dimensional space, if you had a hologram of the Wizard of Oz's head floating in space... a conformal transformation on the hologram would be to make the head look bigger or smaller, or rotate it around, perhaps upside down or sideways. An example of something that would *not* be a conformal transformation would be squishing the head in such a way that the face looked wider than it usually does, or taller than it usually does. In other words, if it looks distorted in a way that changes the "aspect ratio", then you've gone outside of the conformal symmetry group. If you stick to only rotations and scaling (or other things that preserve all angles within the hologram), then you're still in the conformal symmetry group. Of course, the Wizard of Oz's head is an example of something that does *not* have conformal symmetry. It does not have conformal symmetry because if you do a conformal transformation on it, it looks different (like, rotated or scaled).
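For the mathematically inclined, the hologram picture can be stated compactly: a conformal transformation is one that rescales the metric (the object that measures lengths) by a position-dependent factor,

```latex
g'_{\mu\nu}(x) = \Omega^2(x)\, g_{\mu\nu}(x)
```

Rigid rotations have \Omega = 1 everywhere, and a uniform scaling x \to \lambda x has \Omega = \lambda, a constant. Since the angle between two vectors is a ratio of metric contractions, the factor \Omega^2 cancels out of it--which is exactly the statement that angles are preserved while sizes are not.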

So if the Wizard of Oz's head does not have conformal symmetry, then what *would* have conformal symmetry? Well, the rotation part is easy. In order to be symmetric under rotations, you'd have to have something like a sphere, where it looks the same no matter how you rotate it. Although a sphere itself won't work, because a sphere is not scale invariant--it has a particular radius. If you expand the sphere or shrink the sphere, you can tell that it's different. It has grown or it has gotten smaller. In other words, the sphere has a particular size, a "characteristic length" associated with it. For something to be truly conformally symmetric, it would need to have no size or all sizes at once. A simple example in ordinary 3d space would be a point--a second example would be the entire space. In either case, no matter how much you scale it up or down, it still looks the same. The size of the point is 0, even if you multiply 0 by 100. The size of the entire space is infinite, and even if you multiply it by 100 it's still infinite. I'm not sure there are other examples in 3d space, but quantum field theories happen in a much more complicated space than 3 dimensional space. One important thing with quantum field theories is that usually they are mathematically defined in such a way that there are a lot of "non-physical" properties of them, as well as some "physical" properties. And in order to obey conformal symmetry, you don't have to worry about the non-physical aspects, just the physical ones. To use a fairly loose analogy, if you hypothetically performed a rotation on a being that had a soul, and the actual material flesh and bones were symmetric (came back to the same position after the rotation) but you could tell that the soul had been rotated, then that being would still be considered symmetric for "all practical purposes" since it only differs by something that's non-physical and hence non-measurable. 
In physics, non-physical things are treated as an issue of "useful redundancy" in a theory. You deliberately include things in your metaphysical framework that are extra, beyond the measurable parts of the theory, because it makes the math easier or more elegant, but then you ignore them in making physical predictions. Often, there can be multiple ways of defining a theory where the non-physical elements are different, but the physical ones are identical. These are treated, essentially, as part of the subjective "language" that you're using to describe the world rather than part of the actual physical objective world itself... as are all metaphysical things. You can have equivalent models of reality even if you start from different frameworks or perspectives--that's a really important thing in physics actually, that shows up all over the place, but especially in relativity and quantum mechanics. These are often called "dualities" (for instance, wave-particle duality).

The rotational aspect of conformal symmetry is not the interesting part, it's more the scaling that's interesting. If something remains the same when you expand or shrink it, it's called "scale invariant". All conformal quantum field theories (also called "conformal field theories") are scale invariant. If a quantum field theory isn't scale invariant, then it doesn't have conformal symmetry and is therefore not considered a conformal field theory.
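A toy way to see what "no characteristic length" means, using ordinary functions rather than quantum fields (so this is just an illustration, not field theory): a power law keeps its shape under rescaling, while an exponential--which has a built-in scale--does not.

```python
import math

# A power law f(x) = x**d has no characteristic length: rescaling x -> s*x
# just multiplies f by the constant s**d, the same for every x, so the
# *shape* is unchanged. An exponential exp(-x/L) has a built-in scale L,
# and rescaling changes its shape, not just its overall normalization.

def power_law(x, d=-2.0):
    return x ** d

def exponential(x, L=1.0):
    return math.exp(-x / L)

s = 10.0                    # rescale all lengths by a factor of 10
xs = [0.5, 1.0, 2.0, 4.0]   # sample points

# For the power law, f(s*x)/f(x) is the same constant (s**-2 == 0.01) at
# every sample point...
ratios_pl = [power_law(s * x) / power_law(x) for x in xs]
# ...but for the exponential, the ratio depends on x: a scale has crept in.
ratios_exp = [exponential(s * x) / exponential(x) for x in xs]

print(ratios_pl)
print(ratios_exp)
```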

Now, to move beyond the more visualizable case of the hologram in 3d space, what does conformal symmetry mean in an actual quantum field theory, which is a mathematical structure that requires infinite-dimensional space to define? Well, you can sort of picture a quantum field theory itself as being "the laws of physics" for a particular universe. If those laws have particular constants in them that define a particular length scale, then just like the sphere that has a particular radius, they would not be scale-invariant and therefore have no conformal symmetry. One way to look at it is to ask the question "what would happen if I were to double the size of everything in the universe... would the laws of physics change?" Although there is a bit more to it than that, because in quantum field theory, length is something that is connected to energy (and therefore mass), which is connected to momentum, which is connected to time, which is then connected back to space. So if you double all the lengths, you also have to halve all the energies and momenta, and double all the time intervals. All four of these types of things (space, time, energy, and momentum) can be measured in the same "natural units" instead of the more widely known "engineering units" (kilograms, meters, seconds, etc.), where they look like completely different types of things. Natural units in physics are units where you consider the speed of light to be equal to 1, and Planck's constant (the fundamental constant of quantum mechanics, which separates the weird microscopic quantum world from the more normal macroscopic classical world) also equal to 1. (Technically, it is really Dirac's constant that is set to 1, but since everyone calls it Planck's constant colloquially, I will too.)
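To make the length-energy connection concrete: in natural units an energy *is* an inverse length, and the conversion factor back to engineering units is hbar*c, about 197.3 MeV·fm. A small sketch (the helper name is just mine, for illustration):

```python
# In natural units (hbar = c = 1), an energy and an inverse length are the
# same quantity. The conversion factor back to engineering units is
# hbar*c ~= 197.327 MeV*fm, so doubling every length in a problem is
# exactly the same statement as halving every energy.

HBAR_C_MEV_FM = 197.3269804  # hbar * c in MeV * femtometers (CODATA)

def energy_to_length_fm(energy_mev):
    """Length scale in femtometers corresponding to an energy scale in MeV."""
    return HBAR_C_MEV_FM / energy_mev

l1 = energy_to_length_fm(1000.0)  # 1 GeV corresponds to ~0.197 fm
l2 = energy_to_length_fm(500.0)   # halve the energy...

print(l2 / l1)  # 2.0 -- ...and the length doubles
```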

Units are a pretty fundamentally important thing in physics. If I had to pick a "second most important thing in physics" besides symmetry, I would probably pick units. And they probably seem pretty boring if you have only experienced them at the high school or undergrad level. But later on, they actually become more and more interesting as you learn to do "dimensional analysis" and other neat tricks, that actually expose deep symmetries and properties of the theories or systems you're considering. One way of saying why units are so important, is that they are related to what I was saying earlier, about the difference between the metaphysical frameworks we use (the "language" that reality is constructed out of) and reality itself. Units are symbols that label things, they are the way in which things are measured, so they are connected to the whole idea of measurement and observation, and the fact that if you measure something you have to have some kind of measuring stick to measure it by. When people talk about the "role of the observer" in quantum mechanics, and say that the "measurement paradox" is what makes quantum mechanics so deep and interesting... I would agree, although I would also add that units are deep and interesting for exactly the same reason. And this is what the term "dimensional transmutation" has to do with.

Having gone over most of what conformal symmetry is, and begun to introduce the interesting aspects of units in physics (I have a lot more to say about them though), I will break here and then in part 5 we should be ready to jump right into dimensional transmutation itself, which is what happens when you have conformal symmetry broken by a quantum anomaly. Dimensional transmutation is the core of what I wanted to talk about in this series of posts, although as usual... it has taken me much longer than I had expected to introduce the subject =)
spoonless: (neo)
Ok, so it looks like everything I said in my last post was correct. It's inevitable that I start doubting my memory after I write things, especially when it's about stuff I worked on nearly 5 years ago. I gave http://arxiv.org/abs/hep-ph/9803479 a quick skim and I think it backs up everything I said regarding the sphaleron process. I was not confusing anything with QCD. However, there is one thing I forgot which is that most of the transition probability comes from thermal processes rather than quantum tunneling, the tunneling rate at zero temperature being extremely tiny. This could use some clarification.

When I referred to transitions between different topologically inequivalent vacuum states as "untying a knot without letting go of either end of the string", this may seem like a pretty random and weird analogy. And the fact that the fields can also untie the knot via thermal fluctuations makes me realize that I may have exaggerated how miraculous it is. Nevertheless, it's still pretty neat. To clarify what's going on, there are several quantum fields that get twisted up in spacetime: the gauge fields and the Higgs field. The gauge fields are the fields responsible for the "fundamental forces of nature", in this case electromagnetism and the weak nuclear force. The Higgs field is the field responsible for the Higgs particle that the LHC, the giant particle accelerator near Geneva, is looking so hard for and that the media often refers to as "the God particle"... it gives the quarks and leptons of the Standard Model their masses. If you imagine spacetime having a spherical boundary, the visual picture of what happens in one of these topologically non-trivial vacuum states is that each of these fields gets "wound up" around the boundary of spacetime.

Imagine winding a rubber-band around your wrist. You can wind it once, or you can stretch it a bit and wind it twice, or really stretch it and wind it 3 times. Each of these 3 is a stable configuration once you get it wound up; it's not going to suddenly slip off or spontaneously transition into a state with a different "winding number". To do so your hand would have to disappear or something. However, if you expend energy, you can stretch it back over your fingers, and then let it snap back to a different winding number (say, going from 3 times back down to 2 times). Stretching it is getting it over an "energy barrier" that separates the different topologically inequivalent configurations. Well, the same thing happens for the gauge fields and the Higgs field, except that they are sort of winding around the whole universe. And unlike a rubber band, which is a 1-dimensional object, these fields are 3-dimensional and they are essentially wrapping themselves around a hypersphere at the boundary of spacetime. If you picture a normal ball, that's a 2-sphere. Its surface is 2-dimensional, and if you try to wrap it with something else that's 2-dimensional, you'll find that it's very hard to imagine wrapping it more than once. However, these fields are more intangible than most wrapping paper, and a lot more stretchy. So they can actually wrap around a sphere more than once... each additional time they wrap it, the winding number increases by 1. Except they are actually wrapping a hypersphere that has a 3-dimensional surface rather than a 2-dimensional surface like a ball. So visualizing exactly what's happening becomes even more difficult, although hopefully you have more of an idea now than you got from my last post, which was rather vague. Apparently the official name for the number that indicates winding, in the case of the sphaleron, is the Chern-Simons number.
(I mention this because it was something [livejournal.com profile] vaelynphi asked about, so I looked it up).
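For the curious, the winding number has an explicit formula. In one common convention (signs and normalization factors vary between textbooks), the Chern-Simons number of an SU(2) gauge field configuration is

```latex
N_{\mathrm{CS}} = \frac{g^2}{32\pi^2} \int d^3x \,\epsilon^{ijk}
\left( F^a_{ij} A^a_k - \frac{g}{3}\,\epsilon_{abc}\, A^a_i A^b_j A^c_k \right)
```

For vacuum (pure-gauge) configurations this integral works out to an integer--the number of times the field wraps the hypersphere--and the sphaleron sits halfway between two integers, at a half-integer value of N_CS.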

At any rate, the point here is that there are only two ways you can get the fields to unwind. They won't unwind themselves naturally because they would have to expend energy to go into a sort of "stretched" state that they don't like to be in temporarily, before they could relax into the next topological vacuum. So there are only two ways for them to unwind. One is quantum tunneling, which I mentioned. That's the more magical-seeming route, where despite the fact that they don't have enough energy to do it, they just sort of do it anyway. It's almost as if your hand temporarily becomes immaterial and the rubber band just passes right through your wrist. The second way, which is a bit more mundane, is that they could get a random energy fluctuation due to the constant background of thermal fluctuations. This could cause them to temporarily become excited and get over the energy barrier and then relax. Like the rubber band suddenly stretching out randomly, slipping over your hand, and then going back onto it in an unwound state. However, this can only happen at very high temperatures. At normal room temperature, the sphaleron process just never happens. But if you go back in time to just moments after the big bang, you'll find temperatures much much hotter than room temperature. Hot enough to randomly cause the fields to unwrap momentarily and then rewrap. The awkward state that they have to pass through between their happy relaxed wrapped states is what's officially called the "sphaleron". So the sphaleron is sort of like the hardest demon they have to battle before getting past to the new world they seek. And yet, the amazing thing is, they must have done this again, and again, and again... to eventually give us all the atoms we have in the world today. Without the primordial gauge fields doing battle with the sphaleron to cross the energy barrier, we would have no protons or neutrons around, and therefore, no atoms!

Also, I mentioned that the sphaleron was a "saddle point" rather than a maximum like most instantons. A maximum is like a hill you have to get over. But for those who don't know what a saddle point is, it's similar to a maximum in one direction but a minimum in another direction. So it's like a hill in one direction that's also a valley, because it sits between two hills in a perpendicular direction... the whole thing looks kind of like the saddle of a horse, hence the name. So while they were on their epic quest to get through the sphaleron state into the next topological vacuum, the gauge fields may have said to themselves something like "as I walk through the valley of the shadow of death..." since they were both in a valley and on top of a hill at the same time, depending on which direction you're talking about. And in the case of the sphaleron, there is actually only one direction that's a hill--the single unstable direction along the path between vacua--and all the other directions are valleys. (On a 2-dimensional surface like the earth, you can only have two directions, so the most you could have is "one of each" at a saddle point in the landscape; but in this case we are talking about many more dimensions, so there can be more valley directions.) Incidentally, it took me about a month to calculate the contribution of the sphaleron to the instanton action, but I remember using Mathematica a lot to plot out different perspectives of the saddle point and rotating it around. Ironically, after I submitted the paper, that was the one thing they said I should take out, because the referee said it was a well-known calculation that most cosmologists already knew how to do in a different way... so it never made it into the final published version, but it is still in the version on arxiv.org.
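The simplest saddle point to play with is f(x, y) = x² − y², and a few lines of Python make the "valley and hilltop at the same time" idea explicit:

```python
# The simplest saddle point: f(x, y) = x**2 - y**2 at the origin. Along the
# x-axis the origin is a minimum (a valley); along the y-axis it's a
# maximum (a hilltop). A saddle point is both at once, depending on which
# direction you step in.

def f(x, y):
    return x ** 2 - y ** 2

# Step away from (0, 0) along each axis and watch which way f moves.
# (Rounding just tidies up float noise for display.)
along_x = [round(f(t, 0.0), 10) for t in (-0.1, 0.0, 0.1)]  # rises both ways
along_y = [round(f(0.0, t), 10) for t in (-0.1, 0.0, 0.1)]  # falls both ways

print(along_x)  # [0.01, 0.0, 0.01]
print(along_y)  # [-0.01, 0.0, -0.01]
```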

Also, I forgot to mention that the name for this kind of anomaly in general is a "chiral anomaly"... it shows up in a lot more types of situations than the electroweak sphaleron, though. Actually, if you've ever seen the show Sliders, where the four main characters jump from one quantum world to the next in the multiverse, the very first episode has the professor lecturing and the genius kid Quinn in the class listening... then after class, Quinn goes up and says something like "Hey professor Arturo... I read your paper on chiral anomalies, and it was totally brilliant!" However, the Hollywood actor (Jerry O'Connell) pronounces it wrong and says "cheeral anomalies"... it's supposed to be pronounced like "kiyral". The word means "handedness" (like lefthanded versus righthanded), although it would take quite a while to explain why that is relevant here. I used to watch the show in high school and never noticed that he pronounced it wrong, and then I thought it was hilarious when I eventually watched it again later in grad school.

Well, my part 3 was supposed to be about the conformal anomaly and how protons and neutrons get mass from dimensional transmutation. But I still haven't made it there. So that will start in part 4. Then, eventually, I'll say some things about the conformal anomaly in string theory, and why that places a constraint on the number of dimensions of spacetime.
spoonless: (morpheus-far)
Incidentally, some of what I'm going to talk about here is the expanded version of one of the short background sections of my dissertation, section 1.5 (pages 14-19). In fact, I thought of writing this series of posts as I was writing that section, it just took me a while (over a year) to make it. If you want you can read that part here: http://physics.ucsc.edu/~jeff/dissertation.pdf. It's not very long, and some of it is about electric-magnetic duality and other things specific to the model I was working on, which I won't discuss here. But it's dense so you probably want to have some kind of physics background if you attempt reading it, preferably a particle physics background. Also, even with a background what I'm going to write here will have more depth and hopefully be more insightful, I'm mainly just linking to it for completeness and to indicate how this stuff fits into the research I published in grad school.

Speaking of my dissertation, I posted it about a year ago when it was finished, and encouraged people to find the "Easter Egg" I put in the front material. Nobody took me up on that, so I figure at this point I should just give it away. If you want to see the Easter Egg, click on the above link and read the last line of page xii!

So--What is a quantum anomaly? This seems like the best place to start.

Everything in particle physics is based on symmetry. If there's one principle that is widely acknowledged to be behind every other principle or "law" of physics in the universe, it is undoubtedly symmetry. Like many scientific and engineering fields, the particle physics community has its own professional magazine, to help keep members of the community informed about important advances going on in the field and current events. Appropriately, the name of this magazine is "Symmetry Magazine" (http://www.symmetrymagazine.org/). In physics, a symmetry is defined as any group of transformations you can perform on something that leaves important physical properties unchanged.

But there are lots of different types of symmetries. And of all the different types of symmetry, there are several broad categories. One distinction is between exact symmetries and approximate symmetries. This distinction is pretty obvious--for an exact symmetry things are perfectly symmetric, while for an approximate symmetry they are not quite perfectly symmetric. Another distinction is between local (also known as "gauge") symmetries and global symmetries. A third distinction is between spacetime symmetries and internal symmetries. A fourth distinction is between unbroken symmetries and spontaneously broken symmetries (also called "hidden symmetries"). And a fifth distinction is between non-anomalous symmetries and anomalous symmetries. Each of these categories has all kinds of examples within it. Explaining what all of these are would take us off track, but I mention them just to put the idea of anomalous symmetries in context--this isn't the only distinction you can make, and you can have just about all combinations of these general categories of symmetries show up in physical theories. But the anomalous/non-anomalous distinction is nevertheless an important one.

The anomalous/non-anomalous distinction has to do with the way in which quantum theories are constructed out of classical theories. Actually, many of today's leading theorists would object to the way I said "constructed" here because really things should be viewed the other way around. In practice, what we do to define a quantum field theory is to start with a classical field theory and then "quantize" it by promoting all physical observables from regular variables into non-commuting measurement operators. However, in principle the world was not "constructed" in this way, that's just the way that humans have of understanding the world. We like to think classically, so we start with something classical and build something quantum out of it. But the world is not classical, it's quantum. So really the right way to think of it is that some quantum theories have "classical limits" where you imagine Planck's constant becoming zero and all commutation relations between different measurement operators vanishing. Some quantum theories even have more than one classical limit. An anomalous symmetry is when the classical limit of the theory is symmetric but the full quantum theory is not symmetric.
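To put the quantization step in symbols: classically, position and momentum are ordinary numbers that commute, while quantizing promotes them to operators with a nonzero commutator,

```latex
[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar
```

The classical limit is \hbar \to 0: the right-hand side vanishes, the operators commute again, and you can treat everything as ordinary variables. An anomalous symmetry is one that holds at \hbar = 0 but fails once \hbar \neq 0.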

To start with one example which has some pretty sweeping consequences, let's take baryogenesis. The "baryon asymmetry" of the universe (also called "matter-antimatter asymmetry") refers to the puzzling fact that there is more matter in the universe than antimatter... and yet the underlying laws of physics look like they are symmetric with respect to matter and anti-matter--whatever can happen to matter can happen to anti-matter, and vice versa. If you started out with only pure energy (no matter) existing at the big bang, as the standard cosmological picture requires, and then evolved things forward in time as the universe cooled and expanded... you would have to end up with equal amounts of matter and antimatter, as long as the laws of physics really did have this symmetry built in. There has been a slight asymmetry observed at particle accelerators, but not nearly enough to account for the overall imbalance between matter and antimatter. (And incidentally, it's a good thing we have this imbalance: without it, the matter and antimatter would simply have annihilated each other, and life could not exist. But that doesn't help explain the mechanism for the origin of the imbalance, what is known as "baryogenesis".) What does provide at least some of the explanation, however, is that if you look closer at the laws of physics governing matter and antimatter, you find a quantum anomaly.

This anomaly is known as the "electroweak sphaleron". It is one kind of quantum anomaly that fits into a category of anomalies called "instantons". These types of anomalies have to do with the fact that in quantum field theory, you can have multiple vacua within the same universe that have effects on each other. ("Vacua" is the plural of "vacuum"). This surely sounds like crazy talk to anyone outside of the field, however the word "vacuum" in quantum field theory just means "ground state". The vacuum in quantum field theory is the state when all of the quantum fields are in their lowest energy state. That doesn't mean they are at zero energy, due to the zero point energy, but it means it's lower than any other locally accessible state. So if there are multiple "lowest energy states" that the fields could get into that each have the same energy, then these are the "vacua states" of the theory. They are also called "degenerate vacua" (degenerate meaning "having the same energy"). In addition to that possibility, sometimes you can have a "false vacuum" or a "metastable vacuum" appear. If you notice I said that a vacuum had to be the lowest energy state of any other locally accessible states. By local I mean that you could smoothly transition from one state to the other in a classical way. But quantum mechanics allows for more "non-local" effects like quantum tunneling. If something would be the lowest energy state if there were no quantum tunneling allowed, then it is called a "false vacuum" or a "metastable vacuum". Sometimes the rate at which it can quantum tunnel into the "true vacuum" (the global energy minimum rather than just a local minimum) is so small that the false vacuum could exist for billions of years and you'd never know it was really the false vacuum. This is one possibility for the way our universe could end... we could be living in a false vacuum and suddenly tunnel into a true vacuum that has a greater degree of symmetry but no life allowed in it. 
Locally, a bubble would spontaneously form somewhere in space, and then expand until it filled the entire universe. However, given that this hasn't happened for the many billions of years the universe has been around and stable, I wouldn't bet on it happening any time soon, even if we are living in a false vacuum.

So instantons are a form of quantum tunneling that happens between two different vacua states of a quantum field theory, sometimes between different degenerate vacua or other times from false vacuum to true vacuum. So the fact that there are multiple vacua states in the Standard Model allows for this instanton effect called the electroweak sphaleron to occur. The reason why you can't transition from one of those states to another classically is because they are in different "topological sectors" of field configuration space. Topology is the branch of mathematics that deals with deforming shapes into other shapes, and deciding which shapes can be smoothly deformed into each other and which cannot (because they would require ripping or tearing the shape apart). The fields in one topological sector cannot be smoothly (locally) deformed into a configuration that's in another topological sector--it would be like trying to untie a knot without letting go of either end of a string. Miraculously however, quantum tunneling allows you to effectively untie a knot without ever letting go of either end of the string. This is the instanton process and it results in a quantum anomaly--a symmetry that is there classically but violated quantum mechanically. Since all laws of physics are in the form of symmetries, a quantum anomaly is essentially a law of physics that is valid classically but can be broken (slightly) quantum mechanically. If you like, perhaps it could be described as sort of like "bending" the laws of physics every once in a while. More precisely, it's just that what is possible versus impossible is a bit more permissive in quantum mechanics than in classical mechanics because there are fewer symmetries forbidding things from happening.

A baryon is a proton or a neutron (or any other combination of 3 quarks, although those are the only two stable combinations). The Standard Model has a different topological sector for each baryon number (baryon number is the total number of protons and neutrons in the universe, minus the number of anti-protons and anti-neutrons, which count as "negative baryons"). So there is one vacuum where the total number of baryons is 0 (perfect symmetry between matter and anti-matter)--presumably this is how the universe started out. But there is another vacuum where the total number is 1, and another where it is 2 (and also ones where it is -1 or -2). In our universe, the total baryon number is about 10^80, so we are in a topological sector pretty far from the symmetric sector things started in. How did we get here? Possibly through the electroweak sphaleron process. However, in order to fully understand why we would have tunneled all the way in this direction, rather than in the opposite direction, or just done a random walk through the different sectors, you also need something that tilts the energy levels of the different vacuum states so that the ones on the matter side have lower energy (and are therefore favored) while the ones on the antimatter side have higher energy and are therefore "false vacua". One of the papers I published in grad school proposed a mechanism for how this might be done.
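To ground that 10^80 a bit, here is the rough arithmetic using standard cosmological numbers (quoted from memory, so take the exponents as order-of-magnitude):

```latex
% The measured baryon-to-photon ratio of the universe:
\eta = \frac{n_B}{n_\gamma} \approx 6 \times 10^{-10}
% The observable universe contains roughly 10^{89} CMB photons, so:
N_B \sim \eta \times 10^{89} \sim 10^{80} \ \text{baryons}
```

Note how tiny η is: for every billion photons (each a relic of matter-antimatter annihilation) only about one baryon survived. The asymmetry that needs explaining is small in relative terms, but enormous in absolute terms.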

Well, I think I will break here until part 3, but first, a sneak preview of where we're headed. The quantum anomaly in baryon number helps explain how we could have more matter than antimatter in the universe. However, there is another question (which at first glance might seem related, but really isn't) of where most of the baryonic mass in the universe comes from. (By baryonic I mean ordinary matter, as opposed to dark matter, which makes up most of the mass in the universe.)

This is one misconception that I think a lot of non-physicists have. They think that matter has to have mass, or even that the two are synonymous. Neither matter nor anti-matter has to weigh anything or have any inertia... for a long time it was believed that neutrinos, for instance, were massless. Massless neutrinos were part of the original Standard Model, and this has only been revised within the past decade or two. It turns out that they do have a very tiny mass (at least 2 of the 3 flavors do; we don't know for sure whether the 3rd does), but there is nothing in the laws of physics themselves that says matter has to have mass.

The way in which the quarks and leptons get mass is through their interactions with the Higgs field. The quarks make up protons and neutrons. However, if you add up the masses of the 3 quarks in a single proton or neutron, you don't get anywhere near the total mass of the proton or neutron--only a tiny fraction of it. Where does the rest of this mass come from? It turns out, it comes from a quantum anomaly. The conformal anomaly is the relevant one here (yes, the same anomaly responsible for the 10-dimensional requirement of string theory), and the surprising result is that protons and neutrons get most of their mass from dimensional transmutation! I'll begin explaining that in part 3.
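The mismatch between quark masses and the proton mass is easy to check with rough numbers (current-quark masses from the Particle Data Group, approximate values):

```latex
% Light current-quark masses:
m_u \approx 2.2\ \text{MeV}, \qquad m_d \approx 4.7\ \text{MeV}
% A proton is uud, so the quark masses only account for:
2m_u + m_d \approx 9\ \text{MeV}
\quad \text{vs.} \quad
m_p \approx 938\ \text{MeV}
% That's about 1%. The other ~99% is energy of the gluon and quark
% fields, set by the dynamically generated QCD scale \Lambda_{QCD}
% -- the dimensional transmutation discussed in part 3.
```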

[Update: wrote this out this morning, and thinking about a few of these things more today, I'm wondering whether I have remembered everything right. The two main things I'm not sure of are: first, whether sphalerons are considered a type of instanton or just similar to instantons--a typical instanton passes over a maximum of the potential energy of a field configuration, while a sphaleron sits at a saddle point. Whether they are just similar, or a sphaleron is considered a type of instanton, I'm not entirely sure, although everything I've said should be true of both, I believe. The second thing is, I'm not sure my description of each vacuum having a different baryon number is quite right. I may be mixing up some things from quantum chromodynamics, relating instanton winding number to baryon number, with the case of the electroweak sphaleron. I seem to recall other ways of looking at it where each vacuum has a slightly different definition of baryon number, and therefore there is some overlap between different rungs on the energy spectrum and a possibility of transition between them. I will think about these issues more and clarify and/or correct any details I messed up in my next post if need be.]
spoonless: (morpheus-closeup)
I've heard that one of the cardinal rules for writing a good blog is to "stick to what you know". If you're an expert at something, write about that, and skip the commentary on things you're only starting to figure out. I've never adhered to that. Instead, I tend to write about whatever I'm thinking about, or whatever I'm learning about. I think the problem is that if you stick to topics you know everything about, it's boring to write about. It's more exciting for me to write about something I *don't* know much about and am just starting to think about... then I get lots of interesting comments that are educational for me.

You might think that, given that I spent from 2003 through 2009 in grad school for physics, I would have been learning a lot of physics and therefore would be writing about it a lot. Unfortunately, that's not true either. I didn't usually write anything about what I was learning about there because I knew it would likely take a whole book to explain instead of a single post. I did make a few posts toward the beginning of grad school, but by the end the topics were so esoteric I knew that the interested audience would be so small that it wasn't worth it.

Anyway, I said to myself at some point, "that's sad, I should at least try to write about something interesting that I know a lot about, even if it's not what I'm immediately learning." I thought surely there was a balance somewhere that could be attained, and yet I never quite got around to finding it. Right toward the end of my degree, as I was writing my dissertation, I did decide that I wanted to write something about dimensional transmutation and quantum anomalies. But my priority was always the actual dissertation rather than my blog. So now, I've got a bit more free time and can give it a shot. I should warn up front that the "balance" I've chosen here is to write about something deep and interesting that I have never *quite* fully understood (otherwise it would be too boring for me to write about), but which I nevertheless feel I know much more about than the average person, so I can count it as "good blogging behavior". Not sure how many parts this will be, or how well it will go over, but here goes...

Theoretical physicists love to pick cool-sounding names for things, names like "Anti de Sitter space", "tachyon condensation", "zero point energy", "warp factor", "the eightfold way", or the names for the flavors of quarks, "up", "down", "strange", "charm", "truth", and "beauty"... or even sexual-sounding names like "quivers" and "kinks"! Sadly, in most cases, if the average person found out what these names actually stood for, they'd probably be very disappointed or bored. They all boil down to different sorts of mathematical relationships. (And no, "warp factor" as used in physics has nothing to do with the term "warp factor" used in the Star Trek world.) Give something a cool name and people get very excited, but show them the equation it stands for and they're like "oh, just a bunch of Greek symbols?" Of course, those mathematical structures do useful work in describing the universe we live in (or so we think), and many of them are very interesting in their own right. But they are still pretty different from what people expect when they hear the names. There is not always a perfect correlation between how cool something's name sounds and how deep or interesting it really is once you understand the concept. But two of the coolest-sounding names for things I wrote about in my dissertation--"dimensional transmutation" and "quantum anomalies"--do happen to stand for really deep and important things in physics. And what's more, they are related! Indeed, dimensional transmutation is due to one particular kind of quantum anomaly, called the conformal anomaly. Granted, none of those terms mean what they probably sound like they mean, but the connections are still pretty neat. So this is what tempts me to try to write a more "accessible" explanation of them than the brief paragraph or so I dedicated to them in my dissertation. Really, when I look back, it's some of the most interesting yet most difficult-to-understand stuff I learned in graduate school, and it's connected to so many interrelated topics that have bearing on all sorts of things in quantum field theory and string theory.

I guess this "part 1" is turning out to be more of an introduction to my intended post about quantum anomalies than the explanation itself, and I won't really start jumping into the meat of it until at least part 2. But I will give a little more of a teaser first. Quantum anomalies are a weird kind of effect that shows up in some quantum field theories. Dimensional transmutation is something that happens when one particular type of anomaly, the conformal anomaly, shows up. Quantum anomalies are also important in string theory. If you've ever heard anything about string theory, you've probably heard that it requires 10 spacetime dimensions rather than the usual 4 (3 space + 1 time) more traditionally assumed in physics. (If you heard 11, you're right also, but let's ignore the M-theory dimension for now.) So one of the first questions most people ask is: "Why 10? Why not 20, or 50, or 300? Where does the 10 come from? And why can't you just have vibrating strings in the usual 4 dimensions--what's wrong with that?" Well, it turns out that the reason you need 10 dimensions rather than 4 or some other number is the conformal anomaly. So if you want to understand where the 10 comes from, you must first understand the conformal anomaly.
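As a teaser of the teaser, the "why 10" can be stated as a bookkeeping exercise in central charges, the numbers that measure the conformal anomaly on the string worldsheet (this is the standard textbook counting, stated here schematically):

```latex
% Bosonic string: each of the D spacetime coordinates contributes
% central charge c = 1, and the reparametrization (bc) ghosts
% contribute c = -26. The conformal anomaly cancels only if:
c_{\text{total}} = D - 26 = 0 \;\Rightarrow\; D = 26
% Superstring: each coordinate plus its worldsheet fermion partner
% contributes c = 3/2, and the ghost + superghost system
% contributes -26 + 11 = -15:
c_{\text{total}} = \tfrac{3}{2} D - 15 = 0 \;\Rightarrow\; D = 10
```

So the 10 (or 26, for the purely bosonic string) is not chosen--it is forced on you as the unique dimension in which the conformal anomaly cancels.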

Profile

spoonless: (Default)
Domino Valdano

Powered by Dreamwidth Studios