Perhaps you would like to read Philip K. Dick’s ‘The Man in the High Castle’, where the author [poor chap] looks for certainty in a bizarre story set in the Multiverse and finds uncertainty everywhere. But this destroys him because, as an essentially religious man, he cannot endure things without human meaning.

Best Regards to everybody

Prof. Hormesis Boguslawski

I think I’d disagree (though I’m quite naive regarding econ). As I understand it

Y = C + G + I + NX

ought to really be thought of as a definition. It’s true because it’s true.

Energy conservation, on the other hand, is not a definition. It is a law that arises from symmetries in nature (in this case, via Noether’s theorem, from invariance under time translation).

So I’d say that the accounting identity above is more like

p = mv

as a definition of momentum.
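To put the “true by definition” point in code (a trivial sketch; the numbers are made up), the expenditure-side identity can’t fail, because Y is simply defined as the sum of its components:

```python
# The GDP accounting identity Y = C + G + I + NX is a definition:
# Y is *defined* as this sum, so it holds for any values whatsoever.
def gdp(consumption, government, investment, net_exports):
    return consumption + government + investment + net_exports

# Made-up numbers, in trillions:
C, G, I, NX = 14.0, 3.5, 3.6, -0.6
Y = gdp(C, G, I, NX)
assert Y == C + G + I + NX  # holds by construction, not as an empirical law
```

There is nothing here to test empirically, which is exactly the contrast with a conservation law.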

But I’d love to be corrected if I’m in error.

1) I’m glad (and unsurprised) that you don’t find the decision-theoretic approach to understanding probability in MWI appealing. Personally, I think that what they do is likely technically okay (I haven’t looked at it enough, but I’ll just take it as given that it is), I just think that layering the axioms of decision theory on top of the postulates of the many worlds interpretation is not really shedding much light on things. You may as well just assume the Born rule outright!

2) I can’t speak to the mangled worlds approach. I do remember looking it up and reading a bit about it when I was going through your tutorial, but the details are not clear in my mind. I do seem to recall that it’s not complete yet though—or wasn’t at the time at any rate.

I get the impression that you find collapse to always be an unnatural construct in the context of quantum theory. I claim that this needn’t be so. So let’s talk about collapse interpretations. You say, “Collapse interpretations make no use of unitarity”, but it depends on what you mean by a collapse interpretation (I’m also not 100% sure I understand the comment…the naive form of the Copenhagen interpretation makes use of unitarity, it’s just that collapse appears to be a non-unitary additional process. I assume that what you mean is that ultimately, collapse or something like it should flow out of unitary evolution. That’s not a crazy position at all, so I’ll go on assuming that that’s what you mean). If we’re talking about an interpretation that alters the mathematical form of quantum theory so that the wavefunction randomly collapses here and there, from time to time, then yeah, we’re talking about a totally different theory really. This approach has some adherents and a working model that some folks have been testing (this is the so-called GRW collapse theory). My money’s on quantum mechanics though. I’m guessing yours would be as well.

There’s also what I like to call naive Copenhagen, wherein we think of there being some sharp quantum/classical divide and that the interaction across that divide causes collapse. This is not so much a deviation from the mathematical structure of quantum mechanics as an additional set of postulates that basically say that quantum mechanics is incomplete without a classical world to explain measurements and their outcomes. While this approach is sort-of defensible, it requires you to believe that despite the fact that classical things are made out of quantum things, there’s some extra thing that happens somewhere that renders us classical. Okay, so that’s a hypothesis. Then there’s a prediction that arises: we’ll hit this divide at some point in our experiments, Anton Zeilinger will win a Nobel, and we can all be happy. My money’s on quantum mechanics applying all the way up. Unitarity in this situation only applies to systems allowed to evolve without interacting with the classical-level objects.

You can imagine other interpretations that are actually tweaks of the formal structure of quantum mechanics, either because they alter the math or because additional postulates alter the domain of applicability. Personally, I think that these things should be called “approaches” or alternate theories, since they don’t let quantum mechanics just be quantum mechanics.

I think we want to restrict ourselves to discussing interpretations that maintain the structure of quantum mechanics and work from the standpoint that it is applicable on all scales.

Within this class of interpretations, there are some choices. One is collapse versus no-collapse. Another one that is prior to that is whether you believe that the wavefunction is part of the world or whether it is in the head of an observer. Is the wavefunction part of our ontology or is it epistemic? The terms I’ve seen thrown around are psi-ontic versus psi-epistemic. This naming scheme was naturally devised by psi-epistemic people so that they could tease the other view by calling them psi-ontologists (I’m not even joking here!).

[Incidentally in your point number 4, you argue that Bell’s theorem rules out the wavefunction as an epistemic quantity that describes our state of knowledge about a world satisfying local-realism. In fact, it rules out any *hidden variable* model that posits locality and that properties pre-exist their measurements. I’ll respond in more depth when I get to point (4), but for now, let me just say that I don’t see why dropping counterfactual-definiteness (which is the trick MWI uses) is obviously better than dropping (or weakening) locality (which is what any self-respecting hidden-variables theory must do). As I said, I’ll talk about this more later.]

Anyway, whether they be psi-epistemic or psi-ontic, there are collapse interpretations that are completely free of these sorts of distortions of quantum theory—let’s agree that these really are “interpretations” rather than actually alternate theories.

As an existence proof, I bring up Bohmian mechanics (a psi-ontic model if ever there was one, though with subtleties!). It can be formulated to agree exactly with the quantum formalism. And yet, there is a form of collapse since in Bohmian mechanics, uncertainty arises from ignorance of the initial state of everything. Once we determine the position of the particle, there is no more uncertainty about it (until it’s perturbed by something that we’re not tracking), so your epistemic probability scheme “collapses” to a delta function and your ontological wavefunction is altered by the physical act of measurement (probably via something like decoherence—I actually don’t know the details here since I’m not an expert by any stretch of the imagination).

Now, I’ve got qualms with Bohmian mechanics. I think it masks more than it reveals about the nature of quantum theory (in some ways, it’s designed to do so). So I am not a Bohmian. However—and I’m jumping around a bit here—I’d be careful about using Occam’s Razor to do violence to Bohm (poor Bohm!). I worry that what individuals find to be conceptually clear and hence, simple, is largely a matter of familiarity and taste in these types of discussions. As you know, people throw around Occam’s Razor in trying to slice up MWI (who needs all these universes! I say cut them with Occam’s Razor!). I don’t think that those arguments are compelling, but hey, I don’t necessarily think that they are compelling with regards to Bohm either (although, yes, I am more sympathetic to MWI than Bohm after all).

(Incidentally, I think Bohm’s theory can play a useful role in getting you to think through certain types of problems that you might encounter. Plus, it can act as a kind-of safety blanket, since as a young child you go into physics due to its precision and the fact that in principle everything is predictable. Then quantum mechanics comes along and causes you to go into a young-adult-life-crisis where you start questioning everything you thought you knew about yourself, the universe, God, no-God, whatever. Bohm comes in and says “No! Don’t panic! Particles exist, they have trajectories, it’s all okay. We are just limited in our ability to predict these things due to a conserved amount of initial ignorance about the state of the universe!” This makes you happy. Then you get older and you realize that uncertainty is the way of things and you just have to deal with it, so into the bin goes the safety blanket).

(Okay, I got a little carried away there!)

So I don’t think you can use unitarity to rule out all collapse-type interpretations. There are ways of making these things compatible. I also worry whenever Occam’s Razor is invoked, because often its use is more revelatory about the tastes of the user than the merits of the things it’s being invoked against.

3) You said “more generally, no amount of single-world interpreting gets us any closer to understanding the Born probabilities.”

What exactly do you mean by “understanding the Born probabilities”? Are you referring to setting up an interpretation that does not postulate them and then deriving them? Are you talking about the related but different problem of understanding these things as probabilities (a very difficult problem since you are then trying to answer the question of what probabilities are)?

You go on to ask (I’ll paraphrase) why it should be so much better to have a single world left behind after measurement, rather than a population of worlds with different results. How does it help explain the origin of Born statistics in particular?

I think that the answer to your question is basically that it is a lot more straightforward to understand what any sorts of probabilities mean in a situation where counterfactual definiteness holds. Furthermore, it’s a lot clearer how one should think about experimental results and how to falsify theories when we view the world as one with definite outcomes. I think that that’s the key argument, and it’s in addressing those issues that MWI itself racks up a lot of complexity.

To put it another way, I could make precisely the same statements about the ordinary application of probability theory. Why shouldn’t a classical die roll be thought of as splitting the universe into 6 different pieces (or whatever) in which any of the possibilities could happen? You might claim that the dynamics is deterministic, but chaotic effects can make it impossible to really churn out deterministic answers (they effectively put classical physics on a similar footing to Bohmian mechanics—you have no way to ever determine the initial conditions accurately enough). So why not say that classical probabilities also lead to many worlds? The most straightforward reason not to (and it’s a choice, not a rule) is that it makes it a lot harder to understand what these probabilities mean. Again, having definite outcomes makes this problem easier to deal with.

Furthermore, I think having a single definite outcome better positions you to attack the problem of Born probabilities.

One way to do this is to actually start from the Born probabilities and attempt to see how much of the structure of quantum mechanics can be derived from that (perhaps plus a bit of additional structure). One programme in this vein—certainly not complete—is Chris Fuchs’ (he’s a professor at Perimeter), in which he recasts the Born rule in a form that is very similar to the law of total probability in ordinary probability theory. From this form he’s able to derive a number of conclusions about the structure of quantum mechanics (though he cannot derive the full theory from this one starting point just yet).
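If I recall the form correctly (hedged; this is just my rough memory of Fuchs’ SIC-representation, not a derivation), the rewriting goes like this: let H be a symmetric informationally complete (SIC) measurement with d² outcomes, performed before a subsequent measurement D. Then:

```latex
% Classical law of total probability:
P(D_j) = \sum_{i=1}^{d^2} P(H_i)\, P(D_j \mid H_i)

% Fuchs' SIC form of the Born rule (d = Hilbert-space dimension),
% a deformation of the expression above:
Q(D_j) = \sum_{i=1}^{d^2} \left[ (d+1)\, P(H_i) - \frac{1}{d} \right] P(D_j \mid H_i)
```

The quantum rule looks like the classical one with the probabilities linearly deformed, which is what lets him mine it for structural consequences.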

Another approach may involve taking density matrices as your primary descriptors of a system’s probabilities. What’s nice about this is that the Born statistics emerge from certain unique properties of density matrices. Such an approach can also make the clearest contact with decoherence, when formulated appropriately.
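As a minimal illustration of that last point (a toy single-qubit sketch in plain Python; the helper names are mine), the Born statistics come straight out of the trace rule p(k) = Tr(ρ P_k):

```python
# Born statistics from a density matrix: p(k) = Tr(rho * P_k), where P_k
# projects onto outcome k. Single qubit, no libraries needed.
from math import isclose

def dm(psi):
    """Density matrix rho = |psi><psi| for a 2-component state vector."""
    return [[psi[i] * psi[j].conjugate() for j in range(2)] for i in range(2)]

def trace_prod(a, b):
    """Tr(a @ b) for 2x2 matrices."""
    return sum(a[i][j] * b[j][i] for i in range(2) for j in range(2))

# |psi> = (|0> + |1>)/sqrt(2); projectors onto the computational basis.
psi = [2 ** -0.5, 2 ** -0.5]
rho = dm(psi)
P0 = [[1, 0], [0, 0]]
P1 = [[0, 0], [0, 1]]

p0 = trace_prod(rho, P0).real  # = |<0|psi>|^2
p1 = trace_prod(rho, P1).real
assert isclose(p0 + p1, 1.0)   # probabilities normalize automatically
```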

Turning your questions around, I don’t quite see why it’s necessarily the case that one treats amplitudes as physical in the way that you are suggesting. Not many people have a problem with a classical probability distribution collapsing to some particular outcome after a measurement occurs—we just call that updating. Why can’t we think of quantum probability in a similar manner?
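To make the analogy concrete, here is classical “collapse” as nothing but updating (a toy die-roll sketch; the function name is mine):

```python
# Classical "collapse" is just Bayesian updating: observing the outcome
# sends the distribution to a delta function, and nobody loses sleep.
def update(prior, likelihood):
    """Posterior over hypotheses given likelihoods of the observed datum."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [1 / 6] * 6            # a fair die, before looking
saw_four = [0, 0, 0, 1, 0, 0]  # we look: it shows 4
posterior = update(prior, saw_four)
assert posterior == [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # "collapsed" to a delta
```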

In the context of MWI, we have a serious issue, because MWI sets itself the task of using the wavefunction as an actual, physical entity that represents the state of the whole universe. From this perspective, you really need to carefully describe what the Born probabilities are, even if you include them as an additional postulate. You have to get into explanations of self-locating probabilities and whatnot. I’m not saying that that’s impossible, I’m just saying, it spoils some of the original sleekness of many worlds, which is its major source of appeal.

You also get entangled in the great debate about the meaning of probability itself, which I think should ideally be kept distinct from questions about the meaning of quantum mechanics (what’s more nettlesome than one deep mystery? Taking two of them and scrambling them up!).

I suppose what I’m trying to say is that approaches like MWI have to grapple with the nature of what probabilities are, which is a very hard problem. On top of that they either have to derive the Born probability or add it as an additional postulate, which only adds to the oddness of trying to understand probabilities in these contexts. Other approaches that basically try to separate out the problem of probability from the rest of quantum theory are at least cleaner from that perspective. I hate to return to Bohmian mechanics since I’m not a fan, but at least the meaning of probability is simple and clear in that approach: it’s a way of formalizing our ignorance.

4) You said, “That the wavefunction is an epistemic description of our uncertainty about some deeper, locally causal, single world is decisively ruled out by Bell’s Theorem. I can’t quite say that everyone knows this part, but everyone knows the math and I think a majority of physicists would agree with what I just said, though they might phrase it differently since they wouldn’t be acquainted with Judea Pearl’s standard definition of local causality, which is more something that probability theorists know.”

If we insist on an epistemic or ontic conception of the wavefunction that is local and real and maintains counterfactual definiteness, then those interpretations are ruled out. MWI gets around this by dropping counterfactual definiteness.

But you could also weaken your notion of locality. For example, you could simply demand that information has to be transmitted in a manner that respects locality or causality, but non-local things can occur if no information gain is possible. For example, in a Bell-type experiment, Alice and Bob can only determine that their systems were entangled after-the-fact. They need to gather data and compare in order to rule out the possibility of a classical correlation in favor of a quantum entanglement. None of their measurements during the experiment can be used to transmit information superluminally (i.e. acausally).
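A quick numerical sketch of that situation (the angles are the standard maximal-violation settings; E(a, b) = −cos(a − b) is the singlet correlation): the CHSH combination exceeds the classical bound of 2, yet each party’s local outcome is ±1 with probability 1/2 regardless of the other side’s setting, so nothing is signalled until the records are compared.

```python
# CHSH sketch for the singlet state: quantum correlations violate the
# classical bound |S| <= 2, while local marginals stay 50/50.
from math import cos, pi, isclose

def E(a, b):
    """Singlet correlation for measurement angles a, b."""
    return -cos(a - b)

a, a2 = 0.0, pi / 2          # Alice's two settings
b, b2 = pi / 4, 3 * pi / 4   # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
assert abs(S) > 2                      # beats any local hidden-variable model
assert isclose(abs(S), 2 * 2 ** 0.5)   # Tsirelson bound, 2*sqrt(2)
```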

It’s not obvious that dropping locality is a much worse option than dropping counterfactual definiteness. Bell can’t save you from considering alternate interpretations! Even the ones that take the wavefunction as an epistemic object rather than as an ontological one.

There are indeed serious problems one has to confront if you want to use the wavefunction as an epistemic object. For example: if you have a composite system, typically you can write down the wavefunction for the entire thing, but it’s not necessarily the case that the wavefunction cleanly factorizes so as to give you a sensible definition of wavefunction for the pieces. In that case, it’s unclear (to me anyway) how I am supposed to use wavefunctions as epistemic objects to encode my knowledge-state about the subsystems.

The easiest way around this that I know of is to forget about wavefunctions and work at the level of density matrices. These behave sort-of like a probability operator (akin to the energy being represented as a Hamiltonian operator), but they can’t work precisely like all the other operators (there are technical differences that I won’t delve into here). Anyway, there is a unique prescription for assigning a density operator to any system you wish to investigate. Furthermore, if you want to go down a more epistemic route, the density operator is a natural object since it is traditionally a way of encoding “classical” uncertainty that sits on top of the quantum kind. However, this distinction between classical and quantum uncertainty does not have to be maintained, and one can instead see density matrices as unifying both. I don’t want to get bogged down in details, but I just want to put this out there as another way to take an epistemic approach to quantum probabilities.

As an aside, I’m actually pretty interested in Judea Pearl’s work on causality—I have your treatment of it on Lesswrong bookmarked for a time when I am not so busy. In fact, it’d be great to read his book…but again too busy!

But the reason I wanted to talk about Pearl is actually because his influence is starting to be felt at least a little bit in a corner of the foundations of quantum mechanics community. I recommend checking out Robert Spekkens’ essay on kinematics versus dynamics (you can find it via Google. The title escapes me. I believe it won an FQXi award). His point is that the dividing line between kinematics and dynamics is a choice and that the more invariant notion to use is that of causal structure (a la Pearl). He is actively interested in figuring out how to properly quantize these ideas. I figured you’d find that interesting!

5) Again, I think that the complexity-penalty calculus is not quite so straightforward. MWI does indeed seemingly begin with fewer postulates, but then you have to construct elaborate arguments for why it’s okay to use Born statistics to interpret your experiments. Again, even if you add Born as an additional postulate, explaining what you mean by probabilities here adds complexity to the approach.

And MWI is not the only approach to quantum theory that comes from applying the same rules at all levels. Developing quantum theory as primarily a generalized form of probability theory also allows you to work with it at all levels, and it provides a natural reason for why you can throw out unobserved branches (because it is exactly analogous to what you do in classical probability theory).

6) “Hypotheses non fingo” is such an obscene-sounding expression!

But turning to the matter of the preferred basis—Now if I’m reading you correctly, you’re saying that the preferred basis problem is an artifact of taking the worlds as basic ingredients rather than as something that emerges from the description in a natural way. I agree to a certain extent. In some ways the worst thing to happen to Everett’s “relative state” interpretation was having it renamed “many worlds”, since that places the emphasis on a rather sensational, but ultimately secondary aspect of the interpretation.

(I’d be careful about analogizing this with notions of symmetry in relativity or Newtonian mechanics. While there is a way of attacking the preferred basis issue using symmetry [Zurek’s notion of envariance, if I recall correctly], it is of a rather different sort of flavor.)

I think there are several approaches to dealing with the preferred basis issue, one of which is sort-of suggested by your comment (again, if I’m reading you correctly). That is, there is nothing fundamental about this particular decomposition of Psi (the universal wavefunction). Instead, it is merely the decomposition compatible with our perception of how the Hilbert space breaks up into subsystems. It seems to me that this has to do with dynamics—what ends up being weakly coupled to what so that separation into subsystems is a useful thing to do. So while I don’t think that the question can be wished away, I do think that there are ways of trying to get out of it without larding on more interpretational complexity.

I don’t want to suggest that MWI isn’t a valid approach. I think that it is, and I also think that it has gained a great deal of popularity in the physics community because it is conceptually clarifying in some important respects. But it is false to suggest that it is the only principled way to do quantum mechanics, or even the most minimalistic principled way to do it. As I’ve alluded to several times in this response, there are several other approaches, all with their merits, that don’t differ too strongly (to my mind) in their level of complexity from the many worlds approach. Some of them have the added benefit of cleanly separating the problems of interpreting probabilities themselves from the problem of formulating quantum mechanics. This is one area where many worlds runs into severe difficulties, and is probably my primary reason for feeling that it does not deserve the status of a consensus position.

Whew! I’m sure I’ve left a lot unsaid that probably should be said…but this is getting rather too lengthy, so I will stop here.

(I know!)

The question was not supposed to have anything to do with QM.

Now I’m not sure I entirely understand what you’re saying regarding general covariance and the number of loops in your diagrams. I had thought that using background-field techniques, one could ensure that general covariance is preserved at any loop order. However, you will typically find that the counter-terms you have to add into the theory are new combinations of the curvature tensor. This result also emerges from string-theoretic calculations.

In other words, if you start with plain-vanilla Einstein gravity, where the Lagrangian density is proportional to the Ricci curvature scalar R, you find that at one loop you have to add an R^2 term in order to be able to renormalize. This term is generally covariant, so that’s okay, but it means that quantization leads to a higher order curvature effect than you had in Einstein’s original theory. When you go to two loops, you can get terms of order R^3, and so on.
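Schematically (a sketch; the precise operator basis at each loop order is more subtle, e.g. some one-loop terms vanish on-shell for pure gravity), the effective Lagrangian grows a tower of generally covariant counterterms:

```latex
% Each loop order adds higher-curvature, generally covariant counterterms,
% each with its own empirically determined coupling c_i:
\mathcal{L}_{\mathrm{eff}} = \sqrt{-g}\left[ \frac{R}{16\pi G}
  + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu}   % one loop
  + c_3 R^3 + \cdots \right]             % two loops and beyond
```

In four dimensions (with ħ = c = 1), G has mass dimension −2, which is precisely the power-counting signal that this tower never terminates.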

So the issue, as I understand it, is simply that at every order you go (and equivalently, as you go up the energy ladder), you have to add more empirically determined constants to the theory in order to renormalize it. If you reason about this all the way to infinite energies, you get something that is unusable since you’d have to determine an infinite number of constants before making predictions. Thankfully, we don’t need to go up to infinite energies—a natural cut-off exists at the Planck scale, where we anticipate the nature of the whole theory changes to something like string theory (or some other approach to quantum gravity).

I think that your intuition about dimensionful constants is a pretty good one. The idea being that the dimension of the coupling is a hint of some kind of underlying structure.

We know how to do QFT on weakly curved spaces. The background field method is how it’s done. But in a deeper sense, we don’t know how to do QFT when spacetime itself can fluctuate violently. The only examples we have of theories that seem to probe the nature of quantum gravity relatively fully are the ones that are dual to non-gravitational QFTs via the AdS/CFT correspondence. The trick there though is that we don’t know how to track things like the worldlines of particles from the CFT (non-gravity) side, so we cannot start to answer questions like “what happens when an observer falls into a black hole”.

I’ve seen that video before. Rosling is wonderful at presenting and explaining data!

I’m not exactly sure what you’re asking for here, but the chart and its evolution show two things quite clearly. There’s a strong correlation between income and lifespan, and there’s been an impressive overall shifting up and tilting flat of the relationship since the beginning of last century.

How this relates to the interpretation of quantum theory is beyond me.

“Put it another way – is the social value added of economics greater than its opportunity costs? Or would many of our great economists be better reemployed as mediocre physicists, from a social planner’s point of view?”

Given the government subsidies to both mediocre economists and physicists perhaps both would be more productive doing something else. Great economists are far more productive in econ than they would be in physics.

I even have an advanced, but black-and-white, version:

Chart 4 in:

Technical Change and the Aggregate Production Function

Author: Robert M. Solow. Source: The Review of Economics and Statistics, Vol. 39, No. 3 (Aug., 1957), pp. 312–320. Published by: The MIT Press. Stable URL: http://www.jstor.org/stable/1926047

What is wrong with this?
