Wednesday, September 20, 2017

I've just posted a new paper to the ArXiv, "Classical states, quantum field measurement", which comes out of a math bender I've been on for about the last month. I'll be coming out of that, hopefully. When I'm in that state, ideas come and go so fast that I lose track of them, but almost all of them turn out to be nonsense, so there's really no need to keep track. I make so many mistakes (I've seen ideas I'd thought absolutely solid crash and burn because of sign errors, or conceptual misunderstandings, or really anything that can go wrong) that I now always prefix anything with "if I haven't made any mistakes". Sometimes it takes me a few years to realize something every physicist already knows. But this paper feels a little different. Something really dropped out that's simple enough (in a mathematician's sense of simple), just a few lines, the whole paper's only 4 pages, that if I've made a mistake there aren't many places for it to hide. Either on Saturday or Sunday, so this is really much too soon to feel confident, I tried something and it worked spectacularly.

So what does this paper do? One of the problems in trying to understand quantum field theory is that "quantized Dirac spinor fields" (otherwise called "fermion fields"; they're what we use to describe matter, in contrast to electromagnetism) are a lot different from classical physics. This paper kinda fixes that: it makes fermion fields look almost as classical as a 19th Century physicist could wish them to be. Not quite, because one can't, and one doesn't want to, get rid of 90 years of history, but if physicists understand it, and I've made not too many mistakes, and hopefully no big mistakes, there'll be some change.

So what does this paper do? Enough of the hand waving! The things (operators) that come out of a Dirac field that correspond to what we can measure are the constituents of what is called a Lie algebra; there are other operators that don't correspond to anything we can measure, a bigger algebra that is the heart of what is a lot different about fermion fields, but they're so necessary to the way the theory is constructed that they've really been thought of as part of the whole package. This paper introduces a new way to construct the same Lie algebra of observables, but using different, almost, very nearly, really all but classical tools to do it. Once a mathematician sees the few lines that set this up, and if they also accept the embedding of the Lie algebra of observables into the new big Lie algebra, really a whole lot becomes possible. Even if it comes to nothing, there's something about having a new perspective that makes everything never the same again. In the light of the new bigger algebra, that old bigger algebra looks a lot more sensible.

I doubt anyone here will want it, but I can't give a link to the paper on the ArXiv until this evening. The abstract is
Manifestly Lorentz covariant representations of the algebras of the quantized electromagnetic field and of the observables of the quantized Dirac spinor field are constructed that act on Hilbert spaces that are generated using classical random fields acting on a vacuum state, allowing a comparatively classical interpretation of the states of the theory.
so that's fun. [Added September 21st, https://arxiv.org/abs/1709.06711.]

Wednesday, August 30, 2017

This is something of a placeholder, a way to cite a construction, which I think deserves to be widely known, that I put into an otherwise deservedly unpublished paper, https://arxiv.org/abs/1211.2831.
So, three images from that paper, the last being the only bibliographical reference needed (and very general it is):


The point of this is just that it contrasts with the usual way of talking about summing amplitudes over all possible trajectories, forward and backward in time without limit. The Lagrangian can be said to fix a deformation of the differential equation that is satisfied by the free field. Not to discount taking a path-integral understanding of the Lagrangian seriously as well, as a contrasting point of view, but we can take the interacting field to be constructed as a complex of free field operators that is purely contained in the backward light-cone of a given point. We can think of the action of the interacting field as a consequence of interference between a carefully weighted infinite sea of free field components that is isolated in the backward light-cone of x; or we can think of it as just doing what has to be done to make it look like there's something complicated at x; or ... .
Everything here can be said to follow from the action of the interaction Lagrangian on the free quantum field by time-ordered commutation being effectively the same as the action of the same expression on the free classical field by time-ordered Poisson bracket, in the sense that the same differential equation is satisfied by the interacting quantum field and by the interacting classical field --- up to the usual worries about renormalization.
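To give the shape of it in a formula (my shorthand here, not a quotation from the paper; $J^-(x)$ is the backward light-cone of $x$, $\mathrm{T}$ denotes time-ordering, and $\hat\phi$ is the free field):

$$\hat\phi_{\mathrm{int}}(x)=\hat\phi(x)+\mathrm{i}\int_{J^-(x)}\!\mathrm{d}^4y\,\bigl[\mathcal{L}_{\mathrm{int}}(y),\hat\phi(x)\bigr]+\frac{\mathrm{i}^2}{2!}\int_{J^-(x)}\!\int_{J^-(x)}\!\mathrm{d}^4y\,\mathrm{d}^4z\,\mathrm{T}\Bigl[\mathcal{L}_{\mathrm{int}}(y),\bigl[\mathcal{L}_{\mathrm{int}}(z),\hat\phi(x)\bigr]\Bigr]+\cdots$$

The classical series is obtained by replacing each commutator $[\cdot,\cdot]$ by $\mathrm{i}$ times the Poisson bracket $\{\cdot,\cdot\}$, term by term, which is why the two interacting fields satisfy the same deformed differential equation, up to the usual worries about renormalization.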

Saturday, April 29, 2017

Quantum measurement and Free Will

John Bell invoked Free Will for experimenters as part of a derivation of inequalities that would have to be satisfied by classical relativistic models [Bell, Chapter 12], a modification of an earlier stipulation that experimental choices should be "at the whim of experimenters" [Bell, Chapter 7].
He more pragmatically required that the experimenters' choices should be "effectively free for the purpose at hand", which suggests some consideration of just how free that might be in the context of quantum measurement.

Consider Alice and Bob running two ends of an experiment. Alice and Bob each have to choose a random sequence of 0s and 1s. If either of them chooses 0 too often or 1 too often, we have to restart the data collection. They're also not allowed to have too many 0000 sequences, too many 01101110 sequences, et cetera; they have to satisfy all the tests here, say, within some pre-agreed limits. They're not allowed to look at the statistics of their past choices to make sure that they don't break any of the rules. A typical experiment might need Alice and Bob each to generate a sequence that contains a few hundred million 0s and 1s that can be certified after the event to be random enough. Furthermore, without conferring, the two lists must not be correlated, again within some pre-agreed limit. Hard to do. Alice and Bob don't seem to be very free at all. Every individual 0 or 1 can be freely chosen, but the statistics are constrained.
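To make the kind of constraint concrete, here is a toy version of the after-the-event certification (a minimal sketch; the particular tests and the numerical limits are stand-ins for whatever Alice and Bob pre-agree, not any standard randomness suite):

    import numpy as np

    def certify(bits, max_bias=0.002, block=8, max_block_excess=3.0):
        """Crude post-hoc checks on a 0/1 sequence: overall bias, and
        over-represented length-8 patterns (00000000, 01101110, ...)."""
        bits = np.asarray(bits, dtype=np.int64)
        n = len(bits)
        bias_ok = abs(bits.mean() - 0.5) < max_bias
        # count every length-`block` pattern; none should occur much more
        # often than the expected n / 2**block times
        windows = np.lib.stride_tricks.sliding_window_view(bits, block)
        codes = windows @ (1 << np.arange(block))
        counts = np.bincount(codes, minlength=2**block)
        blocks_ok = counts.max() < max_block_excess * n / 2**block
        return bias_ok and blocks_ok

    def uncorrelated(alice_bits, bob_bits, max_corr=0.003):
        """The two lists, generated without conferring, must not be correlated."""
        return abs(np.corrcoef(alice_bits, bob_bits)[0, 1]) < max_corr

    rng = np.random.default_rng()
    alice = rng.integers(0, 2, 1_000_000)   # a real run needs hundreds of millions
    bob = rng.integers(0, 2, 1_000_000)
    print(certify(alice), certify(bob), uncorrelated(alice, bob))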

Alice and Bob in practice farm out the job either to random number generators or to photon detectors driven by light from stars 600 light years away (arXiv here). No Free Will required.

I don't have much problem with Bell-EPR experiments these days, but the seemingly pervasive idea that Free Will plays a significant part in the discussion is unsupportable.

The discussion above hints at the stochastic nature of the constraints on Free Will. Suppose that Alice and Bob are both friends of Wigner. They agree that Wigner can construct quantum mechanical models of their brains that predict the statistics of their choices, which is checked while they practice choosing a list that contains millions of random numbers, millions of times. If quantum theory is truly universal, this is just hard to do, even very hard, but it's not in principle impossible. This model doesn't constrain Alice and Bob's Free Will, it just describes where their Free Will has brought them to. If Alice and Bob include observations of stars 600 light years away to decide their 0 and 1 choices, then Wigner has to include a quantum mechanical model of the light from those stars that is accurate enough to describe the statistics of Alice's and Bob's lists. A quantum mechanical model describes the statistics of Alice's and Bob's choices about as much as would a classical stochastic model.

Bell J S 1987 Speakable and unspeakable in quantum mechanics (Cambridge: Cambridge University Press).

Thursday, April 06, 2017

I'll do a bit of catching up on newish news. After a conversation with our daughter, I posted a video to YouTube, https://www.youtube.com/watch?v=frSL-BJTh90, that makes a blunt point about quantum mechanics:

Quantum Mechanics: Event Thinking

Published on Feb 18, 2017
To save time, watch the last five seconds, where I write down the word that this is in part a polemic against. That word appears in almost every interpretation of quantum mechanics. In this video, I talk about how to think about quantum mechanics as about events instead of using that word. This isn't a full-blown interpretation of quantum mechanics in 4'26", but it's a way of thinking that I find helpful. Something can be taken from this way of thinking without knowing anything about quantum mechanics, but inevitably the more math you know already the more you'll pick up on nuances (and, doubtless, know why you disagree with many of them).
Thinking about quantum mechanics as about events helps a little, but thinking of quantum field theory as a formalism for doing signal analysis is better, if you can get to that level of mathematics.

Adding a little more thinking in terms of events, imagine that we have a black box that puts out a continuous zero voltage on an output wire, but occasionally something happens inside the box so that the voltage rises sharply to some non-zero voltage for a very short period of time, then the voltage equally sharply returns to zero. We set up a clock so that whenever the voltage rises the time is sent to a computer's memory.
  When we put our event black box into a dark room at 20℃, we see events every now and then; if we change the temperature, the statistics of the events change a little. Imagine we have a different kind of black box, which has a power cable into it but no output; however, when we introduce this box and turn on its power, the statistics of the events from the first box change, so we call the new kind of box a source of events. If we move the source black box to a different place, the statistics of the events change.
  If we have a number of event black boxes, we can do more sophisticated statistics, including correlations between when events happen. Then we can introduce multiple source black boxes and other apparatus, such as lenses, prisms, waveplates, polarizers, crystals, etc., and see what changes there are in the statistics.
  After many decades, we would have a quite comprehensive list of how the statistics change as we change many aspects of the geometrical arrangement of source black boxes, event black boxes, and other apparatus. We would find that the way the statistics change obeys various equations as we move the pieces around. Eventually we would find that there are different kinds of source black boxes, which affect different kinds of event black boxes differently, and we would characterize the different ways that changes of the geometry change the statistics of the events.
  One thing that would soon become clear is that event black boxes cause statistics associated with other event black boxes to change. We'd like to have event black boxes that cause other statistics to change as little as possible, but we'd be disconcerted to discover that there's a limit to how much we can reduce the changes that an event black box will cause in the statistics of other black boxes' events.
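  A toy version of the bookkeeping, for concreteness (purely illustrative: the Poisson process and the rates are made-up stand-ins for whatever the boxes actually do):

    import numpy as np

    rng = np.random.default_rng(0)

    def record_events(rate_hz, duration_s):
        """The list of times one event black box sends to computer memory;
        here modeled as a plain Poisson process."""
        n = rng.poisson(rate_hz * duration_s)
        return np.sort(rng.uniform(0.0, duration_s, n))

    dark_room = record_events(rate_hz=250.0, duration_s=10.0)   # source box off
    source_on = record_events(rate_hz=900.0, duration_s=10.0)   # source box powered

    for name, times in (("source off", dark_room), ("source on", source_on)):
        gaps = np.diff(times)
        print(name, "events:", len(times),
              "mean gap (s):", round(gaps.mean(), 4),
              "std of gap (s):", round(gaps.std(), 4))
    # with several event boxes one would also tabulate coincidences: pairs of
    # events in different boxes that fall within some small time window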

  To return to the real world, which already knew about the electromagnetic field, electrons, atoms, before anyone thought of recording times of events so systematically, there was already a lot of knowledge about different kinds of sources, much of which had to be unlearned when quantum mechanics came along. When we use just light, the equations are provided by quantum optics. There are different equations if we use different types of source black boxes. We know what type of source black box we are using because the statistics change differently as we change the geometry. A lot of the work of quantum mechanical experiment is to characterize newly invented source black boxes using event black boxes we have already characterized with other sources carefully enough that we can use the new source black box to characterize newly invented event black boxes.

  The altogether too difficult question is "what is there between the source black boxes and the event black boxes?" The instrumentalist is quite certain that it doesn't matter: all we need to know is how the statistics change. As I said in the last post, there are so many possibilities that it's worth not worrying too much about what's between, so we can do other things. Not quite the old-timers' "shut up and calculate", more "we can do some fun stuff until such time as there's something it's useful to say for the sake of doing even more fun stuff". There is, inevitably, a lingering thought that if we better understood what is between we could do more fun stuff, but the regularities will be the same whether we understand or not.

Wednesday, April 05, 2017

Seven years later

Seven years away from this blog. The biggest change is that I've mostly reconciled myself to quantum theory, which would have been a surprise to me seven years ago but seems quite natural to me now. The name of the blog is probably not as appropriate as it was, but whatever.

Why that change? Mostly because there are so many ways to have something "under" quantum theory. "Stochastic superdeterminism" is possible, faster-than-light can't be ruled out if it has only limited effects at large scales, and neither can a myriad of GRW-type or de Broglie-Bohm-type approaches (if one is generous about a few things). All of them are somewhat weird, but how are Buridan's ass and I to choose? Moreover, the statistics of the regularities of nature are the same either way, which will not kill me any faster whether they have something one might call an explanation or not.

In any case, by now I'm mostly happy to say that "quantum field theory is a signal processing formalism". Modern physics comes down to recording in a computer as much as we can fit into a reasonable amount of memory. A typical electrical signal could be recorded as an average voltage every trillionth of a second (a terabyte per second, say), but we don't do that because we don't have enough memory, so we save a very lossily compressed signal, perhaps, and quite commonly, as just the times when the signal changed from a low voltage to a high voltage (which might be only a few kilobytes per second). For that to be possible, we have to engineer the hardware so that the electrical signal does make transitions consistently from one voltage to another, and so that a timer is triggered to send the time to computer memory when the transition happens. The records in computer memory are what have to be modeled and perhaps explained by a quantum theoretical model. Where things get tricky is making those models as easy to use as possible. Specifically, we'd like to use quantum theory for reliable everyday engineering; we don't want to have to spend years figuring out how to make some new piece of apparatus work, so there's a kind of simplicity required. Physicists and engineers have all sorts of rules of thumb that work pretty well for relating new experimental apparatus to quantum theoretical models, and I've become more happy than I was to say that's OK, though knowing everything you need to know about quantum optics alone has become a lot.
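A minimal sketch of that kind of lossy compression (illustrative only; the thresholds and the fake pulse train are invented, and real hardware does this with a comparator and a timer rather than with numpy):

    import numpy as np

    def transition_times(voltage, dt, low=0.5, high=2.5):
        """Compress a densely sampled voltage record to just the times at which
        it jumps from below `low` to above `high` between consecutive samples:
        a few bytes per transition instead of one sample every dt seconds."""
        rising = (voltage[:-1] < low) & (voltage[1:] > high)
        return np.flatnonzero(rising) * dt

    # a fake record: 10 milliseconds sampled every nanosecond, mostly zero,
    # with three short pulses
    dt = 1e-9
    v = np.zeros(10_000_000)
    for t in (1.3e-3, 4.7e-3, 8.2e-3):
        i = int(t / dt)
        v[i:i + 50] = 3.3
    print(transition_times(v, dt))   # roughly [0.0013, 0.0047, 0.0082]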

Enough for now.

Sunday, May 30, 2010

Uniqueness, similarity, difference.

Adam Frank, at NPR, in his post Things, Ideas And Reality: What Persists?, prompted the response below (which can also be found as a comment there).
There is only uniqueness. Everything is different. Then comes noticing that things are similar. An apple is different from another apple; an apple is different from an orange, but they are both fruit. People are all different, but they are all people. Abstraction is the process of noticing and using similar beginnings to predict when we will come to similar outcomes. Noticing differences is essential to know when there may be unexpected turns, and then it is useful to notice similarities amongst differences.

Science deals only with reproducible, similar events; it can say nothing about anything that is unique. Insofar as everything is absolutely unique, Science can say nothing about the world, absolutely, but insofar as we notice similarities, Science is very useful. If the similarities are only in our imagination, the "true objective world" is as tenuous, or as solid, as our imagination.

Insofar as theory is capable, we can imagine ways both to use and test that capability.
This was also prompted by a conversation at dinner on Friday night with an Episcopal minister who is wife to a colleague of my wife. Science finds similarities that are very useful, but it cannot touch anything that is unique. If everything is unique, then, without taking away from remarkable success, Science makes contact with only a measure-zero part of the world. Also, we might think that there is no separation of one thing from another.

Saturday, May 01, 2010

Group identification and the Tea Partiers?

This is a comment, a long way down, on a Slacktivist post,

Empathy and epistemic closure

It's hard to know what my title should be, but here's my comment:

I think you're arguing that if we support each other, and consider what's best for other people as much as what's best for us, we will all make it. All 6, 8, 10, 12, 15 billion people, and every animal and plant, will go on into the cooperative future.

But suppose you believe that everything's going to hell. You believe that there are too many people on the earth. If you believe, deep down, that we're not all going to make it, empathy is a real problem. You start hardening yourself, preparing for riots and chaos, deciding who you will and won't support. Deciding which group gives you the best chance of survival, of making it through. Make that decision, and that group accepts you as one of their own, and then you're committed to that survival strategy. The closer you think the riots are, the more you can't back out, the more you have to go along even if the beliefs of the group you've decided is your survival strategy start to make no sense. The more you're committed to that group as your only chance of survival, the more you want to make it strong and purge it of anything that might make it weak. Groups within the group emerge. Of course the riots and chaos are dealt with by a fundamental concept, "the end-times", which goes close to one-to-one with environmental disaster, but gives it meaning, of a sort.

Is this "stupid"? Much has been made above of "smart people can deal with complex understandings", but the bandying about of "stupid" is just the opposite. If environmental disaster leads to a good approximation of the end-times, with only a few million people left alive in the US, 50 million across the world, say, then on the liberal gold-standard of meaning, evolution, it is arguable that it is who is left who will not be stupid, as a matter of definition. Can we be sure that only empathic people will be left? Or does it seem more of those who will be left will be people who have decided to harden themselves to outsiders, but to cooperate fiercely within their little group?

If there come to be riots and chaos, smart liberals may find themselves defending their own. By writing this comment, I suppose I can't call myself a smart liberal, but since I have residual empathy, Good Luck.

Sunday, March 07, 2010

Algebra in Wonderland


Since I was a mathematics undergraduate at Christ Church, Oxford, from 1975-78, Charles Dodgson has inevitably held a certain fascination for me, albeit one I haven't pursued. This New York Times article tells me things I probably ought to have known already. Charles Dodgson seems to have been rather the curmudgeon, but it's not clear from this article whether he had a spark as a mathematics tutor or whether he escaped from his students as much as he could. Teaching a thousand 19th Century mathematics undergraduates brilliantly would probably not make a hundredth of the cultural impact that Alice in Wonderland has made, however.

Friday, March 05, 2010

Modulation of a random signal

Partly thanks to Built on Facts, where there's a post about "Hearing The Uncertainty Principle", and partly because I've been analyzing the datasets of Gregor Weihs' experiment (arXiv:quant-ph/9810080, but it's good to look at his thesis as well), I suggested there that we can say that "QFT is about modulation of a random signal", in contrast to a common signal processing approach, in which we talk about modulation of a periodic signal.
Comment #9 more-or-less repeats what I said in my #3 (the part where I say "There is no quantum noise/fluctuations in your post, and there's none in the paper I cite above, so there's no Planck constant, which is, needless to say, a big difference."), but then goes on to something conventional, but unsupportable, "when you look for the QM particle, you will only find it in one (random) location". No to that. When you insert a high-gain avalanche photodiode somewhere in an experiment, (1) changing the configuration of the experiment will cause interference effects in other signals; (2) the avalanche photodiode signal will from time to time (by which I mean not periodically) be in the avalanche state (for the length of time known as the dead time). The times at which avalanche events occur will in some cases be correlated with eerie precision with the times at which avalanche events occur at remote places in the apparatus. Although it's entirely conventional to say that a "particle" causes an avalanche event in the avalanche photodiode, that straitjackets your understanding of QFT, and is, besides, only remotely correct if you back far away from any lingering classical ideas of what a "particle" is that aren't explicitly contained in the mathematics of Hilbert space operators and states.
Try saying, instead, "QFT is about modulation of a random signal". The post more-or-less talks about modulation of a periodic signal, but we can also talk about modulation of a Lorentz invariant vacuum state. If we use probability theory to model the vacuum state (we could also use stochastic processes, but that's a different ballgame), the mathematics is raised a level above ordinary signals, in the sense that we have introduced probability measures over the linear space of ordinary signals, as a result of which the tensor product emerges quite naturally.
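One standard formula, for the free field at least, makes the point (this is textbook material, not anything new): the vacuum assigns to each real test-function "signal" $f$ the expectation value
$$\langle 0|\,\mathrm{e}^{\mathrm{i}\hat\phi(f)}\,|0\rangle=\mathrm{e}^{-\frac{1}{2}\langle 0|\hat\phi(f)^2|0\rangle},$$
which is exactly the characteristic functional of a Gaussian probability measure over the linear space of ordinary signals, with the two-point function as its covariance; products of such measures for independent degrees of freedom are one place the tensor product comes from.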
For me, Matt Springer's posts are somewhat variable, perhaps because he's attempting to keep it simple, which as we know is one of the hardest things to attempt, but he hits the spot often enough to remain interesting. For my comment #3, see his post.

The eeriness of the correlations of the times at which avalanche events happen in avalanche photodiodes that I mention above is pretty extreme in Gregor Weihs' experiment and others like it. There's a central parametric down conversion apparatus that feeds two fiber optic cables that are 500 meters long, which at the speed of light is equivalent to about 1600 nanoseconds. When avalanche photodiodes are set up at the two remote ends of the two 500 meter fiber optic cables, about 1/20th of the time avalanche events happen within 1 nanosecond of each other. Compared to 1600 nanoseconds. The other 19/20ths of the time, there's not much of a match. We can plot the avalanche events that match within 200 nanoseconds from 2 seconds of Gregor Weihs' data:

In this plot, there's some additional information, which shows, at "Alice"'s end of the experiment, which direction an electromagnetically controlled polarization device was set at (0 or 90 degrees is one setting, 45 or 135 degrees is the other setting, which is switched at random, but on average every few hundred nanoseconds), and in which of two avalanche photodiodes there was an avalanche event (in effect choosing between 0 or 90 degree polarization or choosing between 45 or 135 degree polarization).

There are lots of events within about 1 nanosecond, there is a small excess of events that have a match within about 20 nanoseconds, and then the rest are distributed evenly. Beyond the 200 nanosecond extent of this plot, the time differences between events in "Alice"'s and "Bob"'s avalanche photodiodes that match most closely are just as uniformly distributed as here, out to a difference of about 20,000 nanoseconds, then there's a slightly decreasing density, until all 9711 [edit: this number of events happened in the first 1/4 second; multiply by 8, more-or-less, for the number of events in 2 seconds] of the times of avalanche events in Alice's data are within 120,000 nanoseconds of some avalanche event in Bob's data. The graph below shows how close each of Alice's events is to the closest event in Bob's events in the same 2 second fragment of the dataset [edit: the graph is in fact for the first quarter second of data from the same run. The general features are unchanged]:
There are about 500 of Alice's events that are so close to events in Bob's events that they don't show on the graph, then most of Alice's events are, fairly uniformly, within a time difference of about 2e-5 seconds (= 20,000 nanoseconds). The graph is steep at the far right because there are very few of Alice's events that are so separated in time from any of Bob's events.
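For anyone who wants to reproduce this kind of plot, the matching step is simple enough (a sketch that assumes the two lists of event times have already been read out of the dataset into sorted numpy arrays; the details of Weihs' file format aren't shown):

    import numpy as np

    def nearest_time_differences(alice_times, bob_times):
        """For each of Alice's avalanche event times, the absolute time
        difference to the closest of Bob's event times (both in seconds,
        both arrays sorted)."""
        idx = np.searchsorted(bob_times, alice_times)
        idx = np.clip(idx, 1, len(bob_times) - 1)
        to_right = np.abs(bob_times[idx] - alice_times)
        to_left = np.abs(bob_times[idx - 1] - alice_times)
        return np.minimum(to_left, to_right)

    # e.g., diffs = nearest_time_differences(alice, bob)
    # np.mean(diffs < 1e-9) estimates the fraction matching within 1 ns, and
    # plotting np.sort(diffs) against index gives something like the second graph above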

Trying to make some sense of large amounts of data, with good dollops of muddy randomness thrown in! Modulations of a random signal.

Thursday, February 25, 2010

A response to a video

I posted the comment below at The Poetry of Science (which is on Discovery News).
This is a theme! Scientists are trying to circle the wagons because of ClimateGate and the general perception that Science has got presentation problems. I think circling the wagons is not a good strategy.

Great poetry is more often not lovely. Do the video again showing how lovely are the ways that Science helps the US help Iraqis and Afghans. Perhaps adopt the style of Wilfred Owen's Anthem for Doomed Youth.


I worry that this kind of thing doesn't step outside preaching to the converted. Lovely places are lovely without Science telling us how to turn them into something else, whether abstractly, into a theoretical idealization, or concretely, into a parking lot. Do we implicitly say, as Scientists, that there is no beauty without understanding? Worse, is there any untouched beauty?

Arrogating everything beautiful and awesome in the world to Science is an unwarranted pretension. It is also a denial that understanding what is ugly or inconsequential is just as certainly the subject of Science. When what is ugly seems to be the fault of the technological and industrial use of Science, such denial is culpable, and there are many who cry bitterly at the power of Science to change the world. The double standard seems to be seen through by almost everyone except Scientists. Is it as it appears, that we claim credit wherever Science does good but reject blame for enabling others to be uncaring, rapacious, or evil?

There are the usual sincerely meant nods to the humility of Science in this video, but exactly how Science constructively critiques its past successes is subtle enough that this claim looks only ingratiating. That a critique is only allowed to be constructive should be honestly admitted to be self-serving --- without this constraint on critique, Science would presumably soon be dead, right? --- but it's just what Science does, for as long as people see Science to be beneficial. The continued existence of Science as a highly structured pattern of behavior depends on a flow of entropy no less than do the people who depend on Science.

Monday, February 22, 2010

Is Science a "meme"?

I'm interested in the first tangent on the Forum Thread question about “Natural Phenomenon” of whether Science is a meme.

The idea that the "methods of science" are empirically successful, or that "Science" is empirically successful, is premature. The usefulness of Science and its methods is also questionable, on a long enough time frame, and depending on what you consider success. The methods of Science and the technological use of its product, Scientific theories, have arguably allowed exponential population growth and exponential increase of resource use over the last couple of Centuries, so over that time frame one can say fairly clearly that Science is a successful meme. The real question, however, is whether the human race will wipe itself out in the next hundred years or in the next thousand years or not. If we do, Science, insofar as we take it to be characteristic of us relative to other animals, is a pretty poor meme. Perhaps 20 generations. Hopeless.

Science has been subjected to a number of challenges to its value, but one of the most damning was Rachel Carson's "Silent Spring". Scientists were shown not to have understood more than a small part of the consequences of the technological and industrial use of Science. The ripples of disbelief that Science is necessarily a good thing are reflected every time a Scientist decries Global Warming and is ignored. One can say that it is technology's and industry's use of Science that is at fault, and more broadly that it is the individuals in society that are at fault for wanting washing machines, TVs, cheaply manufactured food, ..., but splitting the whole system up in that way is beside the point. Indeed, the reductionist move of saying that Science is a useful meme, war is a bad meme, ..., misses that it is the whole system that is under the knife at every moment. We cannot do much more than guess how the system will evolve, but we make wild statements about what is good or bad.

Thursday, February 11, 2010

Three changes to 'Comment on "A glance beyond the quantum model" '

As a result of correspondence with Miguel Navascués, I have communicated three changes to Proc. Roy. Soc. A,
  • to include an acknowledgment, "I am grateful for very helpful correspondence with Miguel Navascués";
  • to change the first sentence of the Summary so that it does not make two incorrect claims about the aims of NW's paper; it would then read "“A glance beyond the quantum model” uses a modernized Correspondence Principle that begins with a discussion of particles, whereas in empirical terms particles are secondary to events.";
  • to change one sentence in my concluding paragraph to say "Navascués’ and Wunderlich’s paper requires comment where something less ambitious would have gone unchallenged", saying "comment" instead of saying "a vigorous condemnation", which I can hardly believe I wrote.
Miguel's comments were indeed very helpful. There are other changes I would like to make to my Comment, but I will only think about making them if, against likelihood, the referees recommend acceptance with changes.

For anyone reading this who hasn't submitted papers to journals, waiting for the acceptance or rejection letter is hard work, with an inhuman time-scale that is usually months but can be much shorter, depending on the vicissitudes of referees' schedules and whims. So every time an e-mail arrives from the time a paper is submitted, for several months, it may be the dreaded rejection. It's easier if you're relatively sure you've hit a sweet spot of conventional ideas that you feel sure most Physicists will get, but then the paper is perhaps not close enough to the edge to be more than a little interesting.