Sunday, May 30, 2010

Uniqueness, similarity, difference.

Adam Frank, at NPR, in his post Things, Ideas And Reality: What Persists?, prompted the response below (which can also be found as a comment there).
There is only uniqueness. Everything is different. Then comes noticing that things are similar. An apple is different from another apple; an apple is different from an orange, but they are both fruit. People are all different, but they are all people. Abstraction is the process of noticing and using similar beginnings to predict when we will come to similar outcomes. Noticing differences is essential to know when there may be unexpected turns, and then it is useful to notice similarities amongst differences.

Science deals only with reproducible, similar events; it can say nothing about anything that is unique. Insofar as everything is absolutely unique, Science can say nothing about the world, absolutely, but insofar as we notice similarities, Science is very useful. If the similarities are only in our imagination, the "true objective world" is as tenuous, or as solid, as our imagination.

Insofar as theory is capable, we can imagine ways both to use and test that capability.
This was also prompted by a conversation at dinner on Friday night with an Episcopal minister who is the wife of a colleague of my wife. Science finds similarities that are very useful, but it cannot touch anything that is unique. If everything is unique, then, without taking away from its remarkable success, Science makes contact with a part of the world of measure zero. Also, we might think that there is no separation of one thing from another.

Saturday, May 01, 2010

Group identification and the Tea Partiers?

This is a comment, a long way down, on a Slacktivist post,

Empathy and epistemic closure

It's hard to know what my title should be, but here's my comment:

I think you're arguing that if we support each other, and consider what's best for other people as much as what's best for us, we will all make it. All 6, 8, 10, 12, 15 billion people, and every animal and plant, will go on into the cooperative future.

But suppose you believe that everything's going to hell. You believe that there are too many people on the earth. If you believe, deep down, that we're not all going to make it, empathy is a real problem. You start hardening yourself, preparing for riots and chaos, deciding who you will and won't support. Deciding which group gives you the best chance of survival, of making it through. Make that decision, and that group accepts you as one of their own, and then you're committed to that survival strategy. The closer you think the riots are, the more you can't back out, and the more you have to go along even if the beliefs of the group you've chosen as your survival strategy start to make no sense. The more you're committed to that group as your only chance of survival, the more you want to make it strong and purge it of anything that might make it weak. Groups within the group emerge. Of course the riots and chaos are dealt with by a fundamental concept, "the end-times", which maps close to one-to-one onto environmental disaster, but gives it meaning, of a sort.

Is this "stupid"? Much has been made above of "smart people can deal with complex understandings", but the bandying about of "stupid" is just the opposite. If environmental disaster leads to a good approximation of the end-times, with only a few million people left alive in the US, 50 million across the world, say, then on the liberal gold-standard of meaning, evolution, it is arguable that it is who is left who will not be stupid, as a matter of definition. Can we be sure that only empathic people will be left? Or does it seem more of those who will be left will be people who have decided to harden themselves to outsiders, but to cooperate fiercely within their little group?

If there come to be riots and chaos, smart liberals may find themselves defending their own. By writing this comment, I suppose I can't call myself a smart liberal, but since I have residual empathy, Good Luck.

Sunday, March 07, 2010

Algebra in Wonderland


Since I was a mathematics undergraduate at Christ Church, Oxford, from 1975-78, Charles Dodgson has inevitably held a certain fascination for me, albeit one I haven't pursued. This New York Times article tells me things I probably ought to have known already. Charles Dodgson seems to have been rather the curmudgeon, but it's not clear from this article whether he had a spark as a mathematics tutor or whether he escaped from his students as much as he could. Teaching a thousand 19th Century mathematics undergraduates brilliantly would probably not make a hundredth of the cultural impact that Alice in Wonderland has made, however.

Friday, March 05, 2010

Modulation of a random signal

Partly thanks to Built on Facts, where there is a post about "Hearing The Uncertainty Principle", and partly because I have been analyzing the datasets of Gregor Weihs' experiment (arXiv:quant-ph/9810080, but it's good to look at his thesis as well), I suggested there that we can say that "QFT is about modulation of a random signal", in contrast to a common signal processing approach, in which we talk about modulation of a periodic signal.
Comment #9 more-or-less repeats what I said in my #3 (the part where I say "There is no quantum noise/fluctuations in your post, and there's none in the paper I cite above, so there's no Planck constant, which is, needless to say, a big difference."), but then goes on to something conventional, but unsupportable, "when you look for the QM particle, you will only find it in one (random) location". No to that. When you insert a high gain avalanche photodiode somewhere in an experiment, (1) changing the configuration of the experiment will cause interference effects in other signals; (2) the avalanche photodiode signal will from time to time (by which I mean not periodically) be in the avalanche state (for the length of time known as the dead time). The times at which avalanche events occur will in some cases be correlated with eerie precision with the times at which avalanche events occur at remote places in the apparatus. Although it's entirely conventional to say that a "particle" causes an avalanche event in the avalanche photodiode, that straitjackets your understanding of QFT, and is, besides, only remotely correct if you back far away from any lingering classical ideas of what a "particle" is that aren't explicitly contained in the mathematics of Hilbert space operators and states.
Try saying, instead, "QFT is about modulation of a random signal". The post more-or-less talks about modulation of a periodic signal, but we can also talk about modulation of a Lorentz invariant vacuum state. If we use probability theory to model the vacuum state (we could also use stochastic processes, but that's a different ballgame), the mathematics is raised a level above ordinary signals, in the sense that we have introduced probability measures over the linear space of ordinary signals, as a result of which the tensor product emerges quite naturally.
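As a concrete way to see the contrast, here is a minimal sketch in Python (nothing of the sort appears in the original post or its comments; all names, numbers, and parameters below are mine, purely for illustration) of the same slow modulation applied first to a periodic carrier and then to a broadband random signal. The modulation of the noise is invisible in any single sample, but it shows up in second-order statistics such as the local power, which is roughly the kind of thing an intensity detector responds to.

```python
# A minimal sketch, not from the post: the same slow "modulation" applied to
# a periodic carrier and to a broadband random signal.  Everything here is
# illustrative (fabricated frequencies, amplitudes, and window sizes).
import numpy as np

rng = np.random.default_rng(0)

n = 8192
t = np.arange(n) / n                                  # one unit of time, n samples

envelope = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)      # the slow modulation

periodic_carrier = np.sin(2 * np.pi * 400 * t)        # the ordinary AM picture
random_carrier = rng.standard_normal(n)               # a "vacuum-like" noise signal

am_signal = envelope * periodic_carrier               # modulation of a periodic signal
modulated_noise = envelope * random_carrier           # modulation of a random signal

# The modulation of the noise is invisible sample by sample, but it appears
# in the local variance (a second-order statistic).
window = 256
local_power = np.convolve(modulated_noise**2, np.ones(window) / window, mode="same")
print(np.corrcoef(local_power, envelope**2)[0, 1])    # strongly positive correlation
```

The point of the toy is only that "modulation" does not have to mean modulation of a periodic carrier; a statistically stationary random signal can carry modulation just as well, in its correlations rather than in its instantaneous values.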
For me, Matt Springer's posts are somewhat variable, perhaps because he's attempting to keep it simple, which as we know is one of the hardest things to attempt, but he hits the spot often enough to remain interesting. For my comment #3, see his post.

The eeriness of the correlations of the times at which avalanche events happen in avalanche photodiodes that I mention above is pretty extreme in Gregor Weihs' experiment and others like it. There's a central parametric down conversion apparatus that feeds two fiber optic cables that are 500 meters long, which at the speed of light is equivalent to about 1600 nanoseconds. When avalanche photodiodes are set up at the two remote ends of the two 500 meter fiber optic cables, about 1/20th of the time avalanche events happen within 1 nanosecond of each other, compared to the 1600 nanoseconds of travel time. The other 19/20ths of the time, there's not much of a match. We can plot the avalanche events that match within 200 nanoseconds from 2 seconds of Gregor Weihs' data:

In this plot, there's some additional information, which shows, at "Alice"'s end of the experiment, which direction an electromagnetically controlled polarization device was set to (0 or 90 degrees is one setting, 45 or 135 degrees is the other; the setting is switched at random, but on average every few hundred nanoseconds), and in which of two avalanche photodiodes there was an avalanche event (in effect choosing between 0 and 90 degree polarization, or between 45 and 135 degree polarization).

There are lots of events within about 1 nanosecond and a small excess of events that have a match within about 20 nanoseconds; the rest are distributed evenly. Beyond the 200 nanosecond extent of this plot, the time differences between events in "Alice"'s and "Bob"'s avalanche photodiodes that match most closely are just as uniformly distributed as here, out to a difference of about 20,000 nanoseconds, then there's a slightly decreasing density, until all 9711 [edit: this number of events happened in the first 1/4 second; multiply by 8, more-or-less, for the number of events in 2 seconds] of the times of avalanche events in Alice's data are within 120,000 nanoseconds of some avalanche event in Bob's data. The graph below shows how close each of Alice's events is to the closest event in Bob's data in the same 2 second fragment of the dataset [edit: the graph is in fact for the first quarter second of data from the same run; the general features are unchanged]:
There are about 500 of Alice's events that are so close to events in Bob's data that they don't show on the graph, then most of Alice's events are, fairly uniformly, within a time difference of about 2e-5 seconds (20,000 nanoseconds). The graph is steep at the far right because there are very few of Alice's events that are so separated in time from any of Bob's events.
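For anyone curious how this kind of matching is done, here is a minimal sketch in Python with numpy (the function name, the array layout, and the fabricated timestamps are all mine; this is not the format of Gregor Weihs' data files): for each of Alice's avalanche event times, find the offset to the nearest of Bob's event times, then count how many pairs fall within a chosen coincidence window.

```python
# A minimal sketch, under assumed conventions: both event-time arrays are in
# seconds and sorted in increasing order.  The sample data below is fabricated,
# not the Weihs dataset itself.
import numpy as np

def nearest_time_differences(alice_times, bob_times):
    """For each time in alice_times, the signed offset to the nearest bob time."""
    alice_times = np.asarray(alice_times)
    bob_times = np.asarray(bob_times)
    idx = np.searchsorted(bob_times, alice_times)        # insertion points into bob_times
    idx_right = np.clip(idx, 0, len(bob_times) - 1)
    idx_left = np.clip(idx - 1, 0, len(bob_times) - 1)
    d_right = bob_times[idx_right] - alice_times
    d_left = bob_times[idx_left] - alice_times
    # keep whichever neighbour is closer in absolute value
    return np.where(np.abs(d_left) < np.abs(d_right), d_left, d_right)

# Illustrative use with fabricated, uniformly random timestamps:
alice = np.sort(np.random.default_rng(1).uniform(0.0, 0.25, 9711))
bob = np.sort(np.random.default_rng(2).uniform(0.0, 0.25, 9711))
dt = nearest_time_differences(alice, bob)
print("within 1 ns:", np.sum(np.abs(dt) < 1e-9))
print("within 200 ns:", np.sum(np.abs(dt) < 2e-7))
```

With uniformly random timestamps almost nothing matches at the nanosecond level, which is exactly what makes the real data striking: the excess of sub-nanosecond matches sits on top of a flat background like the one this toy produces.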

Trying to make some sense of large amounts of data, with good dollops of muddy randomness thrown in! Modulations of a random signal.

Thursday, February 25, 2010

A response to a video

I posted the comment below at The Poetry of Science (which is on Discovery News).
This is a theme! Scientists are trying to circle the wagons because of ClimateGate and the general perception that Science has got presentation problems. I think circling the wagons is not a good strategy.

Great poetry is more often not lovely. Do the video again showing how lovely are the ways that Science helps the US help Iraqis and Afghans. Perhaps adopt the style of Wilfred Owen's Anthem for Doomed Youth.


I worry that this kind of thing doesn't step outside preaching to the converted. Lovely places are lovely without Science telling us how to turn them into something else, whether abstractly, into a theoretical idealization, or concretely, into a parking lot. Do we implicitly say, as Scientists, that there is no beauty without understanding? Worse, is there any untouched beauty?

Arrogating everything beautiful and awesome in the world to Science is an unwarranted pretension. It also goes with a denial that understanding everything that is ugly or inconsequential is just as certainly the subject of Science. When what is ugly seems to be the fault of the technological and industrial use of Science, such denial is culpable, and there are many who cry bitterly at the power of Science to change the world. The double standard seems to be seen through by almost everyone except Scientists. Is it as it appears, that we claim credit wherever Science does good but reject blame for enabling others to be uncaring, rapacious, or evil?

There are the usual sincerely meant nods to the humility of Science in this video, but exactly how Science constructively critiques its past successes is subtle enough that this claim looks merely ingratiating. That a critique is only allowed to be constructive should be honestly admitted to be self-serving --- without this constraint on critique, Science would presumably soon be dead, right? --- but it's just what Science does, for as long as people see Science to be beneficial. The continued existence of Science as a highly structured pattern of behavior depends on a flow of entropy no less than do the people who depend on Science.

Monday, February 22, 2010

Is Science a "meme"?

I'm interested in the first tangent on the Forum Thread question about “Natural Phenomenon”: whether Science is a meme.

The idea that the "methods of science" are empirically successful, or that "Science" is empirically successful, is premature. The usefulness of Science and its methods is also questionable, on a long enough time frame, and depending on what you consider success. The methods of Science and the technological use of its product, Scientific theories, have arguably allowed exponential population growth and exponential increase of resource use over the last couple of Centuries, so over that time frame one can say fairly clearly that Science is a successful meme. The real question, however, is whether the human race will wipe itself out in the next hundred years or in the next thousand years or not. If we do, Science, insofar as we take it to be characteristic of us relative to other animals, is a pretty poor meme. Perhaps 20 generations. Hopeless.

Science has been subjected to a number of challenges to its value, but one of the most damning was Rachel Carson's "Silent Spring". Scientists were shown not to have understood more than a small part of the consequences of the technological and industrial use of Science. The ripples of disbelief that Science is necessarily a good thing are reflected every time a Scientist decries Global Warming and is ignored. One can say that it is technology's and industry's use of Science that is at fault, and more broadly that it is the individuals in society that are at fault for wanting washing machines, TVs, cheaply manufactured food, ..., but splitting the whole system up in that way is beside the point. Indeed, the reductionist move of saying that Science is a useful meme, war is a bad meme, ..., misses that it is the whole system that is under the knife at every moment. We cannot do much more than guess how the system will evolve, but we make wild statements about what is good or bad.

Thursday, February 11, 2010

Three changes to 'Comment on "A glance beyond the quantum model" '

As a result of correspondence with Miguel Navascués, I have communicated three changes to Proc. Roy. Soc. A,
  • to include an acknowledgment, "I am grateful for very helpful correspondence with Miguel Navascués";
  • to change the first sentence of the Summary so that it does not make two incorrect claims about the aims of NW's paper, so that it would read "“A glance beyond the quantum model” uses a modernized Correspondence Principle that begins with a discussion of particles, whereas in empirical terms particles are secondary to events.";
  • to change one sentence in my concluding paragraph to say "Navascués’ and Wunderlich’s paper requires comment where something less ambitious would have gone unchallenged", saying "comment" instead of saying "a vigorous condemnation", which I can hardly believe I wrote.
Miguel's comments were indeed very helpful. There are other changes I would like to make to my Comment, but I will only think about making them if, against likelihood, the referees recommend acceptance with changes.

For anyone reading this who hasn't submitted papers to journals, waiting for the acceptance or rejection letter is hard work, with an inhuman time-scale that is usually months but can be much shorter, depending on the vicissitudes of referees' schedules and whims. So every time an e-mail arrives from the time a paper is submitted, for several months, it may be the dreaded rejection. It's easier if you're relatively sure you've hit a sweet spot of conventional ideas that you feel sure most Physicists will get, but then the paper is perhaps not close enough to the edge to be more than a little interesting.

The Copenhagen Interpretation and thoughts that arise.

Notification e-mails are wonderful, particularly when they bring a table of contents for Studies in History and Philosophy of Modern Physics. I found the highlight this month to be James R. Henderson's "Classes of Copenhagen interpretations: Mechanisms of collapse as typologically determinative", which classifies some of the versions of the Copenhagen Interpretation quite nicely, in terms of a class of four Physicists: Bohr, von Neumann, Heisenberg, and Wheeler. Henderson is nicely careful to say that each of these four has a view of CI that has spawned its own industry of claims about what each of the big names really said and meant.
The citation is: Studies in History and Philosophy of Modern Physics 41 (2010) 1–8.

Henderson's very clear presentation points out for me the way in which so much of the discussion starts with QM and tries to construct classical physics from it, because QM is supposed to be better and more fundamental than the old classical mechanics. As indeed it is, but when discussing foundations it's perhaps better not to start with such a strong assumption. Starting, conversely, from a purely classical point of view, discrete events can be taken as thermodynamic transitions, without any causal account for why they happen (that being the nature of thermodynamics, in contrast to statistical mechanics), so that Heisenberg's or Wheeler's records are the given experimental data, from which unobserved causes might be inferred. There's no question of there being a philosophically based measurement problem in this view, because there is so far no such thing as QM, there are just records in an experimenter's notebook or computer memory.

If we start from the recorded data that comes from an experiment, the question is how we come to have QM. The fundamental issue is that we have to split the world into two types of pieces, pieces that we will model with operators Si, which ordinarily we call states, and pieces that we will model with operators Mj, which ordinarily we call observables or measurement operators. When we use the piece of the world that we model with the operator Si together with the piece of the world that we model with the operator Mj, and record a number Vij, we write the equation Vij=Trace[Si Mj]. When we've got a few hundred or million such numbers Vij, we solve the few hundred or million simultaneous nonlinear equations for Si and Mj.

This is a little strange, because QM is supposed to be a linear theory, but these are nonlinear equations --- if we knew what the measurement operators should be a priori, we would have a set of linear equations for the states, and vice versa if we knew what the states should be a priori, but it's not clear that we know either a priori, so in fact and in principle we have a nonlinear system of equations to solve. In practice, we solve these nonlinear equations iteratively: alternately as linear equations for the set of states, guessing what the measurement operators are, so that after a while we know what the states we are using are (a process often known as characterization), and then as linear equations for the measurement operators. This, however, is just an approximation method for solving a system of nonlinear equations.
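Here is a minimal sketch in Python of the kind of alternating procedure I mean, with all of the constraints a real characterization would impose (Hermiticity, positivity, unit trace for the states) ignored, real matrices used instead of complex ones, and fabricated numbers standing in for experimental data; the names and dimensions are illustrative only.

```python
# A toy alternating solve for Vij = Trace[Si Mj]: fix the Mj and solve linear
# least squares for the Si, then fix the Si and solve for the Mj, and repeat.
# Illustrative only: constraints ignored, data fabricated.
import numpy as np

rng = np.random.default_rng(0)
d = 2                       # trial matrix dimension
n_states, n_meas = 6, 6     # number of preparation and measurement pieces

V = rng.uniform(0.0, 1.0, (n_states, n_meas))   # stand-in "experimental" numbers

def vec(A):
    return A.reshape(-1)

# Rough initial guesses.
S = [np.eye(d) / d + 0.01 * rng.standard_normal((d, d)) for _ in range(n_states)]
M = [np.eye(d) / 2 + 0.01 * rng.standard_normal((d, d)) for _ in range(n_meas)]

for _ in range(50):
    # Trace[S M] = vec(S) . vec(M^T), so with the Mj fixed each Si solves a
    # linear least-squares problem, and vice versa.
    A = np.stack([vec(Mj.T) for Mj in M])                 # rows are vec(Mj^T)
    S = [np.linalg.lstsq(A, V[i, :], rcond=None)[0].reshape(d, d)
         for i in range(n_states)]
    B = np.stack([vec(Si.T) for Si in S])                 # rows are vec(Si^T)
    M = [np.linalg.lstsq(B, V[:, j], rcond=None)[0].reshape(d, d)
         for j in range(n_meas)]

residual = np.array([[V[i, j] - np.trace(S[i] @ M[j])
                      for j in range(n_meas)] for i in range(n_states)])
print("rms misfit:", np.sqrt(np.mean(residual ** 2)))
```

The alternation converges to a local minimum of the misfit rather than to an exact solution in general, which is just the approximation character mentioned above.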

Also peculiarly, the dimensions of the state and measurement operators are not determined, except by experience. In quantum optics there are some choices of experimental data for which it is enough to use 2-dimensional matrices to get good, useful models, but sometimes we have to introduce higher dimensional matrices, sometimes even infinite dimensional matrices, which is rather surprising given that we only have a finite number of numbers Vij. Indeed, given any finite set of experimental results, QM is not falsifiable, because we can always introduce a higher dimensional matrix algebra or set of abstract operators, so we can always solve the equations Vij=Trace[Si Mj].

Instead of solving the equations Vij=Trace[Si Mj], we can minimize the distance between Vij and Trace[Si Mj], using whatever norm we think is most useful. With this modification, we introduce an interesting question: what dimensionalities give us good models? We might find that 2-dimensional matrices give us wonderfully effective models even for millions of data points, in which case we might be tempted not to introduce higher dimensional matrices. Higher dimensionality will certainly allow greater accuracy, a smaller distance between the data and the models, but it may not be worth the trouble. If we find that some matrix dimensions work very well indeed for a given class of experiments, however, we are tempted to think that the world is that way, even though a better model that we haven't discovered yet may be possible.
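To make the dimension question concrete, here is a small, self-contained toy (again with fabricated data, the constraints ignored, and an illustrative function name of my own) that fits the same numbers Vij with 1-, 2-, and 3-dimensional matrices and compares the root-mean-square misfit. With enough dimensions the misfit can always be driven down, which is the sense in which a finite set of numbers never forces a particular dimension on us.

```python
# A toy comparison of matrix dimensions for fitting Vij ~ Trace[Si Mj],
# using the same alternating least-squares idea as the previous sketch.
import numpy as np

def fit_dimension(V, d, sweeps=200, seed=0):
    """Root-mean-square misfit of an alternating least-squares fit with
    d-by-d real matrices.  A toy, not a tomography code."""
    rng = np.random.default_rng(seed)
    n_s, n_m = V.shape
    S = [rng.standard_normal((d, d)) for _ in range(n_s)]
    M = [rng.standard_normal((d, d)) for _ in range(n_m)]
    for _ in range(sweeps):
        A = np.stack([Mj.T.reshape(-1) for Mj in M])      # rows are vec(Mj^T)
        S = [np.linalg.lstsq(A, V[i, :], rcond=None)[0].reshape(d, d)
             for i in range(n_s)]
        B = np.stack([Si.T.reshape(-1) for Si in S])      # rows are vec(Si^T)
        M = [np.linalg.lstsq(B, V[:, j], rcond=None)[0].reshape(d, d)
             for j in range(n_m)]
    misfit = np.array([[V[i, j] - np.trace(S[i] @ M[j])
                        for j in range(n_m)] for i in range(n_s)])
    return np.sqrt(np.mean(misfit ** 2))

V = np.random.default_rng(42).uniform(0.0, 1.0, (8, 8))   # stand-in "data"
for d in (1, 2, 3):
    print(d, fit_dimension(V, d))    # the misfit shrinks as d grows
```

Whether the extra dimensions earn their keep is then a model-selection judgment, not something the data alone decides.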


This is far from being all of QM. We can also introduce transformations that can be used to model the effect of intermediate pieces of the world, so that we effectively split the world into more than two pieces, and we can introduce transformations that model moving pieces of the world relative to each other instead of introducing different operators for different configurations.


The point, I suppose, is that this is a very empirical approach to what QM does for us. There is a fundamental critique that I think QM in principle may not be able to answer, which is that when we use a piece of the world S1 with two pieces of the world M1 and M2, one after the other, we cannot be sure that the piece of the world that we model using S1 is not changed, so we appear to have no warrant for using the same S1 in two equations, V11=Trace[S1 M1] and V12=Trace[S1 M2]. We can say, as a pragmatic matter, that we can use the same operator in the two different experimental contexts, which we have to do to obtain a set of equations that we can solve for the states and measurement operators; but ultimately everything surely interacts with everything else (indeed QM is more insistent about this than classical Physics is), so as a matter of principle we cannot use the same operator, we cannot solve the equations, and we cannot construct ultimately detailed QM models for experimental apparatuses.


Finally, this way to construct QM does not explain much. For that we have to introduce something between the preparation apparatus and the measurement apparatus. The traditional way of talking is in terms of particle trajectories, but that can only be made to work by introducing many prevarications, the detailed content of which can be organized fairly well in terms of path integrals. An alternative, random fields, a mathematically decent way to bring probability measures and classical fields into a single structure, is the ultimate topic of this blog.

Tuesday, February 09, 2010

This is just a geeky "wow!" post. The engineering side of graphene is starting to really move along. Solid State is just going from strength to strength, while foundations, particle physics, etc., struggle away.
 
 
Special issue on Graphene
   A F Morpurgo and B Trauzettel
   2010 Semicond. Sci. Technol. 25 030301 (1p)
   Abstract: http://www.iop.org/EJ/abstract/-alert=2643/0268-1242/25/3/030301
   Full text PDF: http://www.iop.org/EJ/article/-alert=2643/0268-1242/25/3/030301/sst10_3_030301.pdf

     Since the revolutionary experimental discovery of graphene in the year 2004, research on this new two-dimensional carbon allotrope has progressed at a spectacular pace. The impact of graphene on different areas of research -- including physics, chemistry, and applied sciences -- is only now starting to be fully appreciated.
     There are different factors that make graphene a truly impressive system. Regarding nano-electronics and related fields, for instance, it is the exceptional electronic and mechanical properties that yield very high room-temperature mobility values, due to the particular band structure, the material 'cleanliness' (very low concentration of impurities), as well as its stiffness. Also interesting is the possibility to have a high electrical conductivity and optical transparency, a combination which cannot be easily found in other material systems. For other fields, other properties could be mentioned, many of which are currently being explored. In the first years following this discovery, research on graphene has mainly focused on the fundamental physics aspects, triggered by the fact that electrons in graphene behave as Dirac fermions due to their interaction with the ions of the honeycomb lattice. This direction has led to the discovery of new phenomena such as Klein tunneling in a solid state system and the so-called half-integer quantum Hall effect due to a special type of Berry phase that appears in graphene. It has also led to the appreciation of thicker layers of graphene, which also have outstanding new properties of great interest in their own right (e.g., bilayer graphene, which supports chiral quasiparticles that, contrary to Dirac electrons, are not massless). Now the time is coming to deepen our knowledge and improve our control of the material properties, which is a key aspect to take one step further towards applications.
     The articles in the Semiconductor Science and Technology Graphene special issue deal with a diversity of topics and effectively reflect the status of different areas of graphene research. The excitonic condensation in a double graphene system is discussed by Kharitonov and Efetov. Borca et al report on a method to fabricate and characterize graphene monolayers epitaxially grown on Ru(0001).
     Furthermore, the energy and transport gaps in etched graphene nanoribbons are analyzed experimentally by Molitor et al. Mucha-Kruczynski et al review the tight-binding model of bilayer graphene, whereas Wurm et al focus on a theoretical description of the Aharonov-Bohm effect in monolayer graphene rings. Screening effects and collective excitations are studied by Roldan et al.
     Subsequently, Palacios et al review the electronic and magnetic structures of graphene nanoribbons, a problem that is highly relevant for graphene-based transistors. Klein tunneling in single and multiple barriers in graphene is the topic of the review article by Pereira Jr et al, while De Martino and Egger discuss the spectrum of a magnetic quantum dot in graphene. Titov et al study the effect of resonant scatterers on the local density of states in a rectangular graphene setup with metallic leads. Finally, the resistance modulation of multilayer graphene controlled by gate electric fields is experimentally analyzed by Miyazaki et al. We would like to thank all the authors for their contributions, which combine new results and pedagogical discussions of the state-of-the-art in different areas: it is this combination that most often adds to the value of topical issues. Special thanks also goes to the staff of Institute of Physics Publishing for contributing to the success of this effort.

Tuesday, February 02, 2010

Measurement has happened when the data has been written somewhere?

A somewhat whimsical definition of measurement that I posted on Physics Forums yesterday, which I like enough for a moment to post here as well:
I prefer interpretations that take a measurement to have occurred only when a number has been written in a computer memory. Then a paper can be submitted to Physical Review Letters that says, "the raw measurement data (100MB) is available on a CD on request," and goes on to describe the statistical computations that were done using that data to show how well the data matches up with a proposed quantum mechanical model for the experiment. That sets the standard for measurement as "a PRL paper", in contrast to setting the standard for measurement as something like "it's in my head", or "it's in the head of someone who has a Ph.D" (which is a John Bell joke). I might be OK with a PRD paper as an arbiter of whether a measurement happened, for example, but perhaps not with a JMathPhys paper. Endless fun can be had deciding which journals' imprimatur is OK.
As well as being somewhat facetious, this is a rather instrumental definition, but it captures moderately well what the experimental data is. If one doesn't have this data, it rests on the experimenter's word that they did an experiment at all, let alone that the results are what they say they are in a 4 page PRL. The data might be written in a lab book, but given the millions or billions of data points typically generated by experiments that record individual events, it seems unlikely that the data will not be written by computer.

It's to be hoped that the raw measurement data will include all the data about the geometrical configuration of the experimental apparatus, and details of what materials were used in each part of the apparatus, and how they were prepared, that are sufficient to enable the experiment to be repeated. Every bit of calibration data for parts of the apparatus should be in there, too. An experimentalist ultimately only has to provide enough to persuade another experimentalist that the experiment was done well and could be reproduced, and every single number and aborted try that didn't work might not be recorded, but the more the merrier. A record of the meaning of whatever numbers are recorded is of course also important.

A detailed record of when and where events occurred in the thermodynamically nontrivial parts of the experimental apparatus allows us to discover what correlations occurred in the data after the event, always acknowledging that if we look at the data in enough different ways we will certainly find accidental correlations that look significant. Weihs et al., Phys. Rev. Lett. 81, 5039 (1998), "Violation of Bell’s Inequality under Strict Einstein Locality Conditions", is a pretty good example of what can be done in the way of data reporting. A significant part of the data can be obtained from Gregor Weihs, and his PhD thesis gives lots of details (in German) about the experimental apparatus.

Friday, January 29, 2010

This is a response to Bee's post at Backreaction, "Division by Zero".

If I write to someone, I assume they will to some small extent register the first two words of the title of my e-mail, and the nature of the e-mail address, and nothing more. If I pay some attention to whether those two words catch their attention, perhaps they'll read the first sentence of the e-mail. If something about that seems interesting, to them, they may go on into the attachment, the arXiv posting, or the published paper that I ask them to read. I equally apologize and take no offense if they want no part of it.

I write to people who write engagingly, people whose approval I think valuable. I've written to Bee once, who replied with what I recognized as asperity that she doesn't work on foundations, but I knew very well that she does. Bee writes about methodology quite often, and her thoughts on Physics go considerably beyond "shut up and calculate", often with wisdom. Bee appears to have quite broad interests, and a crank who doesn't believe fairly passionately that what they're doing ought to be interesting to someone like her isn't going to be a crank for long. The emotional costs of being publicly identified as a crank are very high. I've been politely but rightly called out by Chad Orzel in the last few days for being at least something of a crank on questions of how to popularize QM, at Continuity, Discretion, and the Perils of Popularization, and even that hurt a little, so I'm now licking my wounds, hoping that I won't become a bitter old man about it. An interesting process.

Do people who receive a lot of crank Physics find it painful to have these things arrive because of the pain the authors feel at what they think is oppression? It is sadly true that the individual desperation of pain can become very unpleasant to a community, but is it best to turn away from the leper? Happily, I've had few enough of these attempts on my time to have been able to reply to them all, but I do not expect the same of someone who receives many.

All of which is to say to well-known Physicists who get a lot of this stuff, let these things come, allow that you'll read as much as you read, that if these strangers to you don't interest you beyond the first two words of the title of the e-mail, then that's all you'll read. Every now and then you can announce that you almost never reply to such e-mails, if you like, but people who write e-mails attempting to get feedback know very well that from almost everyone their most likely feedback is no reply at all.

I suppose this post will stop me and most reasonable people who are reading along with Backreaction from sending anything to Bee in future. I think it sets a tone that says that she does her own thing in her own community. So Bee has achieved something. But I suppose also that there will still be the unreasonable people and those who come new to Backreaction, who might believe from what they see that she is the attractive, curious person she presents so well.

And to this response, no answer required.

Thursday, January 28, 2010

On Monday morning, I read something announced by e-mail from Proc. Roy. Soc. A: Navascués, M. & Wunderlich, H. 2010 A glance beyond the quantum model. Proc. R. Soc. A 466, 881-890. (doi: 10.1098/rspa.2009.0453). The authors make numerous assumptions of which I consider them to be less aware than they should be if they're to write a foundational paper, but that is common to very many physicists, so it's not something I consider outrageous. It's enormously difficult to notice that one is buried and might want to rise from under the commonplace. Of course, the issue is whether anyone can provide different assumptions that work better. For me, for now, I believe a random field approach is again a viable way to understand quantum field theory, and through it quantum theory, despite the standard Physics views about the violation of Bell inequalities (a belief that I can to some extent justify because of my paper Bell inequalities for random fields - cond-mat/0403692, J. Phys. A: Math. Gen. 39 (2006) 7441-7455), so I started to write a formal Comment on their paper. A day and a half later, I thought it came out well, so I submitted it, which is more rash than one is supposed to be, but the chance of getting the tone just right for the editors and referees to accept it is small enough that it's not worth spending enormous amounts of time on it.

The Comment can be found at http://arxiv.org/abs/1001.4993. Here's the title and summary:

Comment on “A glance beyond the quantum model”

Summary. The aim of “A glance beyond the quantum model” to modernize the Correspondence Principle is compromised by an assumption that a classical model must start with the idea of particles, whereas in empirical terms particles are secondary to events. The discussion also proposes, contradictorily, that observers who wish to model the macroscopic world classically should do so in terms of classical fields, whereas, if we are to use fields, it would be more appropriate to adopt the mathematics of random fields. Finally, the formalism used for discussion of Bell inequalities introduces two assumptions that are not necessary for a random field model, locality of initial conditions and non-contextuality, even though these assumptions are, in contrast, very natural for a classical particle model. Whether we discuss physics in terms of particles or in terms of events and (random) fields leads to differences that a glance would be well to notice.
The weird thing is that I've since discovered (a friend pointed it out to me) that the arXiv preprint of this paper has no mention of fields in it at all. It looks as if the authors may have put in a reference to fields at the behest of a referee. The way they introduce classical fields in the published version looked heart-felt to me, "we've got to think about this in terms of classical continuous fields, ..." (that's not a quote), but with there being no "field" language in the arXiv version it seems that the whole paper is really about particles, business as usual. If I had seen that, I would not have felt the muse to write, and the certainty of a Comment on the lines I've just submitted being rejected would have been absolute, but there you go. The outcome, however, is that my Comment makes not much sense, at least not to me, if you read only the arXiv version. Sorry if you can't access the published version, but you could write to the authors requesting that they send you a copy.

There was a time when I used not to know how what I wrote would look to other people, particularly Physicists, almost at all. There came a time when I could see that Physicists would be OK with what I was writing and how I was saying it, but now it's more fuzzy again. I'm writing stuff that is only a little twisted from the mainstream, little enough that Physicists sometimes think it's interesting and constructive enough, and I cite the literature just well enough, and there's enough mathematics, that it ought to be published, and referees say OK to it, but sometimes, I suppose, there's not enough that's interesting and constructive to justify it. I've come to identify enough with Physics that I often write it with a capital P, and I love the way the whole thing tries to fit together as if it's a random tiling, not quite a lattice of ideas, so I don't want to pull it down, but I do want to do improvements that go beyond adding a flower or two.

My strong expectation is that this Comment will be rejected by the editors directly, or, if not, by the referee(s), but I write as much to see more clearly as to be read.