Thursday, February 25, 2010

A response to a video

I posted the comment below at The Poetry of Science (which is on Discovery News).
This is a theme! Scientists are trying to circle the wagons because of ClimateGate and the general perception that Science has presentation problems. I think circling the wagons is not a good strategy.

Great poetry is more often than not unlovely. Do the video again, showing how lovely are the ways in which Science helps the US help Iraqis and Afghans. Perhaps adopt the style of Wilfred Owen's Anthem for Doomed Youth.


I worry that this kind of thing doesn't step outside preaching to the converted. Lovely places are lovely without Science telling us how to turn them into something else, whether abstractly, into a theoretical idealization, or concretely, into a parking lot. Do we implicitly say, as Scientists, that there is no beauty without understanding? Worse, is there any untouched beauty?

Arrogating everything beautiful and awesome in the world to Science is an unwarranted pretension. It is also a denial that understanding everything ugly or inconsequential is just as certainly the subject of Science. When what is ugly seems to be the fault of the technological and industrial use of Science, such denial is culpable, and there are many who cry bitterly at the power of Science to change the world. Almost everyone except Scientists seems to see through the double standard. Is it as it appears, that we claim credit wherever Science does good but reject blame for enabling others to be uncaring, rapacious, or evil?

There are the usual sincerely meant nods to the humility of Science in this video, but exactly how Science constructively critiques its past successes is subtle enough that the claim looks merely ingratiating. That a critique is only allowed to be constructive should be honestly admitted to be self-serving --- without this constraint on critique, Science would presumably soon be dead, right? --- but it's just what Science does, for as long as people see Science to be beneficial. The continued existence of Science as a highly structured pattern of behavior depends on a flow of entropy no less than do the people who depend on Science.

Monday, February 22, 2010

Is Science a "meme"?

I'm interested in the first tangent of the Forum Thread question about “Natural Phenomenon”: whether Science is a meme.

The idea that the "methods of science" are empirically successful, or that "Science" is empirically successful, is premature. The usefulness of Science and its methods is also questionable, on a long enough time frame and depending on what you count as success. The methods of Science and the technological use of its product, Scientific theories, have arguably allowed exponential population growth and an exponential increase of resource use over the last couple of centuries, so over that time frame one can say fairly clearly that Science is a successful meme. The real question, however, is whether the human race will wipe itself out, in the next hundred years or the next thousand. If we do, Science, insofar as we take it to be characteristic of us relative to other animals, is a pretty poor meme. Perhaps 20 generations. Hopeless.

Science has been subjected to a number of challenges to its value, but one of the most damning was Rachel Carson's "Silent Spring". Scientists were shown not to have understood more than a small part of the consequences of the technological and industrial use of Science. The ripples of disbelief that Science is necessarily a good thing are reflected every time a Scientist warns of Global Warming and is ignored. One can say that it is technology's and industry's use of Science that is at fault, and more broadly that it is the individuals in society who are at fault for wanting washing machines, TVs, cheaply manufactured food, ..., but splitting the whole system up in that way is beside the point. Indeed, the reductionist move of saying that Science is a useful meme, war is a bad meme, ..., misses that it is the whole system that is under the knife at every moment. We cannot do much more than guess how the system will evolve, yet we make wild statements about what is good or bad.

Thursday, February 11, 2010

Three changes to 'Comment on "A glance beyond the quantum model"'

As a result of correspondence with Miguel Navascués, I have communicated three changes to Proc. Roy. Soc. A,
  • to include an acknowledgment, "I am grateful for very helpful correspondence with Miguel Navascués";
  • to change the first sentence of the Summary so that it does not make two incorrect claims about the aims of NW's paper; it would now read "“A glance beyond the quantum model” uses a modernized Correspondence Principle that begins with a discussion of particles, whereas in empirical terms particles are secondary to events.";
  • to change one sentence in my concluding paragraph to say "Navascués’ and Wunderlich’s paper requires comment where something less ambitious would have gone unchallenged", saying "comment" instead of saying "a vigorous condemnation", which I can hardly believe I wrote.
Miguel's comments were indeed very helpful. There are other changes I would like to make to my Comment, but I will only think about making them if, against the odds, the referees recommend acceptance with changes.

For anyone reading this who hasn't submitted papers to journals, waiting for the acceptance or rejection letter is hard work, on an inhuman time-scale that is usually months but can be much shorter, depending on the vicissitudes of referees' schedules and whims. So for several months after a paper is submitted, every e-mail that arrives may be the dreaded rejection. It's easier if you're relatively sure you've hit a sweet spot of conventional ideas that most Physicists will get, but then the paper is perhaps not close enough to the edge to be more than a little interesting.

The Copenhagen Interpretation and thoughts that arise.

Notification e-mails are wonderful, particularly when they bring a table of contents for Studies in History and Philosophy of Modern Physics. I found the highlight this month to be James R. Henderson's "Classes of Copenhagen interpretations: Mechanisms of collapse as typologically determinative", which classifies some of the versions of the Copenhagen Interpretation quite nicely, in terms of four Physicists: Bohr, von Neumann, Heisenberg, and Wheeler. Henderson is nicely careful to say that each of these four has a view of CI that has spawned its own industry of claims about what each of the big names really said and meant.
The citation is: Studies in History and Philosophy of Modern Physics 41 (2010) 1–8.

Henderson's very clear presentation points out for me how much the discussion starts with QM and tries to construct classical physics from it, because QM is supposed to be better and more fundamental than the old classical mechanics. As indeed it is, but when discussing foundations it's perhaps better not to start with such a strong assumption. Starting, conversely, from a purely classical point of view, discrete events can be taken to be thermodynamic transitions, without any causal account of why they happen (that being the nature of thermodynamics, in contrast to statistical mechanics), so that Heisenberg's or Wheeler's records are the given experimental data, from which unobserved causes might be inferred. There is no question of a philosophically based measurement problem in this view, because there is so far no such thing as QM; there are just records in an experimenter's notebook or computer memory.

If we start from the recorded data that comes from an experiment, the question is how we come to have QM. The fundamental issue is that we have to split the world into two types of pieces: pieces that we will model with operators S_i, which ordinarily we call states, and pieces that we will model with operators M_j, which ordinarily we call observables or measurement operators. When we use the piece of the world that we model with the operator S_i together with the piece of the world that we model with the operator M_j, and record a number V_ij, we write the equation V_ij = Trace[S_i M_j]. When we've got a few hundred or million such numbers V_ij, we solve the few hundred or million simultaneous nonlinear equations for the S_i and M_j.

This is a little strange, because QM is supposed to be a linear theory, but these are nonlinear equations. If we knew a priori what the measurement operators should be, we would have a set of linear equations for the states, and vice versa if we knew the states a priori; but it's not clear that we know either, so in fact and in principle we have a nonlinear system of equations to solve. In practice, we solve these nonlinear equations iteratively: alternately as linear equations for the set of states, guessing what the measurement operators are, so that after a while we know what states we are using (a process often known as characterization), and then as linear equations for the measurement operators. This is just an approximation method for solving a system of nonlinear equations.
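
Below is a minimal sketch of this alternation, assuming invented data generated from hidden operators so that the iteration has something consistent to converge to; the dimension, the number of pieces, and the plain least-squares updates are illustrative choices, not a real characterization code.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 2            # trial dimension for the matrices
    nS, nM = 6, 6    # number of state pieces and measurement pieces

    def random_hermitian(d, rng):
        A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
        return (A + A.conj().T) / 2

    # Invented "experimental" numbers V_ij = Trace[S_i M_j] from hidden operators.
    S_true = [random_hermitian(d, rng) for _ in range(nS)]
    M_true = [random_hermitian(d, rng) for _ in range(nM)]
    V = np.array([[np.trace(S @ M).real for M in M_true] for S in S_true])

    def solve_linear(fixed, targets, d):
        # Trace[X F] = vec(X) . vec(F^T) is linear in the entries of X, so
        # solving for X with the operators F held fixed is linear least squares.
        A = np.array([F.T.reshape(-1) for F in fixed])
        x, *_ = np.linalg.lstsq(A, targets.astype(complex), rcond=None)
        X = x.reshape(d, d)
        return (X + X.conj().T) / 2   # re-impose Hermiticity

    # Guess the measurement operators, then alternate the linear solves.
    M_est = [random_hermitian(d, rng) for _ in range(nM)]
    for sweep in range(50):
        S_est = [solve_linear(M_est, V[i, :], d) for i in range(nS)]   # "characterization"
        M_est = [solve_linear(S_est, V[:, j], d) for j in range(nM)]

    V_fit = np.array([[np.trace(S @ M).real for M in M_est] for S in S_est])
    print("residual after alternation:", np.linalg.norm(V - V_fit))

Since Trace[X F] is linear in the entries of X when the operators F are held fixed, each half-step is an ordinary linear least-squares problem; the nonlinearity only arises because the states and the measurement operators are unknown at the same time.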

Also peculiarly, the dimensions of the state and measurement operators are not determined, except by experience. In quantum optics there are some choices of experimental data for which it is enough to use 2-dimensional matrices to get good, useful models, but sometimes we have to introduce higher dimensional matrices, sometimes even infinite dimensional matrices, which is rather surprising given that we only have a finite number of numbers V_ij. Indeed, given any finite set of experimental results, QM is not falsifiable, because we can always introduce a higher dimensional matrix algebra or set of abstract operators, so we can always solve the equations V_ij = Trace[S_i M_j].
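
To make the non-falsifiability point concrete, here is a toy construction, with entirely made-up numbers, showing that any finite table V_ij can be matched exactly once the dimension is taken large enough. It cheerfully ignores the positivity and normalization that a physically sensible state would satisfy, which is part of the point: the bare equations constrain very little.

    import numpy as np

    V = np.array([[0.3, 1.7, -0.2],
                  [2.0, 0.1,  0.9]])    # any finite table of "results"
    nS, nM = V.shape
    d = nM                              # take the dimension large enough

    M = [np.diag(np.eye(d)[j]) for j in range(nM)]   # M_j picks out the j-th diagonal entry
    S = [np.diag(V[i]) for i in range(nS)]           # S_i carries the i-th row of V

    check = np.array([[np.trace(S[i] @ M[j]) for j in range(nM)]
                      for i in range(nS)])
    print(np.allclose(check, V))        # True: V_ij = Trace[S_i M_j] exactly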

Instead of solving the equations V_ij = Trace[S_i M_j] exactly, we can minimize the distance between the V_ij and the Trace[S_i M_j], using whatever norm we think is most useful. With this modification, we introduce an interesting question: what dimensionalities give us good models? We might find that 2-dimensional matrices give us wonderfully effective models even for millions of data points, in which case we might be tempted not to introduce higher dimensional matrices. Higher dimensionality will certainly allow greater accuracy, a smaller distance between the data and the models, but it may not be worth the trouble. If we find that some matrix dimensions work very well indeed for a given class of experiments, however, we are tempted to think that the world is that way, even though a better model that we haven't discovered yet may be possible.
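
If we ignore the Hermiticity and positivity constraints for a moment (a simplification of mine, for illustration), V_ij = Trace[S_i M_j] says that the data matrix factors through a d^2-dimensional space, so the best achievable Frobenius distance at dimension d is a rank-d^2 truncation error, which a singular value decomposition gives directly. A sketch, on invented data with a hidden 3-dimensional model:

    import numpy as np

    rng = np.random.default_rng(2)
    d_true, nS, nM = 3, 40, 40

    # Invented data from a hidden 3-dimensional model, plus a little noise.
    A = rng.normal(size=(nS, d_true**2))
    B = rng.normal(size=(nM, d_true**2))
    V = A @ B.T + 0.01 * rng.normal(size=(nS, nM))

    # Eckart-Young: the best rank-r approximation error is the tail of the
    # singular values, and dimension d corresponds to rank d^2.
    sv = np.linalg.svd(V, compute_uv=False)
    for d in (2, 3, 4):
        residual = np.sqrt(np.sum(sv[d**2:] ** 2))
        print("best distance at dimension", d, ":", round(residual, 4))

The distance drops sharply at d = 3 and then flattens, which is the sense in which higher dimensionality will allow greater accuracy but may not be worth the trouble.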


This is far from being all of QM. We can also introduce transformations that can be used to model the effect of intermediate pieces of the world, so that we effectively split the world into more than two pieces, and we can introduce transformations that model moving pieces of the world relative to each other instead of introducing different operators for different configurations.
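
A tiny sketch of that extra structure, with operators invented for illustration: an intermediate piece of the world acts as a transformation on the state piece before the measurement piece is applied.

    import numpy as np

    S = np.diag([0.8, 0.2])    # a state piece
    M = np.diag([1.0, 0.0])    # a measurement piece
    theta = 0.3                # an invented parameter for the intermediate piece
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    # The intermediate piece changes the modeled number from Trace[S M] to
    # Trace[U S U^T M], effectively splitting the world into three pieces.
    print(np.trace(S @ M), np.trace(U @ S @ U.T @ M))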


The point, I suppose, is that this is a very empirical approach to what QM does for us. There is a fundamental critique that I think QM in principle may not be able to answer: when we use a piece of the world S_1 with two pieces of the world M_1 and M_2, one after the other, we cannot be sure that the piece of the world that we model using S_1 is not changed, so we appear to have no warrant for using the same S_1 in the two equations V_11 = Trace[S_1 M_1] and V_12 = Trace[S_1 M_2]. We can say that as a pragmatic matter we may use the same operator in the two different experimental contexts, which we have to do to obtain a set of equations that we can solve for the states and measurement operators. But ultimately everything surely interacts with everything else (indeed QM is more insistent about this than is classical Physics), so as a matter of principle we cannot use the same operator, we cannot solve the equations, and we cannot construct ultimately detailed QM models for experimental apparatuses.


Finally, this way to construct QM does not explain much. For that we have to introduce something between the preparation apparatus and the measurement apparatus. The traditional way of talking is in terms of particle trajectories, but that can only be made to work by introducing many prevarications, the detailed content of which can be organized fairly well in terms of path integrals. An alternative, random fields, a mathematically decent way to bring probability measures and classical fields into a single structure, is the ultimate topic of this blog.

Tuesday, February 09, 2010

This is just a geeky "wow!" post. The engineering side of graphene is starting to really move along. Solid State is just going from strength to strength, while foundations, particle physics, etc., struggle away.
 
 
Special issue on Graphene
   A F Morpurgo and B Trauzettel
   2010 Semicond. Sci. Technol. 25 030301 (1p)
   Abstract: http://www.iop.org/EJ/abstract/-alert=2643/0268-1242/25/3/030301
   Full text PDF: http://www.iop.org/EJ/article/-alert=2643/0268-1242/25/3/030301/sst10_3_030301.pdf

     Since the revolutionary experimental discovery of graphene in the year 2004, research on this new two-dimensional carbon allotrope has progressed at a spectacular pace. The impact of graphene on different areas of research, including physics, chemistry, and applied sciences, is only now starting to be fully appreciated.
     There are different factors that make graphene a truly impressive system. Regarding nano-electronics and related fields, for instance, it is the exceptional electronic and mechanical properties that yield very high room-temperature mobility values, due to the particular band structure, the material 'cleanliness' (very low concentration of impurities), as well as its stiffness. Also interesting is the possibility to have a high electrical conductivity and optical transparency, a combination which cannot be easily found in other material systems. For other fields, other properties could be mentioned, many of which are currently being explored. In the first years following this discovery, research on graphene has mainly focused on the fundamental physics aspects, triggered by the fact that electrons in graphene behave as Dirac fermions due to their interaction with the ions of the honeycomb lattice. This direction has led to the discovery of new phenomena such as Klein tunneling in a solid state system and the so-called half-integer quantum Hall effect due to a special type of Berry phase that appears in graphene. It has also led to the appreciation of thicker layers of graphene, which also have outstanding new properties of great interest in their own right (e.g., bilayer graphene, which supports chiral quasiparticles that, contrary to Dirac electrons, are not massless). Now the time is coming to deepen our knowledge and improve our control of the material properties, which is a key aspect to take one step further towards applications.
     The articles in the Semiconductor Science and Technology Graphene special issue deal with a diversity of topics and effectively reflect the status of different areas of graphene research. The excitonic condensation in a double graphene system is discussed by Kharitonov and Efetov. Borca et al report on a method to fabricate and characterize graphene monolayers epitaxially grown on Ru(0001).
     Furthermore, the energy and transport gaps in etched graphene nanoribbons are analyzed experimentally by Molitor et al. Mucha-Kruczynski et al review the tight-binding model of bilayer graphene, whereas Wurm et al focus on a theoretical description of the Aharonov-Bohm effect in monolayer graphene rings. Screening effects and collective excitations are studied by Roldan et al.
     Subsequently, Palacios et al review the electronic and magnetic structures of graphene nanoribbons, a problem that is highly relevant for graphene-based transistors. Klein tunneling in single and multiple barriers in graphene is the topic of the review article by Pereira Jr et al, while De Martino and Egger discuss the spectrum of a magnetic quantum dot in graphene. Titov et al study the effect of resonant scatterers on the local density of states in a rectangular graphene setup with metallic leads. Finally, the resistance modulation of multilayer graphene controlled by gate electric fields is experimentally analyzed by Miyazaki et al. We would like to thank all the authors for their contributions, which combine new results and pedagogical discussions of the state-of-the-art in different areas: it is this combination that most often adds to the value of topical issues. Special thanks also goes to the staff of Institute of Physics Publishing for contributing to the success of this effort.

Tuesday, February 02, 2010

Measurement has happened when the data has been written somewhere?

A somewhat whimsical definition of measurement that I posted on Physics Forums yesterday, which I like enough for a moment to post here as well:
I prefer interpretations that take a measurement to have occurred only when a number has been written in a computer memory. Then a paper can be submitted to Physical Review Letters that says, "the raw measurement data (100MB) is available on a CD on request," and goes on to describe the statistical computations that were done using that data to show how well the data matches up with a proposed quantum mechanical model for the experiment. That sets the standard for measurement as "a PRL paper", in contrast to setting the standard for measurement as something like "it's in my head", or "it's in the head of someone who has a Ph.D" (which is a John Bell joke). I might be OK with a PRD paper as an arbiter of whether a measurement happened, for example, but perhaps not with a JMathPhys paper. Endless fun can be had deciding which journals' imprimatur is OK.
As well as being somewhat facetious, this is a rather instrumental definition, but it captures moderately well what the experimental data is. If one doesn't have this data, it rests on the experimenter's word that they did an experiment at all, let alone that the results are what they say they are in a 4-page PRL. The data might be written in a lab book, but given the millions or billions of data points typically generated by experiments that record individual events, it is unlikely that the data would be recorded any way other than by computer.

It's to be hoped that the raw measurement data will include all the data about the geometrical configuration of the experimental apparatus, details of what materials were used in each part of the apparatus, and how they were prepared, sufficient to enable the experiment to be repeated. Every bit of calibration data for parts of the apparatus should be in there, too. An experimentalist ultimately only has to provide enough to persuade another experimentalist that the experiment was done well and could be reproduced, and not every single number and aborted try that didn't work need be recorded, but the more the merrier. A record of the meaning of whatever numbers are recorded is of course also important.
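
As a sketch of the kind of record intended, every field name and value below is invented for illustration, not taken from any actual experiment or standard:

    # A hypothetical record of the sort described above; all names and values
    # are invented for illustration.
    experiment_record = {
        "description": "polarization-correlated photon pairs",
        "geometry": {
            "source_to_station_A_m": 10.0,    # distances between apparatus parts
            "source_to_station_B_m": 12.5,
        },
        "materials_and_preparation": {
            "source": "nonlinear crystal; cut, pump power, and temperature here",
            "detectors": "avalanche photodiodes; serial numbers and bias here",
        },
        "calibration": {
            "detector_efficiency": [0.05, 0.06],
            "analyzer_angle_offset_deg": [0.1, -0.3],
        },
        "data_meaning": "each event is (timestamp_ns, station, setting, outcome)",
    }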

A detailed record of when and where events occurred in the thermodynamically nontrivial parts of the experimental apparatus allows us to discover after the event what correlations occurred in the data, always acknowledging that if we look at the data in enough different ways we will certainly find accidental correlations that look significant. Weihs et al., Phys. Rev. Lett. 81, 5039 (1998), "Violation of Bell’s Inequality under Strict Einstein Locality Conditions", is a pretty good example of what can be done in the way of data reporting. Gregor Weihs' PhD thesis gives lots of details (in German) about the experimental apparatus, and a significant part of the data can be obtained from him.
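
As an illustration of this kind of after-the-event correlation finding in time-tagged data, here is a minimal sketch; the detection times are simulated stand-ins, not the actual Weihs et al. data, and the rates and window are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated detection times (seconds) at two stations: correlated pairs
    # with nanosecond jitter, plus uncorrelated background at station B.
    t_A = np.sort(rng.uniform(0.0, 1.0, 2000))
    t_B = np.sort(np.concatenate([t_A + rng.normal(0.0, 1e-9, t_A.size),
                                  rng.uniform(0.0, 1.0, 500)]))

    def coincidences(ta, tb, window):
        # For each event in ta, find the nearest event in tb and count it as
        # a coincidence if it falls within the window.
        idx = np.clip(np.searchsorted(tb, ta), 1, tb.size - 1)
        nearest = np.where(ta - tb[idx - 1] < tb[idx] - ta, idx - 1, idx)
        return int((np.abs(tb[nearest] - ta) < window).sum())

    window = 5e-9   # coincidence window, seconds
    print("coincidences:", coincidences(t_A, t_B, window))
    # Shifting one record by much more than the window estimates how many
    # coincidences are merely accidental (the "enough different ways" worry).
    print("accidental estimate:", coincidences(t_A + 1e-4, t_B, window))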