Saturday, April 29, 2017

Quantum measurement and Free Will

John Bell invoked Free Will for experimenters as part of a derivation of inequalities that would have to be satisfied by classical relativistic models [Bell, Chapter 12], a modification of an earlier stipulation that experimental choices should be "at the whim of experimenters" [Bell, Chapter 7].
More pragmatically, he required that the experimenters' choices should be "effectively free for the purpose at hand", which suggests some consideration of just how free that might be in the context of quantum measurement.

Consider Alice and Bob running two ends of an experiment. Alice and Bob each have to choose a random sequence of 0s and 1s. If either of them chooses 0 too often or 1 too often, we have to restart the data collection. They're also not allowed to have too many 0000 sequences, too many 01101110 sequences, et cetera; they have to satisfy all the tests here, say, within some pre-agreed limits. They're not allowed to look at the statistics of their past choices to make sure that they don't break any of the rules. A typical experiment might need Alice and Bob each to generate a sequence that contains a few hundred million 0s and 1s that can be certified after the event to be random enough. Furthermore, without conferring, the two lists must not be correlated, again within some pre-agreed limit. Hard to do. Alice and Bob don't seem to be very free at all. Every individual 0 or 1 can be freely chosen, but the statistics are constrained.
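To make the constraints a little more concrete, here is a toy sketch of the kind of after-the-event checks such sequences might have to pass. These are illustrative stand-ins with made-up thresholds, not the standard randomness test suites, and the sequence length and tolerances are assumptions for the sake of the example.

```python
import numpy as np

# Toy after-the-event checks on Alice's and Bob's bit sequences.
# Illustrative stand-ins only, not the official randomness test suites.

rng = np.random.default_rng(1)
alice = rng.integers(0, 2, size=1_000_000)   # pretend these were freely chosen
bob = rng.integers(0, 2, size=1_000_000)

def frequency_ok(bits, tol=3.0):
    """The proportion of 1s should be near 1/2, within ~tol standard deviations."""
    n = len(bits)
    return abs(bits.sum() - n / 2) <= tol * np.sqrt(n) / 2

def pattern_ok(bits, pattern, tol=3.0):
    """A short pattern shouldn't occur much more or less often than expected."""
    k = len(pattern)
    windows = np.lib.stride_tricks.sliding_window_view(bits, k)
    count = np.sum(np.all(windows == pattern, axis=1))
    expected = len(windows) / 2**k
    return abs(count - expected) <= tol * np.sqrt(expected)

def uncorrelated_ok(a, b, tol=3.0):
    """Alice and Bob shouldn't agree much more or less than half the time."""
    return abs(np.sum(a == b) - len(a) / 2) <= tol * np.sqrt(len(a)) / 2

print(frequency_ok(alice), frequency_ok(bob))
print(pattern_ok(alice, np.array([0, 0, 0, 0])))
print(pattern_ok(bob, np.array([0, 1, 1, 0, 1, 1, 1, 0])))
print(uncorrelated_ok(alice, bob))
```

Passing a handful of such checks is nowhere near a certification of randomness, of course; the point is only how heavily the statistics are constrained.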

Alice and Bob in practice farm out the job either to random number generators or to photon detectors driven by light from stars 600 light years away (arXiv here). No Free Will required.

I don't have much problem with Bell-EPR experiments these days, but the seemingly pervasive idea that Free Will plays a significant part in the discussion seems unsupportable.

The discussion above hints at the stochastic nature of the constraints on Free Will. Suppose that Alice and Bob are both friends of Wigner. They agree that Wigner can construct quantum mechanical models of their brains that predict the statistics of their choices, checked against practice runs in which they choose lists of millions of random numbers, millions of times over. If quantum theory is truly universal, this is just hard to do, even very hard, but it's not in principle impossible. This model doesn't constrain Alice and Bob's Free Will, it just describes where their Free Will has brought them to. If Alice and Bob include observations of stars 600 light years away to decide their 0 and 1 choices, then Wigner has to include a quantum mechanical model of the light from those stars that is accurate enough to describe the statistics of Alice's and Bob's lists. A quantum mechanical model describes the statistics of Alice's and Bob's choices about as much as would a classical stochastic model.

Bell J S 1987 Speakable and unspeakable in quantum mechanics (Cambridge: Cambridge University Press).

Thursday, April 06, 2017

I'll do a bit of catching up on newish news. After a conversation with our daughter, I posted a video to YouTube, https://www.youtube.com/watch?v=frSL-BJTh90, that makes a blunt point about quantum mechanics:

Quantum Mechanics: Event Thinking

Published on Feb 18, 2017
To save time, watch the last five seconds, where I write down the word that this is in part a polemic against. That word appears in almost every interpretation of quantum mechanics. In this video, I talk about how to think about quantum mechanics as about events instead of using that word. This isn't a full-blown interpretation of quantum mechanics in 4'26", but it's a way of thinking that I find helpful. Something can be taken from this way of thinking without knowing anything about quantum mechanics, but inevitably the more math you know already the more you'll pick up on nuances (and, doubtless, know why you disagree with many of them).
Thinking about quantum mechanics as about events helps a little, but thinking of quantum field theory as a formalism for doing signal analysis is better, if you can get to that level of mathematics.

Adding a little more thinking in terms of events, imagine that we have a black box that puts out a continuous zero voltage on an output wire, but occasionally something happens inside the box so that the voltage rises sharply to some non-zero voltage for a very short period of time, then the voltage equally sharply returns to zero. We set up a clock so that whenever the voltage rises the time is sent to a computer's memory.
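For concreteness, here is a minimal sketch of that time-stamping step in software, assuming we start from a regularly sampled voltage record; real apparatus does this in hardware with a comparator and a timer, and the sample rate, threshold, and pulse shape below are arbitrary choices for illustration.

```python
import numpy as np

# Minimal sketch: turn a sampled voltage record into a list of event times.
# Real apparatus does this in hardware; the numbers here are made up.

sample_rate = 1e9                    # samples per second (assumed)
n = 1_000_000                        # one millisecond of data
rng = np.random.default_rng(2)

voltage = rng.normal(0.0, 0.01, size=n)               # quiet baseline noise
for s in rng.choice(n - 10, size=50, replace=False):   # paste in sharp pulses
    voltage[s:s + 5] += 1.0

threshold = 0.5
above = voltage > threshold
rising = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # upward threshold crossings
event_times = rising / sample_rate                     # only these times are stored

print(f"{len(event_times)} events; first few: {event_times[:3]}")
```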
  When we put our event black box into a dark room at 20℃ we see events every now and then; if we change the temperature, the statistics of the events change a little. Imagine we have a different kind of black box, which has a power cable into it but no output; when we introduce this box and turn on its power, however, the statistics of the events from the first box change, so we call the new kind of box a source of events. If we move the source black box to a different place, the statistics of the events change.
  If we have a number of event black boxes, we can do more sophisticated statistics, including correlations between when events happen. Then we can introduce multiple source black boxes and other apparatus, such as lenses, prisms, waveplates, polarizers, crystals, etc., and see what changes there are in the statistics.
  After many decades, we would have a quite comprehensive list of how statistics change as we change many aspects of the geometrical arrangement of source black boxes, event black boxes, and other apparatus. We would find that the way the statistics change obeys various equations as we move the pieces around. Eventually we would find that there are different kinds of source black boxes, which affect different kinds of event black boxes differently, and we would characterize the different ways that changes of the geometry change the statistics of the events.
  One thing that would soon become clear is that event black boxes cause statistics associated with other event black boxes to change. We'd like to have event black boxes that cause other statistics to change as little as possible, but we'd be disconcerted to discover that there's a limit to how much we can reduce the changes that an event black box will cause in the statistics of other black boxes' events.

  To return to the real world: the electromagnetic field, electrons, and atoms were already known about before anyone thought of recording times of events so systematically, so there was already a lot of knowledge about different kinds of sources, much of which had to be unlearned when quantum mechanics came along. When we use just light, the equations are provided by quantum optics. There are different equations if we use different types of source black boxes. We know what type of source black box we are using because the statistics change differently as we change the geometry. A lot of the work of quantum mechanical experiment is to characterize newly invented source black boxes using event black boxes we have already characterized with other sources carefully enough that we can use the new source black box to characterize newly invented event black boxes.

  The altogether too difficult question is "what is there between the source black boxes and the event black boxes?" The instrumentalist is quite certain that it doesn't matter: all we need to know is how the statistics change. As I said in the last post, there are so many possibilities that it's worth not worrying too much about what's in between so we can do other things. Not quite the old-timers' "shut up and calculate", more "we can do some fun stuff until such time as there's something it's useful to say for the sake of doing even more fun stuff". There is, inevitably, a lingering thought that if we better understood what is between we could do more fun stuff, but the regularities will be the same whether we understand or not.

Wednesday, April 05, 2017

Seven years later

Seven years away from this blog. The biggest change is that I've mostly reconciled myself to quantum theory, which would have been a surprise to me seven years ago but seems quite natural to me now. The name of the blog is probably not as appropriate as it was, but whatever.

Why that change? Mostly because there are so many ways to have something "under" quantum theory. "Stochastic superdeterminism" is possible; faster-than-light can't be ruled out if it has only limited effects at large scales; nor can a myriad of GRW-type or de Broglie-Bohm-type approaches (if one is generous about a few things). All of them are somewhat weird, but how are Buridan's ass and I to choose? Moreover, the statistics of the regularities of nature are the same either way, and they will not kill me any faster whether they have something one might call an explanation or not.

In any case, by now I'm mostly happy to say that "quantum field theory is a signal processing formalism". Modern physics comes down to recording in a computer as much as we can fit into a reasonable amount of memory. A typical electrical signal could be recorded as an average voltage every trillionth of a second (a terabyte per second, say), but we don't do that because we don't have enough memory, so we save a very lossily compressed signal, quite commonly just the times when the signal changed from a low voltage to a high voltage (which might be only a few kilobytes per second). For that to be possible, we have to engineer the hardware so that the electrical signal does make transitions consistently from one voltage to another, and so that a timer is triggered to send the time to computer memory when the transition happens. The records in computer memory are what have to be modeled and perhaps explained by a quantum theoretical model. Where things get tricky is making those models as easy to use as possible. Specifically, we'd like to use quantum theory for reliable everyday engineering: we don't want to have to spend years figuring out how to make some new piece of apparatus work, so there's a kind of simplicity required. Physicists and engineers have all sorts of rules of thumb that work pretty well for relating new experimental apparatus to quantum theoretical models, and I've become more happy than I was to say that's OK, though even knowing everything you need to know about quantum optics alone has become a lot.
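The back-of-envelope numbers behind that comparison, with the sample size, timestamp size, and count rate below being illustrative assumptions rather than figures from any particular experiment:

```python
# Back-of-envelope comparison of raw sampling versus event-time recording.
# All figures are illustrative assumptions, not measured values.

samples_per_second = 1e12        # one sample every trillionth of a second
bytes_per_sample = 1             # one byte of voltage resolution (assumed)
raw_rate = samples_per_second * bytes_per_sample       # bytes per second

events_per_second = 500          # a made-up count rate
bytes_per_timestamp = 8
event_rate = events_per_second * bytes_per_timestamp   # bytes per second

print(f"raw sampling: {raw_rate / 1e12:.0f} TB/s")
print(f"event times:  {event_rate / 1e3:.0f} kB/s")
```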

Enough for now.

Sunday, May 30, 2010

Uniqueness, similarity, difference.

Adam Frank, at NPR, in his post Things, Ideas And Reality: What Persists?, prompted the response below (which can also be found as a comment there).
There is only uniqueness. Everything is different. Then comes noticing that things are similar. An apple is different from another apple; an apple is different from an orange, but they are both fruit. People are all different, but they are all people. Abstraction is the process of noticing and using similar beginnings to predict when we will come to similar outcomes. Noticing differences is essential to know when there may be unexpected turns, and then it is useful to notice similarities amongst differences.

Science deals only with reproducible, similar events; it can say nothing about anything that is unique. Insofar as everything is absolutely unique, Science can say nothing about the world, absolutely, but insofar as we notice similarities, Science is very useful. If the similarities are only in our imagination, the "true objective world" is as tenuous, or as solid, as our imagination.

Insofar as theory is capable, we can imagine ways both to use and test that capability.
This was also prompted by a conversation at dinner on Friday night with an Episcopal minister who is wife to a colleague of my wife. Science finds similarities that are very useful, but it cannot touch anything that is unique. If everything is unique, then, without taking away from remarkable success, Science makes contact with the world only on a set of measure zero. Also, we might think that there is no separation of one thing from another.

Saturday, May 01, 2010

Group identification and the Tea Partiers?

This is a comment, a long way down, on a Slacktivist post,

Empathy and epistemic closure

It's hard to know what my title should be, but here's my comment:

I think you're arguing that if we support each other, and consider what's best for other people as much as what's best for us, we will all make it. All 6, 8, 10, 12, 15 billion people, and every animal and plant, will go on into the cooperative future.

But suppose you believe that everything's going to hell. You believe that there are too many people on the earth. If you believe, deep down, that we're not all going to make it, empathy is a real problem. You start hardening yourself, preparing for riots and chaos, deciding who you will and won't support. Deciding which group gives you the best chance of survival, of making it through. Make that decision, and that group accepts you as one of their own, then you're committed to that survival strategy. The closer you think the riots are, the more you can't back out, the more you have to go along even if the beliefs of the group you've decided is your survival strategy start to make no sense. The more you're committed to that group as your only chance of survival, the more you want to make it strong and purge it of anything that might make it weak. Groups within the group emerge. Of course the riots and chaos are dealt with by a fundamental concept, "the end-times", which goes close to one-to-one with environmental disaster, but gives it meaning, of a sort.

Is this "stupid"? Much has been made above of "smart people can deal with complex understandings", but the bandying about of "stupid" is just the opposite. If environmental disaster leads to a good approximation of the end-times, with only a few million people left alive in the US, 50 million across the world, say, then on the liberal gold-standard of meaning, evolution, it is arguable that whoever is left will, as a matter of definition, not be stupid. Can we be sure that only empathic people will be left? Or does it seem more likely that those who are left will be people who have decided to harden themselves to outsiders, but to cooperate fiercely within their little group?

If there come to be riots and chaos, smart liberals may find themselves defending their own. By writing this comment, I suppose I can't call myself a smart liberal, but since I have residual empathy, Good Luck.

Sunday, March 07, 2010

Algebra in Wonderland


Algebra in Wonderland

Since I was a mathematics undergraduate at Christ Church, Oxford, from 1975-78, Charles Dodgson has inevitably had a certain fascination, albeit one I haven't pursued. This New York Times article tells me things I probably ought to have known already. Charles Dodgson seems to have been rather the curmudgeon, but it's not clear from this article whether he had a spark as a mathematics tutor or whether he escaped from his students as much as he could. Teaching a thousand 19th Century mathematics undergraduates brilliantly would probably not make a hundredth of the cultural impact that Alice in Wonderland has made, however.

Friday, March 05, 2010

Modulation of a random signal

Partly thanks to Built on Facts, where there's a post about "Hearing The Uncertainty Principle", and partly because I've been analyzing the datasets of Gregor Weihs' experiment (arXiv:quant-ph/9810080, though it's good to look at his thesis as well), I suggested there that we can say that "QFT is about modulation of a random signal", in contrast to a common signal processing approach, in which we talk about modulation of a periodic signal.
Comment #9 more-or-less repeats what I said in my #3 (the part where I say "There is no quantum noise/fluctuations in your post, and there's none in the paper I cite above, so there's no Planck constant, which is, needless to say, a big difference."), but then goes on to something conventional but unsupportable: "when you look for the QM particle, you will only find it in one (random) location". No to that. When you insert a high gain avalanche photodiode somewhere in an experiment, (1) changing the configuration of the experiment will cause interference effects in other signals; (2) the avalanche photodiode signal will from time to time (by which I mean not periodically) be in the avalanche state (for the length of time known as the dead time). The times at which avalanche events occur will in some cases be correlated with eerie precision with the times at which avalanche events occur at remote places in the apparatus. Although it's entirely conventional to say that a "particle" causes an avalanche event in the avalanche photodiode, that straitjackets your understanding of QFT, and is, besides, only remotely correct if you back far away from any lingering classical ideas of what a "particle" is that aren't explicitly contained in the mathematics of Hilbert space operators and states.
Try saying, instead, "QFT is about modulation of a random signal". The post more-or-less talks about modulation of a periodic signal, but we can also talk about modulation of a Lorentz invariant vacuum state. If we use probability theory to model the vacuum state (we could also use stochastic processes, but that's a different ballgame), the mathematics is raised a level above ordinary signals, in the sense that we have introduced probability measures over the linear space of ordinary signals, as a result of which the tensor product emerges quite naturally.
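To make "modulation of a Lorentz invariant vacuum state" slightly more concrete, here is one standard way of writing the vacuum of a free scalar field as a Gaussian over test functions; this is a sketch in one common set of conventions (factors of 2π, the sign of the Fourier transform, and the placement of the Planck constant all vary between references):

$$\langle 0|\,e^{i\hat\phi(f)}\,|0\rangle=\exp\!\left[-\tfrac{1}{2}(f,f)\right],\qquad
(f,g)=\hbar\int\frac{\mathrm{d}^4k}{(2\pi)^4}\,2\pi\,\delta(k\!\cdot\!k-m^2)\,\theta(k^0)\,\tilde f^*(k)\,\tilde g(k),$$

so that for each test function $f$ the smeared field $\hat\phi(f)$ has a Gaussian distribution with variance $(f,f)$ in the vacuum state. That is the sense in which there is a probability measure over the values taken by ordinary signals, indexed by the "window" $f$ through which we look at them; preparing a state other than the vacuum then modulates that measure.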
For me, Matt Springer's posts are somewhat variable, perhaps because he's attempting to keep it simple, which as we know is one of the hardest things to attempt, but he hits the spot often enough to remain interesting. For my comment #3, see his post.

The eeriness of the correlations of the times at which avalanche events happen in avalanche photodiodes that I mention above is pretty extreme in Gregor Weihs' experiment and others like it. There's a central parametric down conversion apparatus that feeds two fiber optic cables that are 500 meters long, which at the speed of light is equivalent to about 1600 nanoseconds. When avalanche photodiodes are set up at the remote ends of the two 500 meter fiber optic cables, about 1/20th of the time avalanche events happen within 1 nanosecond of each other. Compared to 1600 nanoseconds. The other 19/20ths of the time, there's not much of a match. We can plot the avalanche events that match within 200 nanoseconds from 2 seconds of Gregor Weihs' data:

In this plot, there's some additional information, which shows, at "Alice"'s end of the experiment, which direction an electromagnetically controlled polarization device was set at (0 or 90 degrees is one setting, 45 or 135 degrees is the other setting, which is switched at random, but on average every few hundred nanoseconds), and in which of two avalanche photodiodes there was an avalanche event (in effect choosing between 0 or 90 degree polarization or choosing between 45 or 135 degree polarization).

There are lots of events within about 1 nanosecond, there is a small excess of events that have a match within about 20 nanoseconds, then the rest are distributed evenly. Beyond the 200 nanosecond extent of this plot, the time differences between events in "Alice"'s and "Bob"'s avalanche photodiodes that match most closely are just as uniformly distributed as here, out to a difference of about 20,000 nanoseconds, then there's a slightly decreasing density, until all 9711 [edit: this number of events happened in the first 1/4 second, multiply by 8, more-or-less, for the number of events in 2 seconds] of the times of avalanche events in Alice's data are within 120,000 nanoseconds of some avalanche event in Bob's data. The graph below shows how close each of Alice's events is to the closest event in Bob's events in the same 2 second fragment of the dataset [edit: the graph is in fact for the first quarter second of data from the same run. The general features are unchanged]:
There are about 500 of Alice's events that are so close to events in Bob's events that they don't show on the graph, then most of Alice's events are, fairly uniformly, within a time difference of about 2e-5 seconds (=20,000 nanoseconds). The graph is steep at the far right because there are very few of Alice's events that are so separated in time from any of Bob's events.
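For anyone who wants to play with this kind of data, the nearest-match step is straightforward once the two lists of time-stamps are sorted. The sketch below uses made-up stand-in data rather than the actual Weihs datasets, and the names alice_times and bob_times, the counts, and the jitter are assumptions for illustration.

```python
import numpy as np

# For each of Alice's event times, find the signed time to the nearest of
# Bob's events. alice_times and bob_times stand in for the sorted lists of
# time-stamps (in seconds) that would come from the data files.

def nearest_differences(alice_times, bob_times):
    """Signed time from each Alice event to the closest Bob event (both sorted)."""
    idx = np.searchsorted(bob_times, alice_times)
    right = np.clip(idx, 0, len(bob_times) - 1)
    left = np.clip(idx - 1, 0, len(bob_times) - 1)
    d_right = bob_times[right] - alice_times
    d_left = bob_times[left] - alice_times
    return np.where(np.abs(d_right) < np.abs(d_left), d_right, d_left)

# toy stand-in data: mostly unrelated events, plus a small tightly matched subset
rng = np.random.default_rng(3)
alice_times = np.sort(rng.uniform(0.0, 0.25, size=9711))
bob_times = rng.uniform(0.0, 0.25, size=9711)
matched = rng.choice(len(alice_times), size=500, replace=False)
bob_times = np.sort(np.concatenate(
    [bob_times, alice_times[matched] + rng.normal(0.0, 0.5e-9, size=500)]))

diffs = nearest_differences(alice_times, bob_times)
print("within 1 ns:  ", np.sum(np.abs(diffs) < 1e-9))
print("within 20 us: ", np.sum(np.abs(diffs) < 2e-5))
```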

Trying to make some sense of large amounts of data, with good dollops of muddy randomness thrown in! Modulations of a random signal.

Thursday, February 25, 2010

A response to a video

I posted the comment below at The Poetry of Science (which is on Discovery News).
This is a theme! Scientists are trying to circle the wagons because of ClimateGate and the general perception that Science has got presentation problems. I think circling the wagons is not a good strategy.

Great poetry is more often not lovely. Do the video again showing how lovely are the ways that Science helps the US help Iraqis and Afghanis. Perhaps adopt the style of Wilfred Owen's Anthem for Doomed Youth.


I worry that this kind of thing doesn't step outside preaching to the converted. Lovely places are lovely without Science telling us how to turn them into something else, whether abstractly, into a theoretical idealization, or concretely, into a parking lot. Do we implicitly say, as Scientists, that there is no beauty without understanding? Worse, is there any untouched beauty?

Arrogating everything beautiful and awesome in the world to Science is an unwarranted pretension. It is also a denial that understanding everything that is ugly or inconsequential is just as much the business of Science. When what is ugly seems to be the fault of the technological and industrial use of Science, such denial is culpable, and there are many who cry bitterly at the power of Science to change the world. The double standard seems almost always to be seen through by everyone except Scientists. Is it as it appears, that we claim credit wherever Science does good but reject blame for enabling others to be uncaring, rapacious, or evil?

There are the usual sincerely meant nods to the humility of Science in this video, but exactly how Science constructively critiques its past successes is subtle enough that this claim looks only ingratiating. That a critique is only allowed to be constructive should be honestly admitted to be self-serving --- without this constraint on critique, Science would presumably soon be dead, right? --- but it's just what Science does, for as long as people see Science to be beneficial. The continued existence of Science as a highly structured pattern of behavior depends on a flow of entropy no less than do the people who depend on Science.

Monday, February 22, 2010

Is Science a "meme"?

I'm interested in the first tangent on the Forum Thread question about “Natural Phenomenon”: whether Science is a meme.

The idea that the "methods of science" are empirically successful, or that "Science" is empirically successful, is premature. The usefulness of Science and its methods is also questionable, on a long enough time frame, and depending on what you consider success. The methods of Science and the technological use of its product, Scientific theories, have arguably allowed exponential population growth and exponential increase of resource use over the last couple of Centuries, so over that time frame one can say fairly clearly that Science is a successful meme. The real question, however, is whether the human race will wipe itself out in the next hundred years or in the next thousand years or not. If we do, Science, insofar as we take it to be characteristic of us relative to other animals, is a pretty poor meme. Perhaps 20 generations. Hopeless.

Science has been subjected to a number of challenges to its value, but one of the most damning was Rachel Carson's "Silent Spring". Scientists were shown not to have understood more than a small part of the consequences of the technological and industrial use of Science. The ripples of disbelief that Science is necessarily a good thing are reflected every time a Scientist decries Global Warming and is ignored. One can say that it is technology's and industry's use of Science that is at fault, and more broadly that it is the individuals in society that are at fault for wanting washing machines, TVs, cheaply manufactured food, ..., but splitting the whole system up in that way is beside the point. Indeed, the reductionist move of saying that Science is a useful meme, war is a bad meme, ..., misses that it is the whole system that is under the knife at every moment. We cannot do much more than guess how the system will evolve, but we make wild statements about what is good or bad.

Thursday, February 11, 2010

Three changes to 'Comment on "A glance beyond the quantum model" '

As a result of correspondence with Miguel Navascués, I have communicated three changes to Proc. Roy. Soc. A,
  • to include an acknowledgment, "I am grateful for very helpful correspondence with Miguel Navascués";
  • to change the first sentence of the Summary, so that it does not make two incorrect claims about the aims of NW's paper; it would now read "“A glance beyond the quantum model” uses a modernized Correspondence Principle that begins with a discussion of particles, whereas in empirical terms particles are secondary to events.";
  • to change one sentence in my concluding paragraph to say "Navascués’ and Wunderlich’s paper requires comment where something less ambitious would have gone unchallenged", saying "comment" instead of saying "a vigorous condemnation", which I can hardly believe I wrote.
Miguel's comments were indeed very helpful. There are other changes I would like to make to my Comment, but I will only think about making them if, against likelihood, the referees recommend acceptance with changes.

For anyone reading this who hasn't submitted papers to journals, waiting for the acceptance or rejection letter is hard work, with an inhuman time-scale that is usually months but can be much shorter, depending on the vicissitudes of referees' schedules and whims. So every time an e-mail arrives from the time a paper is submitted, for several months, it may be the dreaded rejection. It's easier if you're relatively sure you've hit a sweet spot of conventional ideas that you feel sure most Physicists will get, but then the paper is perhaps not close enough to the edge to be more than a little interesting.

The Copenhagen Interpretation and thoughts that arise.

Notification e-mails are wonderful, particularly when they bring a table of contents for Studies in History and Philosophy of Modern Physics. I found the highlight this month to be James R. Henderson's "Classes of Copenhagen interpretations: Mechanisms of collapse as typologically determinative", which classifies some of the versions of the Copenhagen Interpretation quite nicely, in terms of a class of four Physicists: Bohr, von Neumann, Heisenberg, and Wheeler. Henderson is nicely careful to say that each of these four has a view of CI that has spawned its own industry of claims about what each of the big names really said and meant.
The citation is: Studies in History and Philosophy of Modern Physics 41 (2010) 1–8.

Henderson's very clear presentation points out for me how much the discussion starts with QM and tries to construct classical physics from it, because QM is supposed to be better and more fundamental than the old classical mechanics. As indeed it is, but when discussing foundations it's perhaps better not to start with such a strong assumption. Starting, conversely, from a purely classical point of view, discrete events can be taken as thermodynamic transitions, without any causal account for why they happen (that being the nature of thermodynamics, in contrast to statistical mechanics), so that Heisenberg's or Wheeler's records are the given experimental data, from which unobserved causes might be inferred. There's no question of there being a philosophically based measurement problem in this view, because there is so far no such thing as QM, there are just records in an experimenter's notebook or computer memory.

If we start from the recorded data that comes from an experiment, the question is how we come to have QM. The fundamental issue is that we have to split the world into two types of pieces, pieces that we will model with operators Si, which ordinarily we call states, and pieces that we will model with operators Mj, which ordinarily we call observables or measurement operators. When we use the piece of the world that we model with the operator Si together with the piece of the world that we model with the operator Mj and record a number Vij, we write the equation Vij=Trace[Si Mj]. When we've got a few hundred or million such numbers Vij, we solve the few hundred or million simultaneous nonlinear equations for Si and Mj.

This is a little strange, because QM is supposed to be a linear theory, but these are nonlinear equations --- if we knew what the measurement operators should be a priori, we would have a set of linear equations for the states, and vice versa if we knew what the states should be a priori, but it's not clear that we know either a priori, so in fact and in principle we have a nonlinear system of equations to solve. In practice, we solve these nonlinear equations iteratively: alternately as linear equations for the set of states, guessing what the measurement operators are, so that after a while we know what the states we are using are (a process often known as characterization), and then as linear equations for the measurement operators, but this is just an approximation method for solving a system of nonlinear equations.

Also peculiarly, the dimensions of the state and measurement operators are not determined, except by experience. In quantum optics there are some choices of experimental data for which it is enough to use 2-dimensional matrices to get good, useful models, but sometimes we have to introduce higher dimensional matrices, sometimes even infinite dimensional matrices, which is rather surprising given that we only have a finite number of numbers Vij. Indeed, given any finite set of experimental results, QM is not falsifiable, because we can always introduce a higher dimensional matrix algebra or set of abstract operators, so we can always solve the equations Vij=Trace[Si Mj].

Instead of solving the equations Vij=Trace[Si Mj], we can minimize the distance between Vij and Trace[Si Mj], using whatever norm we think is most useful. With this modification, we introduce an interesting question: what dimensionalities give us good models? We might find that 2-dimensional matrices give us wonderfully effective models even for millions of data points, in which case we might be tempted not to introduce higher dimensional matrices. Higher dimensionality will certainly allow greater accuracy, a smaller distance between the data and the models, but it may not be worth the trouble. If we find that some matrix dimensions work very well indeed for a given class of experiments, however, we are tempted to think that the world is that way, even though a better model that we haven't discovered yet may be possible.
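A minimal sketch of that alternating procedure, under simplifying assumptions: each Si and Mj is represented only by its real coefficient vector in an orthonormal basis of Hermitian matrices, so that Trace[Si Mj] becomes a Euclidean dot product, and the positivity and trace constraints that physical states and measurements satisfy are ignored. The data V below is random, just to have something to fit; everything here is illustrative rather than any standard tomography routine.

```python
import numpy as np

# Alternating linear least squares for V_ij ~ Trace[S_i M_j].
# Each S_i and M_j is a real coefficient vector in an orthonormal Hermitian
# basis, so the trace is a dot product; positivity/trace constraints ignored.

rng = np.random.default_rng(0)
dim = 2                      # try 2x2 matrices first
k = dim * dim                # real dimension of the space of Hermitian matrices
n_states, n_meas = 20, 20

V = rng.normal(size=(n_states, n_meas))   # stand-in for the recorded numbers V_ij
S = rng.normal(size=(n_states, k))        # initial guess for the states
M = rng.normal(size=(n_meas, k))          # initial guess for the measurements

for sweep in range(200):
    # fix the measurements, solve linear least squares for the states
    S = np.linalg.lstsq(M, V.T, rcond=None)[0].T
    # fix the states, solve linear least squares for the measurements
    M = np.linalg.lstsq(S, V, rcond=None)[0].T

print("residual with dim =", dim, ":", np.linalg.norm(V - S @ M.T))
```

Raising dim (and hence k) always lowers the residual, which is the dimension-versus-accuracy trade-off just mentioned, and is also why a large enough matrix algebra can always be made to fit any finite set of numbers Vij.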


This is far from being all of QM. We can also introduce transformations that can be used to model the effect of intermediate pieces of the world, so that we effectively split the world into more than two pieces, and we can introduce transformations that model moving pieces of the world relative to each other instead of introducing different operators for different configurations.


The point, I suppose, is that this is a very empirical approach to what QM does for us. There is a fundamental critique that I think QM in principle may not be able to answer, which is that when we use a piece of the world S1 with two pieces of the world M1 and M2, one after the other, we cannot be sure that the piece of the world that we model using S1 is not changed, so we appear to have no warrant for using the same S1 in two equations, V11=Trace[S1 M1] and V12=Trace[S1 M2]. We can say that we can use the same operator in the two different experimental contexts as a pragmatic matter, which we have to do to obtain a set of equations that we can solve for the states and measurement operators, but ultimately everything surely interacts with everything else, indeed QM is more insistent about this than is classical Physics, so as a matter of principle we cannot use the same operator, we cannot solve the equations, and we cannot construct ultimately detailed QM models for experimental apparatuses.


Finally, this way to construct QM does not explain much. For that we have to introduce something between the preparation apparatus and the measurement apparatus. The traditional way of talking is in terms of particle trajectories, but that can only be made to work by introducing many prevarications, the detailed content of which can be organized fairly well in terms of path integrals. An alternative, random fields, a mathematically decent way to bring probability measures and classical fields into a single structure, is the ultimate topic of this blog.

Tuesday, February 09, 2010

This is just a geeky "wow!" post. The engineering side of graphene is starting to really move along. Solid State is just going from strength to strength, while foundations, particle physics, etc., struggle away.
 
 
Special issue on Graphene
   A F Morpurgo and B Trauzettel
   2010 Semicond. Sci. Technol. 25 030301 (1p)
   Abstract: http://www.iop.org/EJ/abstract/-alert=2643/0268-1242/25/3/030301
   Full text PDF: http://www.iop.org/EJ/article/-alert=2643/0268-1242/25/3/030301/sst10_3_030301.pdf

     Since the revolutionary experimental discovery of graphene in the year 2004, research on this new two-dimensional carbon allotrope has progressed at a spectacular pace. The impact of graphene on different areas of research, including physics, chemistry, and applied sciences, is only now starting to be fully appreciated.
     There are different factors that make graphene a truly impressive system. Regarding nano-electronics and related fields, for instance, it is the exceptional electronic and mechanical properties that yield very high room-temperature mobility values, due to the particular band structure, the material 'cleanliness' (very low concentration of impurities), as well as its stiffness. Also interesting is the possibility to have a high electrical conductivity and optical transparency, a combination which cannot be easily found in other material systems. For other fields, other properties could be mentioned, many of which are currently being explored. In the first years following this discovery, research on graphene has mainly focused on the fundamental physics aspects, triggered by the fact that electrons in graphene behave as Dirac fermions due to their interaction with the ions of the honeycomb lattice. This direction has led to the discovery of new phenomena such as Klein tunneling in a solid state system and the so-called half-integer quantum Hall effect due to a special type of Berry phase that appears in graphene. It has also led to the appreciation of thicker layers of graphene, which also have outstanding new properties of great interest in their own right (e.g., bilayer graphene, which supports chiral quasiparticles that, contrary to Dirac electrons, are not massless). Now the time is coming to deepen our knowledge and improve our control of the material properties, which is a key aspect to take one step further towards applications.
     The articles in the Semiconductor Science and Technology Graphene special issue deal with a diversity of topics and effectively reflect the status of different areas of graphene research. The excitonic condensation in a double graphene system is discussed by Kharitonov and Efetov. Borca et al report on a method to fabricate and characterize graphene monolayers epitaxially grown on Ru(0001).
     Furthermore, the energy and transport gaps in etched graphene nanoribbons are analyzed experimentally by Molitor et al. Mucha-Kruczynski et al review the tight-binding model of bilayer graphene, whereas Wurm et al focus on a theoretical description of the Aharonov-Bohm effect in monolayer graphene rings. Screening effects and collective excitations are studied by Roldan et al.
     Subsequently, Palacios et al review the electronic and magnetic structures of graphene nanoribbons, a problem that is highly relevant for graphene-based transistors. Klein tunneling in single and multiple barriers in graphene is the topic of the review article by Pereira Jr et al, while De Martino and Egger discuss the spectrum of a magnetic quantum dot in graphene. Titov et al study the effect of resonant scatterers on the local density of states in a rectangular graphene setup with metallic leads. Finally, the resistance modulation of multilayer graphene controlled by gate electric fields is experimentally analyzed by Miyazaki et al. We would like to thank all the authors for their contributions, which combine new results and pedagogical discussions of the state-of-the-art in different areas: it is this combination that most often adds to the value of topical issues. Special thanks also goes to the staff of Institute of Physics Publishing for contributing to the success of this effort.