Tuesday, April 30, 2013

Students should learn the method of steepest descent

Today I gave a lecture to a Solid State Physics class on "Magnetic quantum oscillations and mapping out the Fermi surface." I basically followed chapter 14 of Ashcroft and Mermin.

The central result is Onsager's 1952 relation: the period of the magnetic oscillations in inverse field (1/H) is determined by the extremal cross-sectional areas of the Fermi surface perpendicular to the magnetic field.
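In equation form (Gaussian units, following Ashcroft and Mermin), the oscillations are periodic in 1/H, with the period set by an extremal cross-sectional area A_e of the Fermi surface:

```latex
\Delta\!\left(\frac{1}{H}\right) = \frac{2\pi e}{\hbar c}\,\frac{1}{A_e}
```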

[Aside: This is an amazing result because it only involves fundamental constants and so the interpretation of the experiments is not "theory laden", a rare thing in condensed matter].

There is one point I struggle to explain: why extremal areas?
Ashcroft and Mermin have a figure to justify this. It is lost on me, no matter how many times I read it and stare at the pictures.

Does anyone know a clear and convincing way to demonstrate this?

The only way I know to get this result of extremal areas is to do a very fancy calculation (Lifshitz-Kosevich), which evaluates the magnetisation (or thermodynamic potential or partition function) by summing over all the Landau levels and integrating over the momentum component parallel to the field. One then evaluates the last integral using the method of steepest descent (saddle-point approximation), which picks out the extremal areas of the Fermi surface.
However, this is way beyond what one should be doing at this level (final year undergraduate).
Furthermore, the students said they had never encountered the method of steepest descent before.
That is reasonable since neither did I as an undergraduate in Australia.
I only learnt it in graduate school, after learning how to evaluate Feynman path integrals by this method. Only later did I learn that it also applies to simple one-dimensional integrals!
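For the curious, here is the essence of that argument (a schematic sketch, suppressing numerical constants and the sum over harmonics). The oscillatory part of the thermodynamic potential involves an integral over k_z, the momentum component parallel to the field:

```latex
\Omega_{\rm osc} \sim \int dk_z \, \cos\!\left[\frac{\hbar c}{eH}\, A(k_z) + \phi\right]
```

where A(k_z) is the cross-sectional area of the Fermi surface at height k_z. When A/H is large the cosine oscillates rapidly as a function of k_z and contributions cancel, except near points where dA/dk_z = 0, i.e. the extremal cross sections. Expanding A(k_z) ≈ A_e + ½ A''(k_z - k_e)² about such a point gives a Gaussian (Fresnel) integral, and hence the characteristic 1/√|A''| prefactor of Lifshitz-Kosevich.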

Should undergraduate students learn the method of steepest descent?
In mathematics and/or physics?

Monday, April 29, 2013

When is a property emergent?

I write posts with titles such as "The Dirac cone in graphene is emergent" and "Holes are emergent quasi-particles".
What do I mean?
To me emergent properties and phenomena have the following distinguishing characteristics.

1. They are collective phenomena resulting from the interactions between the constituent particles of the system and occur at different length/energy/time scales.
For example, superconductivity results from interactions between the electrons and ions in a solid and involves energy (temperature) scales much less than the underlying interaction energies.

2. They are qualitatively different from the properties of the constituent particles.
For example, an individual gold atom is not "shiny"; a metallic gold crystal is. One cannot speak about superfluidity of individual (or small groups of) atoms.

3. The property is difficult (or almost impossible) to anticipate or predict from a knowledge of the microscopic constituents and the associated laws. In particular, emergent properties and phenomena (especially new phases of matter) are almost always observed experimentally first before they are explained theoretically. They are often discovered by serendipity.

4. The property is weakly dependent on microscopic details and can occur in a chemically and structurally diverse range of systems.
For example, many different metals are "shiny". Adding impurities or changing the mass of the electron has little effect. One can observe superfluidity in liquid helium and in cold atomic gases.

5. Understanding and describing the property involves introducing new concepts and organising principles. For example, symmetry breaking and order parameters.

Some of these ideas are contained in an old post, "Illustrating emergence with an example from geometry," which generated some nice comments.
I thank Fei Zhan for asking me the question.

I welcome comments.
Do you think the characteristics above are reasonable criteria?
How might they be sharpened or modified?

Saturday, April 27, 2013

When a Dean fakes data

The New York Times Magazine has a fascinating and disturbing article, The Mind of a Con Man, about Diederik Stapel, former Dean of Behavioural and Social Sciences at Tilburg University in the Netherlands. He had a stellar academic career that was built on fabricated experimental data.

The article is rather long but worth reading. Here are a few of the extracts I found particularly pertinent:
Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. 
In his early years of research — when he supposedly collected real experimental data — Stapel wrote papers laying out complicated and messy relationships between multiple variables. He soon realized that journal editors preferred simplicity. 
What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman. I am on the road. People are on the road with their talk. With the same talk. It’s like a circus.”  
Stapel’s atypical practice of collecting data for his graduate students wasn’t questioned. [How many Deans do that?]
[The official report from the University stated] The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be.
It may be tempting for physicists and chemists to look down our noses at the social scientists, but I think these issues are just as pertinent for us. Don't forget Hendrik Schön!

As Kauzmann said: we tend to believe what we want rather than what the data tells us we should believe. Often the data is messy and inconclusive.

Friday, April 26, 2013

A refreshing experience

Recently I was asked by a university to evaluate an individual for tenure and promotion. The process was fascinating and I found it refreshing.

The university sent me copies of a selection of the individual's papers and a copy of a short CV. I was asked to review the scientific merit of the papers and on that basis comment on the suitability of the individual for tenure. There was no discussion of grant money received, number of Ph.D students graduated, number of publications, citation metrics, university "service", public outreach, journal impact factors, speaking invitations, .....

I found this refreshing because it was in striking contrast to the values and emphasis of most institutions, which are very concerned with these other criteria, ones I consider secondary.

Wednesday, April 24, 2013

Did Fritz London surpass Einstein and Bohr?

I learned today of two impressive endorsements of Fritz London as a great theoretical physicist.

First, after John Bardeen won his second Nobel Prize, he used the prize money to endow the Fritz London lectures at Duke University.

Second, in 2005 Phil Anderson wrote an essay in Nature, Thinking big, which lauded London for having the vision that quantum theory is correct on all length scales, including the macroscopic, as manifested in superconductivity and superfluidity. Furthermore, Anderson argues that this allows a "common sense" understanding of the quantum measurement problem.

London's vision is contrasted with that of Bohr and Einstein, of whom the "thoughtful curmudgeon" says
In reading about these [Einstein-Bohr] debates I have the sensation of being a small boy who spots not one, but two undressed emperors. Niels Bohr’s ‘complementarity principle’ — that there are two incompatible but equally correct ways of looking at things — was merely a way of using his prestige to promulgate a dubious philosophical view that would keep physicists working with the wonderful apparatus of quantum theory. 
Albert Einstein comes off a little better because he at least saw that what Bohr had to say was philosophically nonsense. But Einstein’s greatest mistake was that he assumed that Bohr was right — that there is no alternative to complementarity and therefore that quantum mechanics must be wrong. This was a far greater mistake, as we now know, than the cosmological constant.
I first learnt of these two endorsements in a beautiful essay by David Pines, Emergent behavior in quantum matter.

Aside: London's theory of the van der Waals interaction may have been the first case of deriving an effective low-energy Hamiltonian by integrating out high energy states, as discussed in the last point of this post.

Tuesday, April 23, 2013

Holes are emergent quasi-particles

When I first taught Solid State Physics [following Ashcroft and Mermin] I would introduce holes as the absence of an electron. I would then discuss the effective mass of "electrons" and holes near the bottom and top of bands, respectively. I would not introduce the concept of a quasi-particle until several weeks later, when I discussed electron-electron interactions and Landau's Fermi liquid theory.

Now I do it differently.
I explain how holes are an example of a quasi-particle with a positive charge and an effective mass [which can be significantly larger or smaller than the free electron mass].
This is a nice "simple" example of emergence. When you put interacting particles together ["non-interacting" electrons interacting with nuclei in a periodic lattice], new entities emerge that have properties qualitatively different from those of the constituent particles.
[Aside: it is interesting that the only "interactions" between the electrons themselves are those associated with Fermi statistics]
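A compact way to present the hole as a quasi-particle (standard textbook relations): removing an electron from the Bloch state k_e of a filled band leaves an excitation with

```latex
\mathbf{k}_h = -\mathbf{k}_e, \qquad
\varepsilon_h(\mathbf{k}_h) = -\,\varepsilon_e(\mathbf{k}_e), \qquad
q_h = +e, \qquad
m_h^{*} = -\,m_e^{*} > 0 \ \ \text{near the top of a band}.
```

The hole thus carries a positive charge and a positive effective mass, even though every microscopic constituent is a negatively charged electron.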

As an aside, I then try to get students to think about some of the philosophical issues, asking them to vote on and discuss the following questions:

Do you believe electrons exist? Are they real? Why?

Do you believe holes exist? Are they real? Why?

Monday, April 22, 2013

5 Grand challenges for condensed matter science

In 2007 an advisory committee for the USA Department of Energy published a report Directing matter and energy: 5 challenges for science and the imagination.

They decided a "grand challenge" must
  • be scientifically deep and demanding
  • be clear and well defined
  • be relevant to the broad portfolio of basic energy sciences
  • promise real dividends in devices or methods that can significantly improve the quality of life and help provide a secure energy future for the US.
Here are their five grand challenges:

1. Control material processes at the level of electrons.

2. Design and perfect atom- and energy-efficient syntheses of new forms of matter with tailored properties.

3. Understand and control the remarkable properties of matter that emerge from complex correlations of atomic and electronic constituents.

4. Master energy and information on the nanoscale to create new technologies with capabilities rivaling those of living things.

5. Characterize and control matter away—especially far away—from equilibrium.

A good introduction to the full report is the 2008 Physics Today article, co-authored by Graham Fleming and Mark Ratner, co-chairs of the DoE committee.

It is six years since the report was written, but the challenges remain the same.

To me these 5 challenges are actually broader than the "basic energy sciences" that the DoE should fund. In fact, they define what the research agenda for the chemistry and physics of condensed phases of matter should be in any country.

Double check your results again...

Otherwise it may lead to economic ruin, or at least bad government policy!
Well, not for scientists.
Earlier I posted that Mistakes happen in research, describing one of my recent ones.

Paul Krugman has a nice New York Times op-ed piece, The Excel Depression, which describes a flawed paper by two Harvard economists that led to a widely quoted claim: once government debt exceeds 90 per cent of Gross Domestic Product (GDP), economic growth declines sharply.

One of the mistakes the authors made was an Excel spreadsheet coding error that meant they left five countries (including Australia!) out of their analysis. You can read a more detailed discussion here.
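To make concrete what kind of error this is, here is a deliberately simplified sketch in Python. The growth numbers and country list are invented for illustration (they are NOT the actual Reinhart-Rogoff data); the point is only that a formula summing over a truncated range silently changes the answer.

```python
# Hypothetical illustration of a "truncated range" averaging error.
# The numbers below are invented; they are NOT the Reinhart-Rogoff data.

growth_rates = {
    "Australia": 3.2, "Austria": 1.9, "Belgium": 2.6, "Canada": 2.2,
    "Denmark": 2.1, "Greece": -0.3, "Italy": 0.8, "Japan": 0.7,
    "New Zealand": 2.6, "UK": 2.4, "US": 2.9,
}

countries = sorted(growth_rates)  # 11 countries, alphabetical order

full_mean = sum(growth_rates[c] for c in countries) / len(countries)

# A spreadsheet formula like AVERAGE(B1:B6) instead of AVERAGE(B1:B11)
# silently drops the last five rows -- the analogue of the Excel error.
subset = countries[:-5]
truncated_mean = sum(growth_rates[c] for c in subset) / len(subset)

print(f"Mean over all {len(countries)} countries: {full_mean:.2f}%")
print(f"Mean with 5 countries dropped:   {truncated_mean:.2f}%")
```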

Thursday, April 18, 2013

I talked about that last time

I find it mildly irritating when speakers at conferences or seminar series say something like, "I am not going to explain all this background material or my earlier results because last year I talked about that."

There are several reasons I don't like this.
1. It assumes that everyone present was there last time.
2. It assumes that those present on the last occasion understood and remember the relevant material! [To me that is fantasy.]
3. It seems to be shutting off certain topics for questions and discussion.

Obviously, one can't cover everything and must leave out material. Everyone knows and accepts that. But, I think it is better not to deal with the problem in this way.

Wednesday, April 17, 2013

Shear viscosity of bad metals

Until a few years ago the shear viscosity of a metal was not a topic of interest. However, that has changed, largely stimulated by some calculations based on string theory techniques! The history is described here.

In particular, these calculations suggest that for a quantum critical metal the ratio of the shear viscosity to the entropy density has a universal lower bound, hbar/(4 pi k_B).
Calculations for graphene suggest the ratio is close to the lower bound leading it to be dubbed "a nearly perfect fluid".
Recent experiments on fermionic cold atoms find the ratio is several times the universal minimum.
The quark-gluon plasma is close to the minimum.
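In equation form, the conjectured lower bound (due to Kovtun, Son, and Starinets) on the ratio of the shear viscosity η to the entropy density s is:

```latex
\frac{\eta}{s} \;\geq\; \frac{\hbar}{4\pi k_B}
```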

Somehow this "minimum viscosity" (which in simple kinetic theory scales with the relaxation time) is related to a minimum conductivity, and thus the somewhat elusive and poorly defined Mott-Ioffe-Regel limit, which bad metals comfortably violate.
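Here is the heuristic connection, as far as I understand it (order-of-magnitude kinetic theory only): for a degenerate Fermi liquid, both the shear viscosity and the conductivity are controlled by the same transport time τ,

```latex
\eta \sim n\, m\, v_F^{2}\, \tau = n\, p_F\, \ell,
\qquad
\sigma = \frac{n e^{2} \tau}{m},
```

so a minimum viscosity corresponds to a minimum τ or mean free path ℓ; demanding ℓ ≳ 1/k_F is precisely the Mott-Ioffe-Regel criterion.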

The exact relationship between viscosity [a hydrodynamic concept] and conductivity is a rather subtle one I don't understand. Some of the issues for strongly correlated systems are discussed by Andreev, Kivelson, and Spivak.
I welcome clarifications.

I am only aware of a few model calculations of the shear viscosity of metals, starting from model Hamiltonians.
In 1958 Steinberg did it for the Sommerfeld model, including electron-phonon scattering.
Calculations for ferromagnetic spin fluctuations (paramagnon model) are reviewed by Beal-Monod.

It would be nice to see some calculations of the shear viscosity for the bad metal state of a Hubbard model using a technique such as Dynamical Mean-Field Theory.

In July I am going to a workshop in Korea on "Bad metals and Mott criticality" and am sure these issues will be discussed extensively there.

But, in the meantime I would love to generate some discussion on this issue.

Tuesday, April 16, 2013

Effective "Hamiltonians" for the stock market

Why do so many physicists end up working on Wall Street?
Is it an accident that one of the most successful hedge funds ever was founded by James Simons, known to many of us from Chern-Simons theory?
What unique insights might physicists bring to modelling financial markets?
Lots of fields use mathematical models to understand the world. But physicists have a particular way of thinking about approximation and idealization. To make progress on interesting problems, physicists always have to make assumptions and approximations. They work with the zero temperature limit, or the thermodynamic limit, or the mean field approximation.  They are trained to think these approximations through–to justify their assumptions with physical arguments. Most importantly, physicists are taught how to think about what happens when their assumptions fail. They are taught to calculate, or at least estimate, the second order corrections to their first order equations. 
This is from a fascinating article Fisics and Phynance on The Back Page of the APS News. It is by James Owen Weatherall, the author of The Physics of Wall Street. 
[Aside: see a review by science blogger Chad Orzel and the critical New York Times review].

Implicit in his discussion is the importance of "effective" theories for emergent phenomena, i.e., a description of the dynamics at some coarse-grained level.

Monday, April 15, 2013

The Dirac cone in graphene is emergent

The nature of the gapless excitations in a quantum many-body system, including any underlying order, is an emergent property. A simple "classical" example is that sound waves in crystals result from the breaking of continuous translational symmetry. More profound examples are the Goldstone modes associated with spontaneously broken symmetry and the edge states associated with topological order in fractional quantum Hall states. Furthermore, the presence and gapless character of these excitations are particularly robust against perturbations and variations in microscopic details. Laughlin and Pines dub such a property a quantum protectorate.

A key property of graphene is the Dirac cone that describes the elementary electronic excitations. This can be "derived" by considering the band structure of a tight-binding model on the honeycomb lattice [a hexagonal lattice with two identical atoms per unit cell]. Hence, one might think that the Dirac cone is "just" a one-electron effect and not an emergent phenomenon arising from many-particle interactions. I have said and thought that in the past. However, real graphene involves interacting electrons and the presence of disorder and impurities. The Dirac cone is quite robust against these additional interactions.
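For reference, the standard tight-binding result: with nearest-neighbour hopping t on the honeycomb lattice, the two bands are

```latex
E_{\pm}(\mathbf{k}) = \pm\, t \left| 1 + e^{i\mathbf{k}\cdot\mathbf{a}_1} + e^{i\mathbf{k}\cdot\mathbf{a}_2} \right|,
```

where a_1 and a_2 are the lattice vectors. The bands touch at the two inequivalent Brillouin zone corners K and K', and expanding about these points gives the Dirac cone E ≈ ±ħ v_F |q|, with v_F = 3ta/2ħ and a the carbon-carbon distance.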


If the electron-electron interactions were stronger in graphene it would be an insulator. If the interactions were purely short range and described by a Hubbard model, then undoped graphene with a larger lattice constant [and so a larger U/t] would be a Mott insulator, and possibly a spin liquid. In reality there are longer range screened Coulomb interactions. The emergence of the metallic state [and the quantum criticality associated with the Dirac cone] is an extremely subtle and profound question, nicely discussed in a Physics article by Igor Herbut. There is also an intriguing proposal that free standing graphene may be an insulator (Pauling's dream!) because of the reduced screening due to the absence of a substrate.

Aside: somewhat similar issues are associated with the validity of the band structure picture for simple metals. That it works at all is for the profound reasons captured by Landau's Fermi liquid theory.

The fact that the Dirac cone and metallicity survive the presence of disorder is also a subtle question. In a conventional two-dimensional metal, weak localisation leads to an "insulating state" at zero temperature. Graphene is different. There are also interesting questions about the "minimum metallic conductivity" that I need to learn about.

This post was inspired by a nice colloquium that Michael Fuhrer gave at UQ on Friday.

Friday, April 12, 2013

Physics in prime time TV

My family has been enjoying watching the comedy TV series 3rd Rock from the Sun from the 1990s. We thank Jure Kokalj for introducing us to it. The main character, Dick Solomon, is a physics professor, so occasionally there is some physics. Unfortunately, some of the "physics" is just mumbo-jumbo. This is unlike The Big Bang Theory, which has accurate physics.

Nevertheless, this clip has some pretty classic physics lines.

Pure plutonium is a strongly correlated metal

I often contrast elemental metals with strongly correlated electron materials such as cuprates, organic charge transfer salts, and heavy fermion compounds. However, this is not strictly correct. Some of the lanthanide and actinide elements are strongly correlated metals. This is most clearly demonstrated in the case of cerium and plutonium, which undergo isostructural phase transitions involving large volume changes.

This is particularly nicely illustrated in the figure below, taken from a beautiful 2001 Nature news and views by Bob Albers, An expanding view of plutonium. Electronic structure methods based on Density Functional Theory (DFT) completely fail here.
A nice explanation of the figure was given by Savrasov, Kotliar, and Abrahams, in terms of strong electronic correlations that can be captured by Dynamical Mean-Field Theory (DMFT). In particular the expansion from the alpha to the delta phase is associated with a delocalised-localised transition of the f electrons. The lighter actinides have more delocalised f electrons leading to stronger chemical bonding and smaller volumes per atom in the crystal.

The strong correlations are reflected in other properties of plutonium.
This is highlighted in a nice review Plutonium condensed matter physics by Michael Boring and Jim Smith, which contrasts Pu to lighter elemental metals and heavy fermion compounds.

The figure below shows the temperature dependence of the thermal expansion of Pu and iron. Note that the magnitude of the slope is much greater for Pu.
The resistivity of plutonium is compared to that of potassium (K) and the heavy fermion compound UBe13. The resistivity of Pu is non-monotonic, saturating at a value comparable to the Mott limit, characteristic of a bad metal.

Thursday, April 11, 2013

Learning how bees navigate and make smooth landings

Last Friday I attended a very nice physics colloquium by Mandyam Srinivasan on "Honeybees as a Model for the Study of Visually Guided Flight, Navigation, and Biologically Inspired Robotics".

Here are two fascinating things that really stood out to me. The picture below [taken from this review] gives a schematic of the experiment showing that image flow is the key property that bees balance to navigate obstacles.
Building on this led to insights as to how bees make smooth landings.
Analysis of the landing trajectories revealed that the flight speed of the bee decreases steadily as she approaches the surface. In fact, both the forward speed and the descent speed are approximately proportional to the height above the surface (Fig. 8), indicating that the bee is holding the angular velocity of the image of the surface approximately constant as the surface is approached. This strategy automatically ensures that both the forward and the descent speeds are close to zero at touchdown. Thus a smooth landing is achieved by an elegant and surprisingly simple process that does not require explicit knowledge of the bee's instantaneous speed or height (250).
This analysis leads to two linear first-order differential equations, which can be solved to predict that the bee's height decreases exponentially with time. This is indeed observed to be the case. [A colleague commented that the agreement between experiment and theory was much more impressive than in the average biophysics research.]
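The mathematics behind this is pleasingly simple (a minimal sketch of the model just described): if the bee fixes the angular velocity ω of the image of the ground, then the descent speed is proportional to the height h,

```latex
\frac{dh}{dt} = -\,\omega\, h
\quad\Longrightarrow\quad
h(t) = h_0\, e^{-\omega t},
```

and since the forward speed is also proportional to h, both speeds decay exponentially and vanish smoothly at touchdown.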

Srinivasan then went on to describe how some of these ideas are being implemented in algorithms for automated flight.
He stressed that in his view one should not follow a biomimetic strategy but rather a bio-principled one, i.e., finding what principles are used in nature [e.g. constant image flow] and using appropriately adapted implementations in artificial flight.

Wednesday, April 10, 2013

The dark side of open access

The last post was about spam.
Unfortunately, this one is too.
It turns out there is a dark side to open access journals.
This is covered in a fascinating and disturbing New York Times article Scientific articles accepted (Personal checks too).
Think twice when you receive an invitation to submit an article, speak at a conference, or serve on an editorial board from an organisation you have not had prior dealings with.
I get a lot of this spam and it almost all automatically goes to my Junk mail folder. But, I did not realise just how bad the problem is, particularly for those who are duped.

Monday, April 8, 2013

Filtering postdoc application spam

Eric Bittner suggested I post about the following problem:

Pretty much every day I receive email solicitations from potential post-doc applicants (I presume you do as well).  Most of these look as if they are on some sort of fishing expedition since it's clear from their description of their research interests and expertise, they have not even bothered looking at any of my papers, read my web-site, or have any idea of what I might be working on.  As rude as it seems, my response is to give them the same amount of time they spent looking into my research before hitting the delete button on my email browser.  
I'm thinking of making a link on my  group page for potential post-doc applicants with a list of things--which would include a brief research outline in addition to their CV--that postdoc and grad students should send to potential advisors.
Does anyone who reads your forum have a better suggestion? What's the best (and polite) way to filter robo-postdocs? 
I probably only average one or two of these "applications" per week. I also just delete them. I doubt that putting a relevant link on the group web page will achieve much.

There is a flip side to this annoying problem. If you are interested in a postdoc with a faculty member and you send them a carefully crafted email which shows you have actually read some of their papers and thought about how you might fit into their research program, it may actually get their attention.

In 1987 Phil Anderson wrote a Physics Today column, Advice on applying for a postdoc, which discusses related issues (before email!). Faculty need to guide their students through the process. Students need to be realistic, well-informed and focussed.

One thing I do find disturbing about all this: there are people who have "earned" a Ph.D but seem incapable of doing basic "research" about what kind of person might hire them.

Saturday, April 6, 2013

Documenting wasted effort

After drowning in paperwork I asked Do grant applications ever get shorter?
I was interested to learn that several researchers from down the river at QUT recently had a letter published in Nature, Funding: Australia's grant system wastes time. They focussed on the National Health and Medical Research Council, claiming:
Australian scientists spent around 550 years' worth of research time writing applications for project grants in 2012, many of which were more than 100 pages long. 
Yet 400 years' worth of that valuable research time was wasted because only around 20 per cent of applications were successful. 
Applicants spent an average of 34 days preparing their proposals, at a combined estimated salary cost of $66 million.
Unfortunately, I doubt this is much different from most countries. I review applications from overseas which look just as bulky, are full of frivolous information, and have comparably low success rates.

Friday, April 5, 2013

A challenge for the theory of organic superconductors

A straightforward measurement for any superconductor is how the transition temperature varies with pressure (and thus volume). Calculating this variation is not easy. For example, in a simple BCS superconductor one would have to calculate how the phonon spectrum, the electron-phonon coupling constant, and the density of states vary with pressure. Presumably this can be done with some electronic structure method, such as one based on density functional theory (DFT).
But, how about in a strongly correlated electron system?  One would have to first find how the parameters in some Hubbard type model varied with pressure. Then one has the extremely tricky issue of calculating Tc as a function of the Hubbard model parameters.
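For contrast, in the BCS case the required ingredients appear explicitly in the standard weak-coupling expression for the transition temperature,

```latex
k_B T_c \simeq 1.13\, \hbar\omega_D\, e^{-1/(N(0)V)},
```

where ω_D is the Debye frequency, N(0) the density of states at the Fermi energy, and V the pairing interaction (not to be confused with the volume); each of these is in principle accessible to first-principles calculation. No comparably simple formula exists for the strongly correlated case.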

It turns out that the volume dependence of Tc in organics is dramatically larger than in cuprates and elemental superconductors.

I was recently drawn to the paragraph below from this paper by a football team (11 authors!):
Comparative thermal-expansion study of β″-(ET)2SF5CH2CF2SO3 and 
κ-(ET)2Cu(NCS)2: Uniaxial pressure coefficients of Tc and upper critical fields


Note that simple models (e.g., Sommerfeld, Debye) predict that energy scales such as the Fermi energy and Debye temperature scale with the volume according to some exponent of order one. Hence, the exponent of 40 for the organics' Tc is amazing!
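One way to quantify this (the notation here is mine): define the logarithmic volume derivative

```latex
\Gamma \equiv \frac{\partial \ln T_c}{\partial \ln V}.
```

For a Sommerfeld metal the Fermi energy scales as E_F ∝ V^{-2/3}, so exponents of order one are natural; a |Γ| of order 40, as quoted for the organics, is what makes the observation so striking.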

I disagree somewhat with the conclusion of the last sentence of the paragraph: that this extreme sensitivity "underlies the role of the lattice degrees of freedom for the superconducting instability for this class of materials." I don't think this shows that electron-phonon coupling is involved directly in superconductivity. It could just be that Tc is quite sensitive to the Hubbard model parameters. Small variations in the latter can drive the system closer to, or further from, the Mott transition.

Yet, the challenge remains to calculate this volume sensitivity from theory!
A very modest but useful first step would be to see if a DFT-based calculation can reproduce the measured bulk compressibility.

Thursday, April 4, 2013

Quantum commercialisation

Yesterday I was looking up the phone number of a local business in the White Pages for Brisbane. I leafed past Q and was encouraged to see how many local businesses involved quantum technologies. Names I saw included

Quantum quartz
Quantum mechanical
Quantum chemicals
Quantum trading distribution services
Quantum racing industries
Quantum reading learning vision....

Sorry I did not post this on the first day of the month....

Tuesday, April 2, 2013

Help!

I have quite a few conversations with students and postdocs who are floundering because of (what they perceive as) inadequate supervision and feedback.
What should they do?
What should I tell them?

First, it is hard for me to discern how much of the problem lies with the supervisee and how much with the supervisor. This is particularly so when I don't really know the parties involved or the particular research area. I am also reluctant to get involved in problems possibly created by other faculty colleagues.

It is hard to know how to move forward.
But, here are a few basic things that I sometimes tell people and that I think may help.

1. Take responsibility and take action. The clock is ticking. Don't wait for someone else to do something.

2. Write. Putting things down on paper can help, whether it is the details of a stalled calculation, a draft of a paper or thesis chapter, or a new research idea. This can sometimes clarify your thinking to the point that further feedback or supervision is not required.

3. Rewrite. Checking and polishing what you wrote can again clarify your thinking.

4. Talk. Find other people to talk to about what you are working on. The more the better. Try other students and postdocs and faculty. Just the process of explaining what you are trying to do can clarify your thinking, even if the other person does not understand it.
Finding helpful people can require some effort and courage.

5. Pause. Next time, think twice before you sign up to work with someone. Make sure you check out their reputation as a supervisor.

6. Be thankful. This difficult experience may refine the process of you becoming an independent researcher.

I welcome other suggestions.