Thursday, December 22, 2016

Are power laws good for anything?

It is rather amazing that many complex systems, ranging from proteins to stock markets to cities, exhibit power laws, sometimes over many decades.
A critical review is here, which contains the figure below.

Complexity theory makes much of these power laws.

But, sometimes I wonder what the power laws really tell us, and particularly whether for social and economic issues they are good for anything.
Recently, I learnt of a fascinating case. Admittedly, it does not rely on the exact mathematical details (e.g. the value of the power law exponent!).

The case is described in an article by Dudley Herschbach,
Understanding the outstanding: Zipf's law and positive deviance
and in the book Aid on the Edge of Chaos, by Ben Ramalingam.

Here is the basic idea. Suppose that you have a system of many weakly interacting (random) components. Based on the central limit theorem one would expect that a particular random variable would obey a normal (Gaussian) distribution. This means that large deviations from the mean are extremely unlikely. However, now suppose that the system is "complex" and the components are strongly interacting. Then the probability distribution of the variable may obey a power law. In particular, this means that large deviations from the mean can have a probability that is orders of magnitude larger than they would be if the distribution was "normal".
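
To make the contrast concrete, here is a minimal Python sketch comparing tail probabilities for a Gaussian and for a power-law (Pareto) distribution. The choice of Pareto form, the exponent alpha = 2.5, and the scale x_min are illustrative choices of mine, not values taken from the review.

import math

def gaussian_tail(k):
    # P(X > mean + k standard deviations) for a normal distribution
    return 0.5 * math.erfc(k / math.sqrt(2))

def power_law_tail(k, alpha=2.5, x_min=1.0):
    # P(X > k) = (x_min/k)^alpha for a Pareto distribution (k in units of x_min)
    return (x_min / k) ** alpha if k > x_min else 1.0

for k in [2, 5, 10]:
    g, p = gaussian_tail(k), power_law_tail(k)
    print(f"k = {k:2d}: Gaussian tail = {g:.1e}, power-law tail = {p:.1e}, ratio = {p/g:.1e}")

For k = 10 the power-law tail is about twenty orders of magnitude larger than the Gaussian one: "ten sigma" events that a Gaussian model declares essentially impossible are quite ordinary under a power law.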

Now, let's make this concrete. Suppose one goes to a poor country and looks at the weight of young children. One will find that the average weight is significantly smaller than in an affluent country and, most importantly, less than is healthy for brain and physical development. These low weights arise from a complex range of factors related to poverty: limited money to buy food, lack of diversity of diet, ignorance about healthy diet and nutrition, famines, giving more food to working members of the family, ...
However, if the weights of children obey a power law, rather than a normal, distribution one might hope to find some children who have a healthy weight and investigate what factors contribute to that. This leads to the following.
Positive Deviance (PD) is based on the observation that in every community there are certain individuals or groups (the positive deviants), whose uncommon but successful behaviors or strategies enable them to find better solutions to a problem than their peers. These individuals or groups have access to exactly the same resources and face the same challenges and obstacles as their peers. 
The PD approach is a strength-based, problem-solving approach for behavior and social change. The approach enables the community to discover existing solutions to complex problems within the community. 
The PD approach thus differs from traditional "needs based" or problem-solving approaches in that it does not focus primarily on identification of needs and the external inputs necessary to meet those needs or solve problems. A unique process invites the community to identify and optimize existing, sustainable solutions from within the community, which speeds up innovation. 
The PD approach has been used to address issues as diverse as childhood malnutrition, neo-natal mortality, girl trafficking, school drop-out, female genital cutting (FGC), hospital acquired infections (HAI) and HIV/AIDS.

Tuesday, December 20, 2016

More subtleties in protein structure and function

Almost three years ago I posted about the controversy concerning whether the photoactive yellow protein has low-barrier hydrogen bonds [for these the energy barrier for proton transfer is comparable to the zero-point energy]. I highlighted just how difficult it is going to be, both experimentally and theoretically, to definitively resolve the issue, just as for an enzyme I recently discussed.
A key issue concerns how to interpret large proton NMR chemical shifts.

Two recent papers weigh in on the issue

The Low Barrier Hydrogen Bond in the Photoactive Yellow Protein: A Vacuum Artifact Absent in the Crystal and Solution 
Timo Graen, Ludger Inhester, Maike Clemens, Helmut Grubmüller, and Gerrit Groenhof

A Dynamic Equilibrium of Three Hydrogen-Bond Conformers Explains the NMR Spectrum of the Active Site of Photoactive Yellow Protein 
Phillip Johannes Taenzler, Keyarash Sadeghian, and Christian Ochsenfeld

I think the caveats I have offered before need to be kept in mind.
As with understanding the active sites of most proteins, the problem is that we don't have very direct experimental probes, but have to use indirect probes which produce experimental results that require significant modelling and interpretation.


I thank Steve Boxer for bringing one of these papers to my attention.

Sunday, December 18, 2016

A possible Christmas gift for thoughtful non-scientists?

Are you looking for Christmas gifts?

I think that scientists should be writing popular books for the general public. However, I am disappointed by most that I look at. Too many seem to be characterised by hype, self-promotion, over-simplification, or the promotion of a particular narrow philosophical agenda. The books lack balance and nuance. We should not just be explaining scientific knowledge but also giving an accurate picture of what science is, and what it can and can't do.
(Aside: Some of the problems of the genre, particularly its almost quasi-religious agenda, are discussed in a paper by my UQ history colleague, Ian Hesketh.)

There is one book that I do often hear non-scientists enthusiastically talk about:
A Short History of Nearly Everything by the famous travel (!) writer Bill Bryson.
There is a nice illustrated edition.


I welcome comments from people who have read the book or given it to non-scientists.

Thursday, December 15, 2016

A DMFT perspective on bad metals

Today I am giving a talk in the Applied Physics Department at Stanford. My host is Sri Raghu.
Here is the current version of the slides.


Tuesday, December 13, 2016

The challenge of an optimal enzyme

Carbonic anhydrase is a common enzyme that performs many different physiological functions including maintaining acid-base equilibria. It is one of the fastest enzymes known and its rate is actually limited not by the chemical reaction at the active site but by diffusion of the reactants and products to the active site.

Understanding the details of its mechanism presents several challenges, both experimentally and theoretically. A key issue is the number and exact location of the water molecules near the active site. The most recent picture (from a 2010 x-ray crystallography study) is shown below.

The "water wire" is involved in the proton transfer from the zinc cation to the Histidine residue. Of particular note is the short hydrogen bond (2.4 Angstroms) between the OH- group and a neighbouring water molecule.

Such a water network near an active site is similar to what occurs in the green fluorescent protein and KSI.

Reliable knowledge of the finer details of this water network really does matter.

This ties in with theoretical challenges that are related to several issues I have blogged about before. Basic questions concerning proton transport along the wire include:

A. Is the proton transfer sequential or concerted?

B. Is quantum tunnelling involved?

C. What role (if any) does the dynamics of the surrounding protein play?

A 2003 paper by Cui and Karplus considers A., highlighting the sensitivity to the details of the water wire.
Another 2003 paper by Smedarchina, Siebrand, Fernández-Ramos, and Cui looks at both questions through kinetic isotope effects and suggests tunnelling plays a role.

In 2003 it was not even clear how many water molecules were in the wire and so the authors considered different alternatives.

One can only answer these questions definitively if one has extremely accurate potential energy surfaces. This is challenging because:

Barrier heights and quantum nuclear effects vary significantly with small changes (even 0.05 Angstroms) in H-bond donor-acceptor distances.

The potential surface can vary significantly depending on the level of quantum chemistry theory or density functional that is used in calculations.

I thank Srabani Taraphder for introducing me to this enzyme. She has recently investigated question C.

Monday, December 12, 2016

Bouncing soap bubbles

My wife and I are often looking for new science demonstrations to do with children. The latest one she found was "bouncing soap bubbles".



For reasons of convenience [laziness?] we actually bought the kit from Steve Spangler.
It is pretty cool.

A couple of interesting scientific questions are:

Why do the gloves help?

The claim is that the grease on your hands makes bursting the bubbles easier.

Why does glycerin make the soap bubbles stronger?

Why does "ageing" the soap solution for 24 hours lead to stronger bubbles?

Journal of Chemical Education is often a source of good ideas and science discussions. Here are two relevant articles.

Clean Chemistry: Entertaining and Educational Activities with Soap Bubbles 
Kathryn R. Williams

Soap Films and the Joy of Bubbles
Mary E. Saecker

Friday, December 9, 2016

Metric madness: McNamara and the military

Previously, I posted about a historical precedent for managing by metrics: economic planning in Stalinist Russia.

I recently learnt of a capitalist analogue, starting with Ford motor company in the USA.
I found the following account illuminating and loved the (tragic) quotes from Colin Powell about the Vietnam war.
Robert McNamara was the brightest of a group of ten military analysts who worked together in Air Force Statistical Control during World War II and who were hired en masse by Henry Ford II in 1946. They became a strategic planning unit within Ford, initially dubbed the Quiz Kids because of their seemingly endless questions and youth, but eventually renamed the Whiz Kids, thanks in no small part to the efforts of McNamara. 
There were ‘four McNamara steps to changing the thinking of any organisation’: state an objective, work out how to get there, apply costings, and systematically monitor progress against the plan. In the 1960s, appointed by J.F. Kennedy as Secretary of Defense after just a week as Chair of Ford, McNamara created another Strategic Planning Unit in the Department of Defense, also called the Whiz Kids, with a similar ethos of formal analysis. McNamara spelled out his approach to defence strategy: ‘We first determine what our foreign policy is to be, formulate a military strategy to carry out that policy, then build the military forces to conduct that strategy.’ 
Obsessed with the ‘formal and the analytical’ to select and order data, McNamara and his team famously developed a statistical strategy for winning the Vietnam War. ‘In essence, McNamara had taken the management concepts from his experiences at the Ford Motor Company, where he worked in a variety of positions for 15 years, eventually becoming president in 1960, and applied them to his management of the Department of Defense.’ 
But the gap between the ideal and the reality was stark. Colin Powell describes his experience on the ground in Vietnam in his biography: 
Secretary McNamara...made a visit to South Vietnam. Every quantitative measurement, he concluded, after forty-eight hours there, shows that we are winning the war. Measure it and it has meaning. Measure it and it is real. Yet, nothing I had witnessed . . . indicated we were beating the Viet Cong. Beating them? Most of the time we could not even find them. 
McNamara’s slide-rule commandos had devised precise indices to measure the immeasurable. This conspiracy of illusion would reach full flower in the years ahead, as we added to the secure-hamlet nonsense, the search-and-sweep nonsense, the body-count nonsense, all of which we knew was nonsense, even as we did it. 
McNamara then used the same principles to transform the World Bank’s systems and operations. Sonja Amadae, a historian of rational choice theory, suggests that, ‘over time . . . the objective, cost-benefit strategy of policy formation would become the universal status quo in development economics—a position it still holds today.’ Towards the end of his life, McNamara himself started to acknowledge that, ‘Amid all the objective-setting and evaluating, the careful counting and the cost-benefit analysis, stood ordinary human beings [who] behaved unpredictably.’
Ben Ramalingam, Aid on the Edge of Chaos, Oxford University Press, 2013, pp. 45-46.
Aside: I am working on posting a review of the book soon.

Given all this dubious history, why are people trying to manage science by metrics?

Wednesday, December 7, 2016

Pseudo-spin lattice models for hydrogen-bonded ferroelectrics and ice

The challenge of understanding phase transitions and proton ordering in hydrogen-bonded ferroelectrics (such as KDP, squaric acid, croconic acid) and different crystal phases of ice has been a rich source of lattice models for statistical physics.
Models include ice-type models (six-vertex model, Slater's KDP model), transverse field Ising model, and some gauge theories. Some of the classical (quantum) models are exactly soluble in two (one) dimensions.

An important question that seems to be skimmed over is the following: under what assumptions can one actually "derive" these models starting from the actual crystal structure and electronic and vibrational properties of a specific material?

That quantum effects, particularly tunnelling of protons, are important in some of the materials is indicated by the large shifts (of the order of 100 percent) seen in the transition temperatures upon H/D isotope substitution.

In 1963 de Gennes argued that the transverse field Ising model should describe the collective excitations of protons tunnelling between different molecular units in an H-bonded ferroelectric. Some of this is discussed in detail in an extensive review by Blinc and Zeks.
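
For reference, the model de Gennes proposed is the transverse field Ising model; in standard notation (the material-specific values of the parameters are exactly what is at issue below),

$H = -\Omega \sum_i \sigma_i^x - \sum_{ij} J_{ij} \, \sigma_i^z \sigma_j^z$

where $\sigma_i^z = \pm 1$ labels the two positions of the proton in the i-th hydrogen bond, $J_{ij}$ is the effective coupling between bonds, and $\Omega$ is the proton tunnelling matrix element.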
An important issue is whether the phase transition is an "order-disorder" transition or a "displacive" transition. I think what this means is the following. In the former case, the transition is driven by the ordering of the pseudo-spin (proton) variables and there is no soft lattice mode associated with the transition. In the latter, the transition is associated with a soft lattice vibration and a continuous displacement of the ions through the transition.
Perhaps, in different language, is it appropriate to "integrate out" the vibrational degrees of freedom?
[Aside: this reminds me of some issues that I looked at in a Holstein model about 20 years ago].

There are a lot of papers that make quantitative comparisons between experimental properties and the predictions of a transverse field Ising model (usually treated in the mean-field approximation).
One example (which also highlights the role of isotope effects) is

Quantum phase transition in K3D1−xHx(SO4)2 
Y. Moritomo, Y. Tokura, N. Nagaosa, T. Suzuki, and K. Kumagai

One problem I am puzzling over is that the model parameters that they (and others) extract are different from what I would expect from knowing the actual bond lengths and vibrational frequencies in the system and the energetics of different H-bond states. I can only "derive" pseudo-spin models under quite restrictive assumptions.

A recent paper that looks at some of the rich physics associated with collective quantum effects is
Classical and quantum theories of proton disorder in hexagonal water ice 
Owen Benton, Olga Sikora, and Nic Shannon

Monday, December 5, 2016

Hydrogen bonding at Berkeley

On Friday I am giving a talk in the Chemistry Department at Berkeley.
Here is the current version of the slides.

There is some interesting local background history I will briefly mention in the talk. One of the first people to document correlations between different properties (e.g. bond lengths and vibrational frequencies) of diverse classes of H-bond complexes was George Pimentel. 
Many correlations were summarised in a classic book, "The Hydrogen Bond" published in 1960.
He also promoted the idea of a 4-electron, 3-orbital bond, which has similarities to the diabatic state picture I am promoting.
There is even a lecture theatre on campus named after him!


Friday, December 2, 2016

A central result of non-equilibrium statistical physics

Here is a helpful quote from William Bialek. It is a footnote in a nice article, Perspectives on theory at the interface of physics and biology.
The Boltzmann distribution is the maximum entropy distribution consistent with knowing the mean energy, and this sometimes leads to confusion about maximum entropy methods as being equivalent to some sort of equilibrium assumption (which would be obviously wrong). But we can build maximum entropy models that hold many different expectation values fixed, and it is only when we fix the expectation value of the Hamiltonian that we are describing thermal equilibrium. What is useful is that maximum entropy models are equivalent to the Boltzmann distribution for some hypothetical system, and often this is a source of both intuition and calculational tools.
This type of approach features in the statistical mechanics of income distributions.

Examples where Bialek has applied this include voting patterns of the USA Supreme Court, flocking of birds, and antibody diversity.
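
As a sketch of the mathematics behind the quote (the standard maximum entropy formalism, not specific to Bialek's papers): maximising the entropy $S = -\sum_s p(s) \ln p(s)$ over distributions that reproduce a set of measured expectation values $\langle f_\mu \rangle$ gives

$p(s) = \frac{1}{Z} \exp\left(-\sum_\mu \lambda_\mu f_\mu(s)\right)$

where the Lagrange multipliers $\lambda_\mu$ are fixed by the constraints. Choosing a single constraint $f = H$, the energy, recovers the Boltzmann distribution with $\lambda = 1/k_B T$; any other choice still has the Boltzmann form, but for a hypothetical "Hamiltonian" $\sum_\mu \lambda_\mu f_\mu$.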

For a gentler introduction to this profound idea [which I still struggle with] see
*James Sethna's textbook, Entropy, Order Parameters, and Complexity.
* review articles on large deviation theory by Hugo Touchette, such as this and this.
I thank David Limmer for bringing the latter to my attention.

Wednesday, November 30, 2016

Photosynthesis is incoherent

Beginning in 2007 luxury journals published some experimental papers making claims that quantum coherence was essential to photosynthesis. This was followed by a lot of theoretical papers claiming support. I was skeptical about these claims and in the first few years of this blog wrote several posts highlighting problems with the experiments, theory, interpretation, and hype.

Here is a recent paper that repeats one of the first experiments.

Nature does not rely on long-lived electronic quantum coherence for photosynthetic energy transfer Hong-Guang Duan, Valentyn I. Prokhorenko, Richard Cogdell, Khuram Ashraf, Amy L. Stevens, Michael Thorwart, R. J. Dwayne Miller
During the first steps of photosynthesis, the energy of impinging solar photons is transformed into electronic excitation energy of the light-harvesting biomolecular complexes. The subsequent energy transfer to the reaction center is understood in terms of exciton quasiparticles which move on a grid of biomolecular sites on typical time scales less than 100 femtoseconds (fs). Since the early days of quantum mechanics, this energy transfer is described as an incoherent Forster hopping with classical site occupation probabilities, but with quantum mechanically determined rate constants. This orthodox picture has been challenged by ultrafast optical spectroscopy experiments with the Fenna-Matthews-Olson protein in which interference oscillatory signals up to 1.5 picoseconds were reported and interpreted as direct evidence of exceptionally long-lived electronic quantum coherence. Here, we show that the optical 2D photon echo spectra of this complex at ambient temperature in aqueous solution do not provide evidence of any long-lived electronic quantum coherence, but confirm the orthodox view of rapidly decaying electronic quantum coherence on a time scale of 60 fs. Our results give no hint that electronic quantum coherence plays any biofunctional role in real photoactive biomolecular complexes. Since this natural energy transfer complex is rather small and has a structurally well defined protein with the distances between bacteriochlorophylls being comparable to other light-harvesting complexes, we anticipate that this finding is general and directly applies to even larger photoactive biomolecular complexes.
I do not find the 60 fs timescale surprising. In 2008, Joel Gilmore and I published a review of experiment and theory on a wide range of biomolecules (in a warm wet environment) that suggested that tens of femtoseconds is the relevant time scale for decoherence.

I found the following section of the paper (page 7) interesting and troubling.
The results shown in Figs. 3 (a) and (b) prove that any electronic coherence vanishes within a dephasing time window of 60 fs. It is important to emphasize that the dephasing time determined like this is consistent with the dephasing time of τhom = 60 fs independently derived from the experiment (see above). It is important to realize that this cross-check constitutes the simplest and most direct test for the electronic dephasing time in 2D spectra. In fact, the only unique observable in 2D photon echo spectroscopy is the homogeneous lineshape. The use of rephasing processes in echo spectroscopies removes the inhomogeneous broadening and this can be directly inferred by the projection of the spectrum on the antidiagonal that shows the correlation between the excitation and probe fields. This check of self-consistency has not been made earlier and is in complete contradiction to the assertion made in earlier works. Moreover, our direct observation of the homogeneous line width is in agreement with independent FMO data of Ref. 12. This study finds an ∼ 100 cm−1 homogeneous line width estimated from the low-temperature data taken at 77 K, which corresponds to an electronic coherence time of ∼ 110 fs, in line with our result given the difference in temperature. In fact, if any long lived electronic coherences were operating on the 1 ps timescale as claimed previously (1), the antidiagonal line width would have to be on the order of 10 cm−1, and would appear as an extremely sharp ridge in the 2D inhomogeneously broadened spectrum (see Supplementary Materials). The lack of this feature conspicuously points to the misassignment of the long lived features to long lived electronic coherences where as now established in the present work is due to weak vibrational coherences. The frequencies of these oscillations, their lifetimes, and amplitudes all match those expected for molecular modes (41, 42) and not long-lived electronic coherences.

Monday, November 28, 2016

Polanyi and Emergence before "More is Different"

The common narrative in physics is that the limitations of reductionism, the importance of emergence, and the stratification of scientific fields and concepts were first highlighted in 1972 by P.W. Anderson in a classic article, "More is Different", published in Science. Anderson nicely used broken symmetry as an example of an organising principle that occurs at one stratum and as a result of the thermodynamic limit.

The article was based on lectures Anderson gave in 1967.
The article actually does not seem to contain the word "emergence". He talks about new properties "arising".

I recently learned how similar ideas about emergence and the stratification of fields were enunciated earlier by Michael Polanyi in The Tacit Dimension, published in 1966 and based on his 1962 Terry Lectures at Yale.
The book contains a chapter entitled "Emergence".

Here is a quote:
you cannot derive a vocabulary from phonetics; you cannot derive the grammar of language from its vocabulary; a correct use of grammar does not account for good style; and a good style does not provide the content of a piece of prose. ... it is impossible to represent the organizing principles of a higher level by the laws governing its isolated particulars.
Much of the chapter focuses on biology and the inadequacy of genetic reductionism. These ideas were expanded in a paper, "Life's irreducible structure," published in Science in 1968.

I recently learned about Polanyi's contribution from
The concept of emergence in social sciences: its history and importance 
G.M. Hodgson

Here is a bit of random background.

Before turning to philosophy, Polanyi worked very successfully in Physical Chemistry. Some readers will know him for his contributions to reaction rate theory, the transition state, a diabatic state description of proton transfer, the LEPS potential energy surface based on valence bond theory, ...

Polanyi was the Ph.D. advisor of Eugene Wigner. Melvin Calvin, a postdoc with Polanyi, and Polanyi's son, John Polanyi, went on to win Nobel Prizes in Chemistry.

Google Scholar lists "The Tacit Dimension" with almost 25,000 citations.
The book was recently republished with a new foreword by Amartya Sen, Nobel Laureate in Economics.

Friday, November 25, 2016

Should you quit social media?

The New York Times has an interesting Op-ed. piece Quit Social Media. Your Career May Depend on It, by Cal Newport, a faculty member in computer science at Georgetown University.

When I saw the headline I thought the point was going to be an important one that has been made many times before: people sometimes post stupid stuff on social media and get fired as a result. Don't do it!
However, that is not his point.
Rather, he says social media is bad for two reasons:

1. It is a distraction that prevents deep thinking and sustained  "deep" work. Because you are constantly looking at your phone, tablet, or laptop or posting on it, you don't have the long periods of "quiet" time that are needed for substantial achievement.

2. Real substantial contributions will get noticed and recognised without you constantly "tweeting" or posting about what you are doing or have done. Cut back on the self-promotion.

Overall, I agree.

When I discussed this and my post about 13 hour days with two young scientists at an elite institution they said: "you really have no idea how much time some people are wasting on social media while in the lab." Ph.D students and postdocs may be physically present but not necessarily mentally or meaningfully engaged.

A similar argument for restraint, but with different motivations, is being advocated by Sherry Turkle, a psychologist at MIT. Here is a recent interview.

I welcome discussion.

Thursday, November 24, 2016

The many scales of emergence in the Haldane spin chain

The spin-1 antiferromagnetic Heisenberg chain provides a nice example of emergence in a quantum many-body system. Specifically, there are three distinct phenomena that emerge that were difficult to anticipate: the energy gap conjectured by Haldane, topological order, and the edge excitations with spin-1/2.
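
For concreteness, the Hamiltonian is (in standard notation)

$H = J \sum_i \mathbf{S}_i \cdot \mathbf{S}_{i+1}, \qquad J > 0,$

where the $\mathbf{S}_i$ are spin-1 operators. Numerical studies give a Haldane gap $\Delta \simeq 0.41 J$ to the lowest (triplet) excitation.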

An interesting question is whether anyone could have ever predicted these from just knowing the atomic and crystal structure of a specific material. I suspect Laughlin and Pines would say no.

To understand the emergent properties one needs to derive effective Hamiltonians at several different length and energy scales. I have tried to capture this in the diagram below. In the vertical direction, the length scales get longer and the energy scales get smaller.


It is interesting that one can get the Haldane gap from the non-linear sigma model. However, it coarse grains too much and won't give the topological order or the edge excitations.

It seems to me that the profundity of the emergence that occurs at the different strata (length scales) is different. At the lower levels, the emergence is perhaps more "straightforward" and less surprising or less singular (in the sense of Berry).

Aside. I spent too much time making this figure in PowerPoint. Any suggestions on a quick and easy way to make such figures?

Any comments on the diagram would be appreciated.

Wednesday, November 23, 2016

How I got a Wikipedia page

It has dubious origins.

Some people are very impressed that I have a Wikipedia page.
I find this a bit embarrassing because there are many scientists, more distinguished than I, who do not have pages.
When people tell me how impressed they are I tell them the story.

Almost ten years ago some enthusiasts of "quantum biology" invited me to contribute a chapter to a book on the subject. The chapter I wrote, together with two students, was different from most of the other chapters because we focussed on realistic models and estimates for quantum decoherence in biomolecules. (Some of the material is here.) This leads one to be very skeptical about the whole notion that quantum coherence can play a significant role in biomolecular function, let alone biological processes. Most other authors are true believers.

I believe that to promote the book one of the editors had one of his Ph.D. students [who appeared to also do some of the grunt work of the book editing] create a Wikipedia page for the book and for all of the senior authors. These pages emphasised the contribution to the book and the connection to quantum biology.

The "history" of my page states it was created by an account that
An editor has expressed a concern that this account may be a sock puppet of Bunzil (talk · contribs · logs).
I have since edited my page to remove links and references to the book since it is not something I want to be defined by.

An aside. Today I updated the page because when giving talks I got tired of sometimes being introduced based on outdated information on the page.

Hardly a distinguished history....


The xkcd cartoon is from here.

Monday, November 21, 2016

The "twin" excited electronic state in strong hydrogen bonds

One of the key predictions of the diabatic state picture of hydrogen bonding is that there should be an excited electronic state (a twin state) which is the "anti-bonding" combination of the two diabatic states associated with the ground state H-bond.
Recently, I posted about a possible identification of this state in malonaldehyde.

The following recent paper is relevant.

Symmetry breaking in the axial symmetrical configurations of enolic propanedial, propanedithial, and propanediselenal: pseudo Jahn–Teller effect versus the resonance-assisted hydrogen bond theory
Elahe Jalali, Davood Nori-Shargh

The key figure is below. The lowest B2 state is the twin state.
In the diabatic state picture, Delta is half of the off-diagonal matrix element that couples the two diabatic states.
Similar diagrams occur when O is replaced with S or Se.
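
To spell out the diabatic state picture in the simplest case (generic two-state notation, not that of the paper): for a symmetric H-bond the two degenerate diabatic states, each with energy $E_0$, are coupled by an off-diagonal matrix element $V$,

$H = \begin{pmatrix} E_0 & V \\ V & E_0 \end{pmatrix},$

so the eigenstates are the "bonding" and "anti-bonding" combinations with energies $E_0 \mp |V|$. The anti-bonding state, lying $2|V|$ above the ground state, is the twin state; in the notation above, $\Delta = |V|/2$.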



The paper does not discuss twin states, but interprets everything in terms of the framework of the (A1 + B2) ⊗ b2 pseudo-Jahn-Teller effect.

Two minor issues might be raised about this work.
It uses TD-DFT (Time-dependent Density Functional Theory). It is contentious how reliable that is for excited states in organic molecules.
The diabatic states are not explicitly constructed.
These issues could be addressed by using higher level quantum chemistry and constructing the diabatic states by a systematic procedure, as was done by Seth Olsen for a family of methine dye molecules.

A video illustrating the length scales of the universe

Sometimes when I speak about science to church groups I show the old (1977) video Powers of Ten which nicely illustrates the immense scale of the universe and orders of magnitude.
I often wished there was a more polished, modern version.
Yesterday it was pointed out to me there is, Cosmic Eye.



The phone app can be purchased here for $1.

Friday, November 18, 2016

Desperately seeking Weyl semi-metals

In 2011 it was proposed that pyrochlore iridates (such as Y2Ir2O7) could exhibit the properties of a Weyl semi-metal, the three-dimensional analog of the Dirac cone found in graphene.
Since the sociology of condensed matter research is driven by exotica, this paper stimulated numerous theoretical and experimental studies.
However, as is often the case, things turn out to be more complicated and it now seems unlikely that these materials exhibit a Weyl semi-metal phase.

This past week I have read several nice papers that address the issue.

Variation of optical conductivity spectra in the course of bandwidth-controlled metal-insulator transitions in pyrochlore iridates
K. Ueda, J. Fujioka, and Y. Tokura

There is a very nice phase diagram which shows systematic trends as a function of the ionic radius of the rare earth element R=Y, Dy, Gd, ...
Most of the materials are antiferromagnetic insulators.


The colour shading describes the low energy spectral weight in the optical conductivity up to 0.3 eV.
Blue is an insulator and red actually means a very small low energy spectral weight.
N can be thought of as the number of charge carriers per unit cell. Specifically, if this were a simple weakly interacting Fermi liquid, N=1. Thus, the value of 0.05 for Pr signifies strong electron correlations. [Unfortunately, the paper talks about this as "weak correlations".]
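
Presumably N is defined through the partial f-sum rule for the optical conductivity (a standard definition, assumed here rather than taken from the paper):

$N_{\rm eff}(\omega_c) \propto \int_0^{\omega_c} \sigma(\omega) \, d\omega,$

with the cutoff $\omega_c = 0.3$ eV and a normalisation such that a non-interacting band would give $N_{\rm eff} = 1$.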

In fact, as shown below even in the metallic phase at T=50 K one cannot see the Drude peak down to 10 meV.
This presents a theoretical challenge to explain this massive redistribution of spectral weight.


Slater to Mott Crossover in the Metal to Insulator Transition of Nd2Ir2O7
M. Nakayama, Takeshi Kondo, Z. Tian, J. J. Ishikawa, M. Halim, C. Bareille, W. Malaeb, K. Kuroda, T. Tomita, S. Ideta, K. Tanaka, M. Matsunami, S. Kimura, N. Inami, K. Ono, H. Kumigashira, L. Balents, S. Nakatsuji, and S. Shin

This ARPES study does find band touching at the magnetic metal-insulator transition temperature but as the temperature is lowered the spectral weight is suppressed and there is no sign of Weyl points.

Phase Diagram of Pyrochlore Iridates: All-in–All-out Magnetic Ordering and Non-Fermi-Liquid Properties 
H Shinaoka, S Hoshino, M Troyer, P Werner

This LDA+DMFT study shows that a three-band description is important for the R=Y compound.
This sets the stage for describing the phase diagram above.

I thank Prachi Telang for discussions at IISER Pune about these materials and bad semi-metals that stimulated this post.

Wednesday, November 16, 2016

Many reasons why you should NOT work 13 hours per day

I am very disturbed by how often I encounter people, particularly young people, who work ridiculously long hours. Furthermore, it worries me that some are deluded about what they might achieve by doing this. Due to a variety of cultural pressures, I think Ph.D. students from the Majority World are particularly prone to this.

First let's not debate exactly how many hours is too many or exceptions to the generalisations below. At the end I will give some caveats.

Here are some reasons why very long hours are not a good idea.

Something may snap.
And, when it does it will be very costly.
It may be your mental or physical health, or your spouse, or your children, ...
Don't think it won't happen. It does.

Long hours may be making you quite inefficient and unproductive.
You become tired and can't think as clearly, and so you make more mistakes, have fewer ideas, and find it harder to prioritise.

It is a myth that long hours are mostly what you need to survive or prosper in science.
I claim dumb luck is the biggest determining factor in getting a faculty position. Furthermore, when I look at people [students, postdocs, faculty] I don't observe a lot of correlation between real productivity and the hours they work.
There are other things that are much more important than long hours. Some of these I have covered in posts about basic but important skills. Others include knowing the big picture, giving good talks, ...
These are necessary but not sufficient conditions for survival.
Yet many who are "lab slaves" seem oblivious to this. They may have unrealistic expectations about what the long hours will lead to. Some even think long hours are a sufficient condition for survival.

You may be wasting a lot of time.
Because you can't think clearly and/or just do whatever your boss or manager tells you to, you may spend a lot of time on tasks that have almost no chance of succeeding: poorly formulated experiments or calculations, applying for grants or jobs out of your league, submitting papers to luxury journals, ...
There are also all those papers that you or your boss did not finish. You worked long hours in the lab to get the data and then the paper was never brought to completion because you and/or your boss had moved on to the next crisis/opportunity/hot topic.

It may rob you of your joy of doing science.

It may be an addiction. 
Workaholism is as dangerous and as costly as alcoholism, drug and sexual addictions. The only difference is that workaholism is often seen as a virtue.

You DO have a choice.
One of the great lies of life in the affluent modern West is that people do not have many choices. This is exactly what employers and governments want us to believe. A problem is that people make choices [e.g. I have to get a permanent job in a research university, I have to have a big house, I have to send my kids to a private school, ...] that then severely constrain other choices.

You may be being exploited.
Universities and many PIs love cheap and compliant labour, whether it is grad students, "adjunct faculty" [teaching staff on short term contracts], or "visiting scholars" from the Majority World.

A few years from now you may regret it.
You may have left academia and realise you could have got your current job without working 3 extra hours a day. Why did you do it? Your spouse [if they are still around] sure wishes you hadn't.

How many hours is too many?
I don't know.
There is significant variability in people's stamina and makeup.
There are also differences in personal circumstances [e.g. a single person versus someone with two young children at home].
Different tasks in science [analytical calculations, writing, discussing, device fabrication, computer coding, babysitting experiments, ...] differ significantly in how taxing they are intellectually, physically, or emotionally. Also, there may be certain deadlines or tasks that require long hours for a short period of time [a visit to a synchrotron, monitoring a chemical reaction that takes 18 hours, the last week of finishing a thesis, ...] .
This is not what I am talking about.
I am talking about an unhealthy lifestyle that does not deliver what it claims to.

How do you get out of this?
First take a break so you can see more clearly the problem.
Set some boundaries. Just say NO!
Talk to others about the issue.
Aim to work smarter not longer.

I welcome comments.

Monday, November 14, 2016

Why are the macroscopic and microscopic related?

Through a nice blog post by Anshul Kogar,
I became aware of a beautiful Physics Today Reference Frame (just 2 pages!) from 1998 by Frank Wilczek
Why are there Analogies between Condensed Matter and Particle Theory?

It is worth reading in full and slowly. But here are a few of the profound ideas that I found new and stimulating.

A central result of Newton's Principia was
"to prove the theorem that the gravitational force exerted by a spherically symmetric body is the same as that due to an ideal point of equal total mass at the body's center. This theorem provides quite a rigorous and precise example of how macroscopic bodies can be replaced by microscopic ones, without altering the consequent behavior. " 
More generally, we find that nowhere in the equations of classical mechanics [or electromagnetism] is there any quantity that fixes a definite scale of distance.
Only with quantum mechanics do fundamental length scales appear: the Planck length, Compton wavelength, and Bohr radius.

Planck's treatment of blackbody radiation [macroscopic phenomena] linked it to microscopic energy levels.

Einstein then made a similar link between the specific heat of a crystal and the existence of phonons: the first example of a quasi-particle.

Aside: I need to think of how these two examples do or do not fit into the arguments and examples I give in my emergent quantum matter talk.

Wilczek says
it is certainly not logically necessary for there to be any deep resemblance between the laws of a macroworld and those of the microworld that produces it  
an important clue is that the laws must be "upwardly heritable"
[This is Wilczek's own phrase, which does not seem to have been picked up by anyone later, including himself.]
the most basic conceptual principles governing physics as we know it - the principle of locality and the principle of symmetry  .... - are upwardly inheritable.
He then adds the "quasi material nature of apparently empty space."

Overall, I think my take might be a little different. I think the reason for the analogies in the title is that there are certain organising principles for emergence [renormalisation, quasi-particles, effective Hamiltonians, spontaneous symmetry breaking] that transcend energy and length scales. The latter are just parameters in the theory. Depending on the system, they can vary over twenty orders of magnitude (e.g., from cold atoms to quark-gluon plasmas).

But, perhaps Wilczek would say that once you have symmetry and locality you get quantum field theory and the rest follows....

What do you think?

Friday, November 11, 2016

Telling students my personal teaching goals and philosophy

It is strange that I have never done this. Furthermore, I don't know anyone who does.
Why do this?
First, it is helpful for me to think about and decide what my goals actually are, particularly relating to the big picture.
Second, it will be helpful for students to know. Too often they are guessing. Even worse, I fear that most just assume that my goals are theirs. Then they get frustrated if/when they discover their goals and/or values are different.

So here are some goals I could think of. They are listed in order of decreasing importance to me.

To help you learn to THINK.

To inspire you to learn.

To help you see this is a beautiful subject.

To help you learn skills that are useful in other endeavors (including outside physics).

To help you put this subject in the context of others.

To help you learn the technical details of the subject.

To be your ally, not your adversary.

My goals are NOT the following.
(Listed in no particular order).

To make you happy.

To spoon feed you.

To make life difficult for you.

To get high scores on your evaluations of my teaching.

To recruit you as a Ph.D. student to work with me.

Have you ever done anything like this?
Have you ever been in a class where it was done?
Do you know anyone who does it?
Are there benefits?

Thursday, November 10, 2016

Irreversibility is an emergent property

Time has a direction. Macroscopic processes are irreversible. Mixing is a simple example. The second law of thermodynamics encodes this universal property of nature.
Yet the microscopic laws of nature [Newton's equations or Schrodinger's equation] are time reversal invariant. There is no arrow of time in these equations. So, where does macroscopic irreversibility come from?


It is helpful to think of irreversibility [broken time-reversal symmetry] as an emergent property. It only exists in the thermodynamic limit. Strictly speaking for a finite number of particles there is a "recurrence time" [whereby the system can return to close to its initial state]. However, for even as few as a thousand particles this becomes much longer than any experimental time scale.
There is a nice analogy to spontaneously broken symmetry in phase transitions.  Strictly speaking for a finite number of particles there is no broken symmetry as the system can tunnel backwards and forwards between different states. However, in reality for even a small macroscopic system the time scale for this is ridiculously long.
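
A rough scaling argument (a standard heuristic, not a rigorous bound): the recurrence time grows exponentially with the number of particles N,

$\tau_{\rm rec} \sim \tau_0 \, e^{c N},$

where $\tau_0$ is a microscopic time (say a collision time of order $10^{-12}$ s) and c is a constant of order one. For N = 1000 this is of order $e^{1000} \sim 10^{434}$ microscopic times, absurdly longer than the age of the universe (about $10^{17}$ s).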

Deriving irreversibility from microscopic equations is a major theoretical challenge. The first substantial contribution was that of Boltzmann's H-theorem. There are many subtleties associated with why it is not the final answer, but my understanding is superficial...




This post was stimulated by some questions from students when I recently visited Vidyasagar University.

Monday, November 7, 2016

A concrete example of a quantum critical metal

I welcome comments on this preprint.

Quantum critical local spin dynamics near the Mott metal-insulator transition in infinite dimensions Nagamalleswararao Dasari, N. S. Vidhyadhiraja, Mark Jarrell, and Ross H. McKenzie
Finding microscopic models for metallic states that exhibit quantum critical properties such as $\omega/T$ scaling is a major theoretical challenge. We calculate the local dynamical spin susceptibility $\chi(T,\omega)$ for a Hubbard model at half filling using Dynamical Mean-Field Theory, which is exact in infinite dimensions. Qualitatively distinct behavior is found in the different regions of the phase diagram: Mott insulator, Fermi liquid metal, bad metal, and a quantum critical region above the finite temperature critical point. The signature of the latter is $\omega/T$ scaling where $T$ is the temperature. Our results are consistent with previous results showing scaling of the dc electrical conductivity and are relevant to experiments on organic charge transfer salts.
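For orientation, $\omega/T$ scaling means that data taken at different temperatures collapse onto a single curve when plotted against $\omega/T$, i.e. a form such as

$\chi''(\omega, T) = T^{-\alpha} F(\omega/T)$

for some exponent $\alpha$ and scaling function $F$. This generic form is written here for illustration; the specific form found is in the preprint.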
Here is the omega/T scaling, which I think is quite impressive.
We welcome comments.

Saturday, November 5, 2016

The role of simple models and concepts in computational materials science

Today I am giving the first talk in a session on Computational materials science at the 4th International Conference on Advances in Materials and Materials Processing.

Here are the slides for my talk "The role of simple models and concepts in computational materials science".

I will be referring the audience to articles such as those mentioned here, here, and here that give a critical assessment of computer simulations and stress the importance of concepts.

I welcome comments, particularly as I think the talk could be stronger and clearer.

Thursday, November 3, 2016

Visit to a state university in India

Like everything in India, higher education is incredibly diverse in quality, resources, and culture. These statistics give some of the flavour. There are about 800 universities. A significant distinction is between state and central universities. The former are funded and controlled by state governments. The latter (and IITs, IISERs, IISc, TIFR...) are funded and controlled by the central (i.e. national/federal) government. Broadly, the quality, resources, and autonomy (i.e. freedom from political interference) of the latter are much greater. On my many trips to India I have only visited these centrally funded institutes and universities.

This afternoon I am looking forward to visiting the Physics Department of Vidyasagar University. It is funded by the West Bengal state government and was started in 1981. It is named in honour of Ishwar Chandra Vidyasagar, a significant social reformer of the 19th century.

I am giving my talk on "Emergent Quantum Matter".
Here are the slides.

Update. I enjoyed my visit and interacting with the faculty and students. On the positive side, people were enthusiastic and there were some excellent questions from the students. I want to write a blog post about one question. On the negative side, it is sad to see how poorly places like this are resourced: whether infrastructure, lab equipment, lab supplies, library, faculty, or salaries. For example, there are 5 physics faculty members and they teach a full M.Sc. [2 years of course work] to about 100 students. This is 2 courses per faculty member per semester and, obviously, their expertise is stretched to cover all the courses. The Ph.D. students mostly have full-time jobs elsewhere and come in the afternoons and evenings to work on their projects. One travels 2 hours each way on public transport.

Wednesday, November 2, 2016

Hydrogen bonding talk at IIT-Kgp

Today I am giving a seminar, "Effect of quantum nuclear motion on hydrogen bonding" in the Chemistry Department at IIT Kharagpur. My host is Srabani Taraphder.

Here are the slides. The talk is mostly based on this paper.


Tuesday, November 1, 2016

Organic spin liquid talk at IIT-Kgp

Today I am giving a seminar in the Physics Department at Indian Institute of Technology (IIT) Kharagpur,
"Frustrated organic Mott insulators: from quantum spin liquids to superconductors."
Slides are here.

Due to the recent Nobel Prize to Haldane, I included one slide about quantum spin liquids in one dimension.

The talk material is covered in great detail in a review article, written together with Ben Powell.


Monday, October 31, 2016

H-bond correlations and NMR chemical shifts

For a diverse range of chemical compounds, the strength of hydrogen bonds [parametrised by the binding energy and/or bond length] is correlated with a wide range of physical properties such as bond lengths, vibrational frequencies and intensities, and isotope effects. I have posted about many of these and a summary of the main ones is in this paper.

One correlation which is particularly important for practical reasons is the correlation of bond strength (and length) with the chemical shift associated with proton NMR.

The chemical shift is the difference between the NMR resonant frequency of the proton in a specific molecule and that of a reference (in principle a bare proton; in practice a standard compound such as TMS). The first important point is that although this shift is extremely small (typically one part in 100,000!) one can measure it extremely accurately.
More importantly, this shift is quite sensitive to the local chemical bonding and so one can use it to actually identify the bonding in unknown molecules (e.g. protein structure determination).
Indeed, if you go to the library and open up a book on NMR or organic chemistry you will find tables and figures giving the chemical shifts associated with different functional groups.
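
For reference, the shift is conventionally quoted in parts per million (ppm) relative to the reference frequency:

$\delta = 10^6 \times \frac{\nu_{\rm sample} - \nu_{\rm ref}}{\nu_{\rm ref}}$

so a shift of 10 ppm is a fractional frequency change of one part in 100,000.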

The figure below (taken from here) shows proton chemical shifts for some different molecules.


Why does this happen?
Very roughly, the chemical shift is largely determined by the local electronic charge density near the proton, and this is modified by the local chemical bonding.

What about hydrogen bonds?
The figure below shows the correlation between the chemical shift and the donor-acceptor distance R for a range of molecules, as found in a 1980 paper.
Confusing aside: the figure shows the chemical shift relative to the standard TMS and so involves negative values.

Why does this matter?
The correlation provides a means to accurately "measure" the donor-acceptor bond length when one does not have direct measurements (e.g. by X-ray diffraction). This is particularly useful in proteins. Indeed, some of the first claims in 1994 on the controversial topic of low-barrier H-bonds (i.e. strong H-bonds) in enzymes were largely based on the observation of unusually large chemical shifts. Some of the subtle issues are discussed here. Another signature is the isotopic fractionation factor.

Although one can calculate these chemical shifts with "black box" computational chemistry, using formulae originally derived by Ramsey in 1950, the underlying physics of the correlations is not well understood.

I thank my student Anna Symes for helpful discussions about this topic.

Thursday, October 27, 2016

Emergent quantum matter and topology

Today I am giving a talk at IISER Kolkata. My host Chiranjib Mitra requested that I include some discussion of this year's Nobel Prize in Physics. This was very helpful as I think the talk now flows better and there are more illustrations of my main points. But, there is less time to talk about my own work...
Here is the current version of the slides. I welcome comments.


Tuesday, October 25, 2016

A nice demonstration of classical chiral symmetry breaking

I like concrete classroom demonstrations.

Andrew Boothroyd recently showed me a very elegant demonstration based on this paper
Spontaneous Chirality in Simple Systems
Galen T. Pickett, Mark Gross, and Hiroko Okuyama

It considers hard spheres confined to a cylinder. Different phases occur depending on the value of D, the ratio of the diameters of the cylinder and the spheres. The phase diagram is below.


Andrew has a nice demonstration using ping pong balls and a special transparent plastic cylinder that has the right diameter to produce a chiral phase. He shows it during colloquia and sometimes even gets applause!

I found a .ppt that has the nice pictures below.

Wednesday, October 19, 2016

How to give a bad science talk

Amongst his one page guides John Wilkins [my postdoc supervisor] has
Guidelines for giving a truly terrible talk
"Strict adherence to the following time-􏰀tested guidelines will ensure that both you and your work remain obscure and will guarantee an audience of minimum size at your next talk􏰁."
Independently, David Sholl has illustrated the problems in concrete ways with an actual talk "The Secrets of Memorably Bad Presentations"

 

I am not sure if it is funny or just plain painful. But it does drive home the points.
All students (and some faculty) should be forced to watch it in full.

Friday, October 14, 2016

Recommendations needed on software to correct English grammar

A necessary ingredient to surviving and possibly prospering in science is the ability to write clearly in English. Yet many students are not native English speakers and some have had poor education and training. For some, it is even difficult to write basic sentences without grammar and spelling mistakes.

This is a serious issue for both students and advisors.
Unfortunately, what happens too often is that advisors spend too much time correcting the English in drafts of papers and thesis chapters rather than focusing on the scientific content.
Even, worse lazy or over-committed advisors don't do the corrections and referees, examiners, or editors are left with the problem.

Advisors, co-authors, and examiners can get quite irritated in the process.
Students need to realise they are really hurting themselves in not addressing this issue.

Is there a solution?
I try to encourage students and postdocs to pair up and read each other's drafts. However, this is not really quid pro quo (i.e. a fair exchange) if one is a much stronger English writer than the other.

A colleague recently told me about a solution he found worked very well. His institution bought the Grammarly software for a graduate student, who was excellent in science but poor in English.  Before giving any document to the advisor the student had to run it through the software. This not only finds spelling and grammatical errors but suggests alternatives and gives the reasons for the error.
Thus, it not only corrects the errors but trains the students. It does work. The student can now write better even without the software.

There is a free version of the software that has limited capability. The premium version costs US$140 per year. You can buy just 3 months for US$60.

I downloaded the free version to test it. It found a few minor mistakes in this blog post! I also tested it on a student essay and a weekly report. It found some errors, missed a few, and pointed out that there were several errors that could be corrected with the premium version (very clever marketing!).

Does anyone else have experience with this software, either themselves or getting their students or postdocs to use it?

Unless they are at a poor institution in the Majority World, I think faculty need to bite the bullet and tell students they need to buy the software.
If students are serious about getting a Ph.D with less stress and keeping their advisor on good terms they should buy it.
In the grander scheme of things this is a small amount of money.

I welcome recommendations on alternatives.

On the lighter side, do a Google image search on "funny English signs".

Wednesday, October 12, 2016

A quantum dimension to the Kosterlitz-Thouless transition

In my previous post about the 2016 Nobel Prize in Physics I stated that the Kosterlitz-Thouless transition was a classical phase transition (involving topological objects = vortices), in contrast to the quantum phase transitions associated with topological phases of matter.

However, on reflection I realised that it should not be overlooked that there is something distinctly quantum about the KT transition. In a two-dimensional superfluid it involves the binding of pairs of vortices and anti-vortices. These each have a quantum of circulation (+/-h/m where h is Planck's constant and m is the particle mass).

At the KT transition temperature Tc there is a finite jump in the superfluid density rho_s. The value just below Tc is related to Tc by the universal relation

$\rho_s(T_c^-) = \frac{2 m^2 k_B T_c}{\pi \hbar^2}$

Note that Planck's constant appears in this equation.
In a classical world (h=0), Tc would be zero and there would be no KT transition!

This universal relation was derived by Nelson and Kosterlitz in 1977.

The figure below contains a range of experimental data testing this relation.

Friday, October 7, 2016

Faculty job candidates need to know and articulate the big picture

Are there any necessary or sufficient conditions for getting a faculty position?
Previously, I suggested that a key element is actually dumb luck: being in the right place at the right time. But that is not my focus here.

Twenty years ago, when I was struggling to find a faculty job, the mythology was that you had to have at least two PRLs and get an invited talk at an APS March Meeting. And doing a postdoc at certain places (e.g. ITP Santa Barbara) would help...

Now the mythology seems to be that you need to have Nature and Science papers....

But this is actually not the case.
It is not a sufficient condition.
Search committees want to hire someone who can lead an independent research program and can move into new areas.

Several department chairs have said things to me along the lines of "It is amazing how we interview some candidates who have impressive publication lists involving papers in luxury journals but when we actually talk to them we quickly lose interest. We find they lack any sort of big picture. Some cannot even articulate why their own papers are scientifically important, let alone future directions. It seems they have been a student or postdoc in some big group and they have developed some narrow (but important) technical expertise (e.g. device fabrication, running computational chemistry codes, using an STM, ...) that is indispensable to the group."

I find this quite sad. It is sad for the individuals. All the hard work in the hope of getting a faculty position will have gone to waste. I also find it sad when faculty don't prioritise training group members.

How can you stop this being you?

Read papers, including outside your actual research project.
Talk to people in different research groups about what they and you are doing.
Go to seminars, even when you are busy.
But, above all, write papers yourself.
If you are the first author you really should write the first draft, including the introduction, yourself. Don't let your boss (or someone more experienced) do it, and don't expect them to.
Your draft may be poor and get heavily edited or even discarded completely. But you will learn from the process, and with time confidence and competence will follow.

Wednesday, October 5, 2016

2016 Nobel Prize in Physics: Topology matters in condensed matter

I was delighted to see this year's Nobel Prize in Physics awarded to Thouless, Haldane, and Kosterlitz 
“for theoretical discoveries of topological phase transitions and topological phases of matter”.

A few years ago I predicted Thouless and Haldane, but was not sure they would ever get it. I am particularly glad they were not bypassed, but rather pushed forward, by topological insulators.

There is a very nice review of the scientific history on the Nobel site.

Here are a few random observations, roughly in order of decreasing importance.

First, it is important to appreciate that there are two distinct scientific discoveries here. They do both involve Thouless and topology, but they really are distinct, and so Thouless’ contribution to both is all the more impressive.
The “topological phase transition” concerns the Kosterlitz-Thouless transition, a classical phase transition (i.e. driven by thermal fluctuations) that is driven by vortices (topological objects, which can also be viewed as non-linear excitations).
The KT transition and the low temperature phase are remarkably different from other phase transitions and phases of matter. It is a truly continuous transition in that all the derivatives of the free energy are continuous, and a Taylor expansion about the critical temperature fails to describe the transition (as spelled out in the sketch at the end of this paragraph).
Yet the superfluid density undergoes a jump at the KT transition temperature.
The low temperature phase has power law correlations with an exponent which is not only irrational but non-universal (i.e. it depends on the coupling constant and temperature).
There are deep connections to quantum phase transitions in one-dimensional systems, e.g. in a spin-1/2 XXZ spin chain, but that is another story.
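
To spell out what “truly continuous” means, here is a sketch of the standard textbook forms (my own summary, not taken from the original post). Near the transition the correlation length diverges with an essential singularity, and the singular part of the free energy density scales with it:

\[
\xi(T) \sim \exp\!\left(\frac{b}{\sqrt{T-T_c}}\right), \qquad
f_{\rm sing}(T) \sim \xi(T)^{-2} \sim \exp\!\left(-\frac{2b}{\sqrt{T-T_c}}\right),
\]

where b is a non-universal constant. Every temperature derivative of f_sing vanishes as T approaches Tc from above, so the free energy is infinitely differentiable at Tc, yet no Taylor expansion captures the transition. Below Tc the correlations decay as a power law, G(r) ~ r^{-eta(T)}, with eta depending on the temperature and coupling constant and taking the universal value 1/4 at Tc.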

Topological states of matter are strictly quantum.
Having done the KT transition, there is no obvious reason why Thouless would have been led to the formulation of the quantum Hall effect in terms of topological invariants.
That is really an independent discovery. Furthermore, the topology and maths are much more abstract because they are not in real space but involve fibre bundles, Chern numbers, and Berry connections.


All of these phenomena are striking examples of emergence in physics: surprising new phenomena, entities, and concepts.
But, here there is a profound issue about theory preceding experiment.
Almost always emergent phenomena are discovered experimentally and later theory scrambles to explain what is going on.
But, here it seems to be different. KT was predicted and then observed.
The Haldane phase was predicted and then observed in real materials.
When I give my emergent quantum matter talk, I sometimes say: “I can’t think of an example of where a new quantum state of matter was predicted and then observed. Sometimes people give the example of BEC in ultracold atomic gases and of topological insulators but they are essentially non-interacting systems."

On the other hand, it is important to acknowledge that all of this was done with effective Hamiltonians (XY models and Heisenberg spin chains). No one started with a specific material (chemical composition) and then predicted what quantum state it would have without any input from experiment.
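
For concreteness, the effective Hamiltonians in question have the standard forms (written here for reference, in my own notation):

\[
H_{XY} = -J \sum_{\langle i j \rangle} \cos(\theta_i - \theta_j), \qquad
H_{\rm chain} = J \sum_{i} \mathbf{S}_i \cdot \mathbf{S}_{i+1} \quad (S = 1),
\]

the two-dimensional classical XY model for the KT transition, and the spin-1 antiferromagnetic Heisenberg chain for the Haldane phase. Neither contains any information about a specific material beyond the single coupling constant J.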

The background article helped me better appreciate the unique contributions of Kosterlitz. I was in error not to suggest him before. By himself he worked out the renormalisation group (RG) equations for the transition. Also with Nelson he predicted the universal jump in the superfluid density.
As an aside, it is fascinating that the same RG equations appear in the anisotropic Kondo model and were discovered earlier by Phil Anderson, even before Wilson developed the RG.

The background article also notes how it took a while for Haldane’s 1983 conjecture (that integer spin chains have an energy gap to the lowest excited triplet state) to be accepted, and suggests that experiment decided the issue. It should be pointed out that, on the theory side, the numerics were not clear (see e.g. this 1989 review by Ian Affleck) until Steve White developed the DMRG (Density Matrix Renormalisation Group) for one-dimensional quantum many-body systems and laid the matter to rest in 1994 by calculating the energy gap and correlation length to five significant figures!

Later I will post some minor sociology comments, but I don’t want to spoil all the lovely science in this post.

Monday, October 3, 2016

A critical review of holographic claims about condensed matter

There is a very helpful review article
Demystifying the Holographic Mystique by Dmitri Khveshchenko

In order to motivate a proper full reading, I just give a few choice quotes.
Thus far, however, a flurry of the traditionally detailed (hence, rarely concise) publications on the topic have generated not only a good deal of enthusiasm but some reservations as well. Indeed, the proposed 'ad hoc' generalizations of the original string-theoretical construction involve some of its most radical alterations, whereby most of its stringent constraints would have been abandoned in the hope of still capturing some key aspects of the underlying correspondence. This is because the target (condensed matter) systems generically tend to be neither conformally, nor Lorentz (or even translationally and/or rotationally) invariant and lack any supersymmetric (or even an ordinary) gauge symmetry with some (let alone, large) rank-N non-abelian group.
Moreover, while sporting a truly impressive level of technical prowess, the exploratory 'bottom-up' holographic studies have not yet helped to resolve such crucially important issues as:
• Are the conditions of a large N, (super)gauge symmetry, Lorentz/translational/rotational invariance of the boundary (quantum) theory indeed necessary for establishing a holographic correspondence with some weakly coupled (classical) gravity in the bulk?
• Are all the strongly correlated systems (or only a precious few) supposed to have gravity duals? 
• What are the gravity duals of the already documented NFLs? 
• Given all the differences between the typical condensed matter and string theory problems, what (other than the lack of a better alternative) justifies the adaptation 'ad verbatim' of the original (string-theoretical) holographic 'dictionary'?
and, most importantly: 
• If the broadly defined holographic conjecture is indeed valid, then why is it so? 
Considering that by now the field of CMT holography has grown almost a decade old, it would seem that answering such outstanding questions should have been considered more important than continuing to apply the formal holographic recipes to an ever increasing number of model geometries and then seeking some resemblance to the real world systems without a good understanding as to why it would have to be there in the first place. In contrast, the overly pragmatic 'shut up and calculate' approach prioritizes computational tractability over physical relevance, thus making it more about the method (which readily provides a plethora of answers but may struggle to specify the pertinent questions) itself, rather than the underlying physics.