I tend to avoid being on committees. However, this year I became chair of one, leading to this reflection. How do you find a balance between democracy and autocracy, between transparency and secrecy, and between efficiency and wasting people's precious time?

Over the years I have noticed committees can tend to one of two extremes.

1. Some are very democratic and transparent. All business is discussed in great detail. Votes are held about many things. Between meetings, committee members are emailed about the latest "urgent" matter, asked their opinion, and sometimes asked to vote to approve some small action.

The problem is this takes a lot of time. It would be quicker and more efficient if, on these small matters, the chair or a subgroup simply made a unilateral decision.

2. Some committees are secretive and merely "rubber stamp" a bunch of decisions that have already been made by the chair or a select subgroup of members. This is efficient, particularly with regard to the small matters. However, it is problematic when this happens for weightier matters for which committee members could have given useful input and/or have a significant stake in the outcome.

Is this a fair and useful characterisation?

Humorous aside: When I was a teenager, I remember seeing the satirical movie The Rise and Rise of Michael Rimmer. If I recall correctly the main character becomes a dictator by a devious strategy. First, he makes every citizen vote on every piece of government legislation. They quickly get sick of this and so pass all power and authority to him.

I am not sure what the appropriate balance is between the extremes. The compromise I have made so far for my current committee is that I have a shared folder in which I put all my documents relating to the committee. That way, members who want to can see what I am dealing with. However, between meetings I try to make unilateral decisions on small matters that I think they would agree with but don't want to be bothered with.

Having written this, I realised that similar issues actually arise in scientific collaborations, particularly international ones, where the collaborators rarely meet in person. From my experience with many different collaborators, I have noticed there is a challenge to find a balance between two extremes.

1. A collaborator is constantly asking others for approval of or suggestions on next steps [should I do this extra measurement or calculation?] and/or changes to a draft manuscript. For small initiatives or changes this can be very inefficient, particularly when there are a large number of co-authors. On the other hand, for large changes clear communication and discussion is important.

2. A collaborator communicates little and sometimes some of the co-authors see a draft manuscript that contains large sections (including methods and results) that had never been discussed before. This is problematic if one could have been asked earlier and had the opportunity to make useful comments before some major parts of the project were embarked upon. The horse has already bolted.

Again I am not sure what the balance is. Some depends on personalities, tastes in working styles, and respective expertise. But, one practical option is to have a shared Dropbox folder containing progress reports, detailed results, and draft manuscripts. Then, all the collaborators can peruse these as frequently as they want.

I welcome suggestions or experiences.

## Monday, March 30, 2015

## Friday, March 27, 2015

### Future challenges with nuclear quantum effects in water

Last October I enjoyed attending a meeting, Water: the most anomalous liquid at NORDITA. One of the goals of the workshop was to produce a review article, co-authored by about a dozen working groups, each covering a specific aspect of water. I was in the group on "Nuclear quantum effects in water", led by Tom Markland. I was worried that this goal was a bit too ambitious. After all, I am into modest goals! However, it is all coming together, a great credit to the organisers. Our group is now finalising our "chapter". An important and difficult task is to write something concrete and useful about future challenges and directions.


Here I give a few of my own biased tentative thoughts. Comments and suggestions would be very welcome.

Over the past decade there have been

*several significant advances* that are relevant to understanding nuclear quantum effects in water. It was only by writing this summary that I realised just how tangible and significant these advances are. I am not sure other fields I am familiar with have experienced comparable advances.

**Experiment.** Deep inelastic neutron scattering reveals the momentum distribution of protons, and can be compared to path integral simulations, as described here. Furthermore, this has illuminated competing quantum effects, as described here.

**Quantum chemistry.** New accurate intermolecular potential energy surfaces and force fields, such as MB-pol.

**Computational.** Path integral simulations. Besides significant increases in computational power [Moore's law] making simulation of much larger systems and better "statistics" possible, there have been significant methodological advances, such as Ring Polymer Molecular Dynamics and PIGLET.
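As a rough illustration of why such quantum simulation methods matter for water (my own back-of-the-envelope sketch, not taken from any of the papers above): for a harmonic O-H stretch one can compare the exact quantum and classical position fluctuations directly, and at room temperature the quantum spread is several times the classical one.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K
C_CM = 2.99792458e10    # speed of light, cm/s

def msd_ratio(omega_cm, T):
    """Ratio of quantum to classical <x^2> for a harmonic oscillator.

    Quantum:   <x^2> = (hbar / 2 m omega) coth(hbar omega / 2 kT)
    Classical: <x^2> = kT / (m omega^2)
    The mass cancels in the ratio, which is
        (hbar omega / 2 kT) * coth(hbar omega / 2 kT).
    """
    omega = 2 * math.pi * C_CM * omega_cm  # convert cm^-1 to rad/s
    x = HBAR * omega / (2 * KB * T)
    return x / math.tanh(x)

# O-H stretch (~3400 cm^-1) at 300 K: the ratio is about 8,
# i.e. zero-point motion completely dominates thermal motion.
print(msd_ratio(3400, 300))
```

In the high-temperature limit the ratio tends to 1, recovering the classical result, which is a quick sanity check on the formula.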

**New concepts and organising principles.** *Competing quantum effects* associated with the zero-point energy of O-H stretching and bending modes. The competition is particularly subtle in water, to the point that it can change the sign of isotope effects. Dynamical properties such as proton transport being dominated by *extremely rare events*, associated with short hydrogen bonds.

**Simple models.** The coarse-grained monatomic Water (mW) model captures many anomalies of classical water, showing their origin is in the tetrahedral bonding. A diabatic state model captures essential features of the potential energy surface of single hydrogen bonds, particularly the variation with the distance between oxygen atoms. The model also describes competing quantum effects.

These advances present some *significant opportunities and challenges.*

**Experiment.** Resolving the ambiguity associated with interpreting the deep inelastic neutron scattering experiments. Going from the data to robust (i.e. non-controversial) spatial probability distributions for protons, particularly ones involving proton delocalisation, would be valuable.

**Simulation.** The path integral simulations will only be as good as the potential energy surfaces that they use. For example, recent work shows how calculated isotope effects vary significantly with the DFT functional that is used. This is because the potential energy surface, particularly with respect to the proton transfer co-ordinate, is quite sensitive to the oxygen atom separation, and to the level of quantum chemical theory. This becomes particularly important for properties that are determined by rare events [i.e. thermal and quantum fluctuations to short hydrogen bonds].

**Simple models.** Monatomic Water (mW) is completely classical. It would be nice to have a quantum generalisation that can describe how the water phase diagram changes with isotope (H/D substitution). Note there is already a problem because mW is so coarse-grained that it does not contain the O-H stretch. On the other hand, mW does describe the librational modes, and these do make a significant contribution to quantum nuclear effects in water, as described here.

I welcome suggestions and comments.

## Thursday, March 26, 2015

### A basic but important research skill, 5: solving homework problems

Carl Caves has a helpful two-pager, tips for solving physics homework problems. It nicely emphasises the importance of drawing a clear diagram, dimensional analysis, thinking before you calculate, and checking the answer.

He also discusses moving from homework problems to "real world" problems, e.g. research. Then, just formulating the problem is crucial.

I wonder if the goals of some Ph.D projects might be revised if the supervisor and/or student simply combined dimensional analysis with a realistic order of magnitude estimate. Just doing the exercise might also significantly increase the student's understanding of the underlying physics.
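A concrete example of the kind of estimate I have in mind (my own illustration, not from Caves's notes): comparing the thermal de Broglie wavelength of a proton at room temperature to a typical bond length immediately tells you whether quantum effects can be neglected.

```python
import math

H = 6.62607015e-34         # Planck constant, J s
KB = 1.380649e-23          # Boltzmann constant, J/K
M_PROTON = 1.67262192e-27  # proton mass, kg

def thermal_de_broglie(mass, T):
    """Thermal de Broglie wavelength, lambda = h / sqrt(2 pi m k T), in metres."""
    return H / math.sqrt(2 * math.pi * mass * KB * T)

# For a proton at 300 K the wavelength is about 1 Angstrom,
# comparable to an O-H bond length (~0.96 Angstrom), so nuclear
# quantum effects cannot be dismissed out of hand.
print(thermal_de_broglie(M_PROTON, 300) * 1e10)  # in Angstroms
```

The whole "calculation" is one line, but it frames the physical question before any heavy machinery is brought in.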


## Wednesday, March 25, 2015

### Enhanced teaching of crystal structures

This past week I taught my condensed matter class about crystal structures and their determination by X-rays. This can be a little dry and old. Here are a few things I do to try and make things more interesting and relevant. I emphasise that many of these developments go beyond what was known or anticipated when Ashcroft and Mermin was written. Furthermore,


*significant challenges remain.*

Discuss: was the first X-ray crystallography experiment the most important experiment in condensed matter, ever?

Take crystal structure "ball and stick" models to the lectures.

Give a whole lecture on quasi-crystals.

Use the *bravais* program in Solid State Simulations to illustrate basic ideas. For example, the equivalence of each reciprocal lattice vector to an X-ray diffraction peak, to a family of lattice planes in real space, and to a set of Miller indices.

Show a crystal structure for a high-Tc cuprate superconductor and an organic charge transfer salt. Emphasize the large number of atoms per unit cell and how small changes in distances can totally change the ground state (e.g. superconductor to Mott insulator). Furthermore, these small changes may be currently beyond experimental resolution. This is very relevant to my research and that of Ben Powell.
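The equivalences that the *bravais* program illustrates can also be checked numerically. Here is a minimal sketch (my own, for a simple cubic lattice) of the standard construction b_i = 2π (a_j × a_k)/V and the interplanar spacing d_hkl = 2π/|G_hkl|.

```python
import numpy as np

def reciprocal_vectors(a1, a2, a3):
    """Reciprocal lattice vectors b_i = 2 pi (a_j x a_k) / V."""
    volume = np.dot(a1, np.cross(a2, a3))
    b1 = 2 * np.pi * np.cross(a2, a3) / volume
    b2 = 2 * np.pi * np.cross(a3, a1) / volume
    b3 = 2 * np.pi * np.cross(a1, a2) / volume
    return b1, b2, b3

def d_spacing(hkl, b1, b2, b3):
    """Spacing of the (hkl) family of lattice planes: d = 2 pi / |G_hkl|."""
    h, k, l = hkl
    G = h * b1 + k * b2 + l * b3
    return 2 * np.pi / np.linalg.norm(G)

# Simple cubic lattice with a = 1:
a = np.eye(3)
b1, b2, b3 = reciprocal_vectors(a[0], a[1], a[2])
print(d_spacing((1, 0, 0), b1, b2, b3))  # 1.0
print(d_spacing((1, 1, 0), b1, b2, b3))  # 1/sqrt(2)
```

Each reciprocal lattice vector G_hkl indexes one diffraction peak, and its length gives the spacing of the corresponding real-space planes, which is exactly the correspondence students should take away.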

Very briefly mention the Protein Data Bank, and its exponential growth over the past few decades. It now contains more than 100,000 bio-molecular structures. Mention the key concept that

*Structure determines Property determines Function.* Mention that although many structures resolve bond lengths to within 0.2-0.6 Angstroms, this just isn't good enough to resolve some important questions about chemical mechanisms related to function. I am currently writing a paper on an alternative "ruler" using isotopic fractionation.

Next year I may say something about the importance of synchrotrons and neutron sources, and crystallographic databases such as the Cambridge Structural Database, which contains more than 700,000 structures for small organic molecules and organometallics.

## Tuesday, March 24, 2015

### Is liquid 3He close to a Mott-Hubbard insulator transition?

Is it ever a "bad metal"?

Liquid 3He mostly gets attention because at low temperatures it is a Fermi liquid [indeed it was the inspiration for Landau's theory] and because it becomes a superfluid [with all sorts of broken symmetries].

How strong are the interactions? How "renormalised" are the quasi-particles?

The effective mass of the quasi-particles [as deduced from the specific heat] is about 3 times the bare mass at 0 bar pressure and increases to 6 times at 33 bar, when it becomes solid. The compressibility is also renormalised and decreases significantly with increasing pressure, as shown below.

This led Anderson and Brinkman to propose that 3He was an "almost localised" Fermi liquid. Thirty years ago, Dieter Vollhardt worked this idea out in detail, considering how these properties might be described by a lattice gas model with a Hubbard Hamiltonian. The system is at half filling, with U increasing with pressure; the solidification transition (complete localisation of the fermions) has some connection to the Mott transition. All his calculations were at the level of the Gutzwiller approximation (equivalent to slave bosons). [The figure above is taken from his paper].

A significant result from the theory is that it describes the weak pressure dependence and value of the Sommerfeld-Wilson ratio [which is related to the Fermi liquid parameter F_0^a].

At ambient pressure U is about 80 per cent of the critical value for the Mott transition.
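The Gutzwiller (Brinkman-Rice) result behind this number is m*/m = 1/[1 - (U/U_c)^2]. A quick numerical check (my own sketch, just the standard formula) shows that U/U_c ≈ 0.8 indeed gives an effective mass of about 3, consistent with the specific heat value at 0 bar quoted above.

```python
def mass_enhancement(u):
    """Brinkman-Rice mass enhancement m*/m = 1 / (1 - u^2), with u = U/U_c < 1."""
    if not 0 <= u < 1:
        raise ValueError("requires 0 <= U/U_c < 1 (metallic side of the transition)")
    return 1.0 / (1.0 - u ** 2)

print(mass_enhancement(0.8))   # ~2.8, close to the measured m*/m ~ 3 at 0 bar
print(mass_enhancement(0.95))  # ~10: the mass diverges as U/U_c -> 1 (Mott transition)
```

The divergence as u approaches 1 is the Brinkman-Rice picture of the Mott transition: quasi-particles become infinitely heavy and localise.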

Vollhardt, Wolfle, and Anderson also considered a more realistic situation where the system is not at half-filling. Then, the doping is determined by the ratio of the molar volume of the liquid to the molar volume of the solid [which by definition corresponds to half filling].

Later Georges and Laloux argued 3He is a Mott-Stoner liquid, i.e. one also needs to take into account the exchange interaction and proximity to a Stoner ferromagnetic instability.

If this Mott-Hubbard picture is valid then one should also see a crossover from a Fermi liquid to a "bad metal" with increasing temperature. Specifically, above some "coherence" temperature T_coh, the quasi-particle picture breaks down. For example, the specific heat should increase linearly with temperature up to a value of order R (the ideal gas constant) around T_coh, and then decrease with increasing temperature.

Indeed one does see this crossover in the experimental data shown in the figure below, taken from here.

Aside: the crossing point in the family of curves is an example of an isosbestic point.

Extension of the Vollhardt theory to finite temperatures was done by Seiler, Gros, Rice, Ueda, and Vollhardt.

One can also consider 3He in two dimensions. John Saunders and his group have done a beautiful series of experiments on monolayers and bilayers of 3He. The data below is for a monolayer, of different layer densities, taken from here. They suggest that as one tunes the density one moves closer to the Mott transition.

The experiments on bilayers deserve a separate post.


## Monday, March 23, 2015

### Two new books on career advice for Ph.Ds

The Professor Is In: The Essential Guide To Turning Your Ph.D. Into a Job by Karen Kelsky.

The author left a tenured position at a research university and now has an excellent blog and runs a career advice consulting business, for people in academia, both those who want to stay and those who want to (or have to) leave.

Navigating the Path to Industry: A Hiring Manager's Advice for Academics Looking for a Job in Industry by M.R. Nelson

Has anyone read either book? I welcome comments.


## Saturday, March 21, 2015

### The teaching bag

When I started teaching, sometimes I would arrive at the lecture to find that I had left behind something I needed (chalk!, laser pointer, computer connector, textbook, notes, ...).

Do I go back to my office and get it and start the lecture late, or do without?

Sometimes these are things that should be in the room but are not. (e.g. chalk, erasers, or markers).

Even worse was to discover I was missing something in the middle of the lecture!

I eventually came up with a simple solution. Have a separate bag in which I store absolutely everything I need or may need (white board markers, eraser, Mac adapters, text, clicker receiver, course profile, ....)

When I leave for the lecture I don't have to remember or find all these things.

For things like Mac adapters I have an extra one just for the bag.

It is a simple thing but it does reduce anxiety and problems.


## Friday, March 20, 2015

### Physicists are bosons; mathematicians are fermions

The first observation is that each mathematician is a special case, and in general mathematicians tend to behave like “fermions”, i.e. avoid working in areas which are too trendy, whereas physicists behave a lot more like “bosons”, which coalesce in large packs and are often “over-selling” their doings, an attitude which mathematicians despise.

Alain Connes, Advice to beginning mathematicians

I learnt this quote today, courtesy of Elena Ostrovskaya, who gave today's Physics Colloquium at UQ.

## Wednesday, March 18, 2015

### An alternative to cosmic inflation

On Friday Robert Mann gave a very nice colloquium at UQ, The Black Hole at the Beginning of Time. The video is below.


The (end of) the talk is based on the recent paper

Out of the white hole: a holographic origin for the Big Bang

Razieh Pourhasan, Niayesh Afshordi, and Robert B. Mann

The key idea is to consider our universe as the 4-dimensional boundary (brane or hologram) of a 5-dimensional space-time in which there is a black hole.

In our universe one then has not just 4D gravity and matter, but also induced gravity and an effective fluid from the 5D "bulk".

(For better or worse) this work was recently featured on the cover of Scientific American.

Robert covered a massive amount of material moving through special relativity, general relativity, black holes, big bang, cosmology, recent results from the Planck satellite, inflation, the multiverse,... and finally his alternative model.

I took several pages of notes.

He went overtime. I think this was one of the rare cases where I did not mind the speaker doing it.

Besides learning some interesting physics, what was most interesting to me was the refreshing way the work was presented. The tone was something like, "cosmology has some amazing successes but there are a few paradoxes, inflation is an interesting idea but also presents some problems, ...fine tuning is a challenge, ... so let me throw out a different idea.... it is a bit weird... but lets see where it goes ... it also has some strengths and weaknesses .... I am not sure this is better than inflation, but it is worth looking at." There was no hype or sweeping things under the rug.

Many in the audience were undergrads. I thought it was a great talk for them to hear. It was largely tutorial, there was some fascinating physics, connections to experiment were emphasised, healthy skepticism was modelled, and there was no hype.

I also liked the talk because it confirms my prejudice that people need to work harder, more creatively, and more critically, on foundational problems in cosmology. Dark matter, dark energy, inflation, and fine tuning are all really weird. They may be right. But they may not be. I think just accepting them as the only option and regressing to even weirder ideas like the multiverse is a mistake. [Of course, it is easy as an outsider to tell colleagues to work harder and more creatively.]

The physical model of the early universe that was presented was

**completely different** to inflation. Yet it solves most of the same problems (horizon, flatness, and no monopoles). Its biggest problem is that it does not predict the observed 4 per cent deviation from scale invariance.

The most important and interesting bits are from about 52:00 to 58:00.

## Tuesday, March 17, 2015

### Has the quality of Physical Review B increased?

I was recently asked to complete an online questionnaire about my experiences with Physical Review B: as an author, reader, and referee. Many readers probably also did. I felt the survey was fishing for the conclusion, "PRB has increased in quality lately." I was a bit ambivalent in my responses. However, on further reflection I feel I now agree with this conclusion. But I was disappointed I did not get to make the following point, which I think is very important.

**With the rise of High Impact Factor Syndrome, luxury journals, hype, and fashions, I think the scientific importance and stature of PRB has increased significantly in the past 20 years.** It provides an avenue to publish solid, detailed, honest, reliable, careful, non-sexy research without the need to indulge in hype, speculation, or hiding important details. This is the research that will have real scientific impact.

Making generalisations is difficult and dangerous because my experience is limited to reading, submitting, and refereeing just a few papers each year. This is an extremely small percentage of the total. But, here are a few impressions. About 20 years ago I sent my first paper to PRB. For the next decade I got pretty generic referee reports, along the lines of "this is interesting work and should be published." However, about 12 years ago I was co-author of some papers submitted to PRA. I immediately noticed a difference. Some of the referees had read the paper in detail and sometimes had constructive scientific criticisms that were extremely helpful, including finding subtle technical errors. As I blogged before, these are the best referee reports. I think the past few years I have now been getting some reports like this from PRB. During these 20 years I have been writing detailed critical referee reports for PRB. Sometimes on a resubmission I see the other referee reports. For the first decade I was disappointed that these were usually generic. Now sometimes they are more detailed and critical. These positive experiences make me tone down (just a little) my polemic that we should abolish journals.

So, I welcome other perspectives. Has PRB increased in quality?

## Monday, March 16, 2015

### Relative merits of different numerical methods for correlated fermions in 2D

This helpful table appears in a review article

Studying Two-Dimensional Systems with the Density Matrix Renormalization Group

E.M. Stoudenmire and Steven R. White

The review also presents comparisons of 2D DMRG with methods such as PEPS and MERA (heavily promoted by quantum information theorists), which suggest that 2D DMRG performs significantly better.

I thank Seyed Saadatmand for bringing the table to my attention.

Labels:
computing,
entanglement,
strong correlations

## Saturday, March 14, 2015

### The peaceful atom is a bomb

Previously, I wrote about my concern that little attention and publicity is given these days to issues of nuclear security and proliferation. Hence, it was good to see the cover (and a lead editorial) of *The Economist* this past week.

On related matters, there is an interesting article [and cover story] in the February Physics Today, Pakistan's nuclear Taj Mahal, by Stuart W. Leslie

Inspired by the promise of Atoms for Peace, the Pakistan Institute of Nuclear Science and Technology eventually succumbed to the demands of the country's nuclear weapons program.

One thing I learnt was the central role that Abdus Salam played. I found the following rather disturbing.

Salam, though still the director of the ICTP, organized the theoretical-physics group that performed the sophisticated calculations for the bomb, and he personally asked his former student and protégé Riazuddin to head it. Riazuddin, then teaching at the University of Islamabad (now Quaid-i-Azam University), took several trips to the US, where he collected unclassified documents on nuclear weapons design. .... Salam left the details to others, though he did open the ICTP library to Riazuddin's group and kept in close touch with its members. Riazuddin, describing his research team, later acknowledged, "We were the designers of the bomb.... "

A thoughtful and careful analysis of the implications of India and Pakistan obtaining nuclear weapons was given in 2000 by Amartya Sen.

Labels:
energy research,
Majority world,
nuclear physics,
politics

## Thursday, March 12, 2015

### Teaching students to be more critical

One of the many disturbing things I find about science today is people claiming that, because a particular theory agrees with a particular experiment, the theory must be valid.

Little consideration is given to the possibility that the agreement may just be an accident. The "correct" theory may actually be quite different. They may be getting the "right" answer for the "wrong" reasons.

I am never sure if the people who make these kinds of claims are sincere, naive, and/or just engaging in marketing.

Students need to be taught to be more critical.

I am currently teaching an advanced undergraduate course on solid state physics, PHYS4030. It follows Ashcroft and Mermin closely.

I have just taught the Drude and Sommerfeld models. Drude provides a nice example of getting the "right" answer for the "wrong" reasons. In both models the thermal conductivity is given by the following expression from kinetic theory

kappa = (1/3) c_v v^2 tau

where c_v is the specific heat capacity (per unit volume), v^2 is the mean square velocity of the heat carriers (a measure of their average kinetic energy), and tau is the scattering time.

In Drude, the first factor is independent of temperature and "large", being of order k_B.

The second factor is proportional to temperature, and "small".

However, in the Sommerfeld model, which gets the physics correct, the specific heat is proportional to temperature, "small", and of order k_B T/T_F, where T_F is the Fermi temperature.

The average kinetic energy is independent of temperature and "large", being proportional to the Fermi energy.

The two factors in the Drude model are each off by roughly two orders of magnitude, but these errors cancel beautifully, so it gives an answer that agrees with Sommerfeld and with experiment to within a factor of two!

I stressed to the students that this is a good example of how sometimes you get the right answer for the wrong reason. The fact that your theoretical model agrees with a particular experiment does not prove it is correct.

This underscores the need for the method of multiple working hypotheses.
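The cancellation is easy to check numerically. The sketch below is mine, not from the lecture; the copper-like parameters (T, T_F, n, tau) are illustrative assumptions. It evaluates kappa = (1/3) c_v v^2 tau in both models.

```python
import math

# Boltzmann constant (J/K) and electron mass (kg)
k_B = 1.380649e-23
m_e = 9.109e-31

# Illustrative parameters, roughly appropriate for copper at room
# temperature; these numbers are my assumptions, not from the post.
T = 300.0      # temperature (K)
T_F = 8.2e4    # Fermi temperature (K)
n = 8.5e28     # conduction electron density (m^-3)
tau = 2.7e-14  # relaxation time (s)

# Kinetic theory: kappa = (1/3) * c_v * <v^2> * tau

# Drude (classical ideal gas): specific heat "large", kinetic energy "small"
c_v_drude = 1.5 * n * k_B          # ~ k_B per electron
v2_drude = 3.0 * k_B * T / m_e     # <v^2> set by temperature
kappa_drude = c_v_drude * v2_drude * tau / 3.0

# Sommerfeld (degenerate Fermi gas): specific heat "small", kinetic energy "large"
c_v_somm = 0.5 * math.pi**2 * n * k_B * (T / T_F)  # ~ k_B * T/T_F per electron
v2_somm = 2.0 * k_B * T_F / m_e                    # v_F^2, set by E_F = k_B * T_F
kappa_somm = c_v_somm * v2_somm * tau / 3.0

print(c_v_drude / c_v_somm)      # Drude overestimates c_v (~80x here)
print(v2_somm / v2_drude)        # ... and underestimates <v^2> (~180x here)
print(kappa_somm / kappa_drude)  # net ratio 2*pi^2/9 ~ 2.2: the errors cancel
```

The ratio kappa_Sommerfeld / kappa_Drude works out to exactly 2 pi^2 / 9 ≈ 2.2, independent of the parameters, which is why Drude's luck looks so good.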


## Wednesday, March 11, 2015

### A brilliant insight about quantum decoherence in electronic circuits

Yesterday, Matthew Woolley gave an interesting Quantum science seminar at UQ about some of his recent work on Photon assisted tunnelling with non-classical light.

I just want to focus on one point that was deeply embedded in the talk. It is an idea that is profound and central to the physics of quantum electronic circuits. The idea is so old now that its profundity and brilliance may be lost on a new generation.

The idea and result are easiest for me to explain in terms of the figure below, which describes a superconducting (Josephson junction) qubit connected to an electrical circuit. It is taken from this review.

One can quantise the electromagnetic field and consider a spin-boson model to describe decoherence and dissipation of the qubit. This is associated with a spectral density that is proportional to frequency, with a dimensionless prefactor alpha, which for this circuit is given by

where R_V is the electrical resistance of the circuit, R_K is the quantum of resistance, and the C's are capacitances.

Similar physics is at play in normal tunnel junctions (see for example this important paper, highlighted by Matthew in his talk).

*Why do I find this profound?*

First, this is a very simple formula that depends only on macroscopic parameters of the electrical circuit. One does not have to know anything about the microscopic details of all the different electronic degrees of freedom in the circuit or how they individually couple to the qubit. I find this surprising.

Second, the underlying physics is the fluctuation-dissipation theorem. The quantum noise in the electronic circuit is related to fluctuations in the current. The Kubo formula and the fluctuation-dissipation relation tell us that the fluctuations in the current are essentially determined by the conductivity [the inverse of the resistivity].

Who was the first to have this insight and calculate this?

I feel it was Caldeira and Leggett, but I can't find the actual equation with the circuit resistance in their 1983 paper.

Or did someone else do this earlier?

Because of the above, whenever the spectral density depends linearly on the frequency, Leggett (and now everyone) calls it ohmic dissipation.

I first learnt this through the thesis work of my student Joel Gilmore, as described in this review. There we considered a more chemical problem: two excited electronic states of a molecule in a polar dielectric solvent. The coupling to the environment is completely specified in terms of the frequency-dependent dielectric constant of the solvent (and some geometric factors).

Update: Caldeira answers the question in a comment below.
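As a reminder of the convention: an ohmic bath has a spectral density linear in frequency up to some high-frequency cutoff. A common form in the spin-boson literature is sketched below; treat the factor of pi/2 and the exponential cutoff as assumptions, since prefactor and cutoff conventions vary between papers.

```latex
J(\omega) = \frac{\pi}{2}\,\alpha\,\omega\, e^{-\omega/\omega_c}
```

At frequencies well below the cutoff omega_c, the dimensionless alpha alone then characterises the strength of the decoherence.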

Labels:
decoherence,
history,
quantum foundations,
solvation,
tunneling

## Monday, March 9, 2015

### The art and discipline of a good colloquium

There is a helpful and challenging article The Physics of Physics Colloquia by James Kakalios on The Back Page of the APS News. It is based around old notes Suggestions for giving talks by Robert Geroch.

Both Kakalios and Geroch are worth reading in full, but here are a few random things that stood out to me. [Things I need to keep working on].

"What is the key take-away point that you want to impress on everyone when they leave your talk?"

Divide the talk up, centred around 3 or 4 key messages.

"Figures are easier to understand than words."

"You have been staring at these data and plots for years, but many in the audience have not."

Don't include more than five non-trivial equations.

"It is almost always a disaster to run over time".

Much of this may seem "common sense". However, as management guru Stephen Covey said, "Common sense is not common practice." Preparing and giving a good talk requires a lot of discipline, particularly with regard to cutting out material.


## Friday, March 6, 2015

### Is DMFT "the only game in town"?

Last week Peter Woit kindly recommended my blog to his readers. This immediately doubled the number of daily page views for several days thereafter!

Peter also drew attention to my recent paper with Nandan Pakhira that shows that the charge diffusion constant in a bad metal violates a conjectured lower bound. This bound was conjectured, partly on the basis of arguments from string theory techniques [holographic duality, AdS-CFT]. Our calculations were all based on a Dynamical Mean-Field Theory (DMFT) treatment of the Hubbard model.

One commenter "Bernd" wrote

The violation of the holographic duality bound is based on DMFT calculations, which is a bit like string theory for strongly correlated fermions in the sense that it is sometimes sold as "the only game in town". Nobody knows how accurate these methods really are.

For background, "the only game in town" refers to a common argument of string theorists that string theory must be correct because there are no other options for a mathematically self-consistent theory of quantum gravity. Woit's blog and book contain many counter-arguments to this point of view, for example here. Ironically, a commenter Carl points out

The quote "the only game in town" is exceptionally apt. It's most associated with con man "Canada Bill" Jones, master of the Three Card Monte, who ironically was himself addicted to gambling and lost his money to better professionals as fast as he took it from marks. Supposedly, on being advised by a friend that the Faro game he was losing money at was rigged, he replied "I know, but it's the only game in town!"

I wish to make two points in response to "Bernd".

First, I think we have a pretty good idea of how accurate DMFT is.

Second, I think any analogy between DMFT and string theory is very weak.

We know that DMFT is exact in infinite dimensions. It is an approximation in lower dimensions that can be compared to the results of other methods. Cluster versions of DMFT give a systematic way to look at how much the neglect of spatial correlations in single-site DMFT matters. It has also been benchmarked against a range of other numerical methods.

The "sociological" comparison between string theory and DMFT is debatable.

I have never heard anyone claim it is "the only game in town". On the other hand, there are probably hundreds of talks that I have never gone to where someone might have said this.

I am a big fan of DMFT. For example, I think combining DMFT and DFT is one of the most significant achievements of solid state theory of the past 20 years. Yet, this is a long way from claiming it is the "only game in town".

It is worth comparing the publication lists of string theorists and DMFT proponents.

String theorists publish virtually only string theory papers, reflecting their belief that it is the "only game in town". However, if you look at the publication lists of DMFT originators and proponents, such as Georges, Kotliar, Vollhardt, Metzner, Jarrell, Millis, ... you will see that they also publish papers using techniques besides DMFT. They know its limitations.

If you look at the research profiles of physics departments the analogy also breaks down. Virtually only string theorists get hired to work on "beyond the Standard Model" physics. In contrast, when it comes to correlated fermions, DMFT practitioners are a minority, and not even represented in many departments.

Most importantly, there are no known comparisons between experimental data and string theory. DMFT is completely different. As just one example, the figure below, taken from this paper, compares DMFT calculations (left) to experiments (right) on a specific organic metal close to the Mott insulator transition.

This organic material is essentially two-dimensional, a long way from infinite dimensions!

Furthermore, the parameter regime considered here corresponds to the same parameter regime [bad metal] in which DMFT gives violations of the conjectured bound on the diffusion constant.

DMFT is not "the only game in town". We have a pretty good idea of how and when it is reliable.

## Wednesday, March 4, 2015

### The most discouraging thing about the first week of semester

This week classes started for the first semester of the year at UQ.

Campus is swarming with students.

I live about three kilometres from campus, but the buses don't even stop by the time they pass near my home because they are already full. I have to find other ways to get to campus.

There are no spare seats in the library. The lines at the food outlets are very long.

But, this **overcrowding is not the discouraging thing**.

It is that two weeks from now the buses won't be full. In about six weeks they will be half full. I will have no trouble getting a seat.

Why? After a few weeks a noticeable fraction of students have decided it is not worth attending lectures.

Previously I made a simple and concrete proposal to address the issue.


## Monday, March 2, 2015

### Quantum criticality near the Mott transition in organics?

There is an interesting paper

Quantum criticality of Mott transition in organic materials

Tetsuya Furukawa, Kazuya Miyagawa, Hiromi Taniguchi, Reizo Kato, Kazushi Kanoda

Some of the results were flagged several years ago by Kanoda in a talk at KITP.

Key to the analysis is theoretical concepts developed in three papers based on Dynamical Mean-Field Theory (DMFT) calculations

Quantum Critical Transport near the Mott Transition

by H. Terletska, J. Vučičević, Darko Tanasković, and Vlad Dobrosavljević

Finite-temperature crossover and the quantum Widom line near the Mott transition

J. Vučičević, H. Terletska, D. Tanasković, and V. Dobrosavljević

Bad-metal behavior reveals Mott quantum criticality in doped Hubbard models

J. Vučičević, D. Tanasković, M. J. Rozenberg, and V. Dobrosavljević

The experimental authors consider three different organic charge transfer salts that undergo a metal-insulator transition as a function of pressure, with a critical point at a finite temperature. One of the phase diagrams is below.

The dots denote the Widom line determined at each temperature by the inflexion point in the resistivity versus pressure curve shown below.

They have different insulating ground states (spin liquid or antiferromagnet) and different shapes for the Widom line associated with the metal-insulator crossover above the critical temperature. Yet, the same universal behaviour is observed for the resistivity.

Aside: there is a subtle issue I raised in an earlier post. For these materials the resistivity versus temperature is non-monotonic, raising questions about what criteria you use to distinguish metals and insulators.

Here one sees that if the resistivity is scaled by the resistivity along the Widom line, then it becomes monotonic and one sees a clear distinction/bifurcation between metal and insulator.

This "collapse" of the data is very impressive.

The above universal curves then determine **critical exponents** through a relation

+ for insulator, - for metal

z = dynamical exponent

nu = exponent for the correlation length

The data give z nu = 0.68, 0.62, and 0.49 for the three different compounds.

This compares to the value of z nu = 0.57 obtained in the DMFT calculations, and 0.67 from a field theory from Senthil and collaborators. In contrast, Imada's "marginal theory" gives 2 and Si-MOSFETs give 1.67.
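To make the scaling analysis concrete, here is a toy numerical sketch. This is my construction, not the authors' procedure: synthetic insulating-branch data are generated to obey the scaling ansatz rho/rho_c = f(T/T_0), with an assumed toy scaling function f(x) = exp(1/x) and crossover scale T_0 = |dP|^{z nu}; the exponent z nu is then recovered from the slope of log T_0 versus log |dP|.

```python
import numpy as np

z_nu_true = 0.6  # assumed exponent, close to the values quoted above

# Synthetic insulating-branch curves obeying rho/rho_c = exp(T_0/T)
# with crossover scale T_0 = |dP|^{z nu}, at four "pressures" dP.
dPs = np.array([0.05, 0.1, 0.2, 0.4])
T = np.linspace(0.01, 2.0, 2000)

T0_measured = []
for dP in dPs:
    T0 = dP ** z_nu_true
    rho = np.exp(T0 / T)  # scaled resistivity, diverges as T -> 0
    # Locate the temperature where rho/rho_c = e, i.e. T = T_0.
    # rho decreases with T, so reverse both arrays for np.interp.
    T0_measured.append(np.interp(np.e, rho[::-1], T[::-1]))

# The slope of log T_0 vs log |dP| recovers the exponent z*nu.
slope, intercept = np.polyfit(np.log(dPs), np.log(T0_measured), 1)
print(round(slope, 3))  # ~ 0.6
```

With real data, T_0 is a fit parameter for each pressure, and the quality of the resulting collapse, rather than being guaranteed by construction as it is here, is the evidence for quantum criticality.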

But the devil may be in the details. Some caution is in order because for me the paper raises a number of questions or concerns.

The interlayer resistivity near the critical point is about 0.1-1 Ohm-cm. Note that this is about three orders of magnitude larger than the Mott-Ioffe-Regel limit.

[Aside: but caution is in order because it is very hard to accurately measure intralayer resistivity in highly anisotropic layered materials.]

Here, a rather limited temperature range is used to determine the magnitude of the critical exponents. The plot below shows that less than a decade was used (e.g. 75-115 K). Furthermore, the authors focus on temperatures away from the critical temperature, Tc.

Ideally, critical exponents are determined over several decades and as close to the critical point as possible. The gold standard is superfluid helium in the space shuttle!

I thank Vlad for bringing the paper to my attention.

Update (26 March).

Alex Hamilton sent the following helpful comment and picture and asked me to post it.


Not that my experience in the 2D metal-insulator transition community has jaded me, but my experience is that one has to be extremely careful interpreting such 'collapses'.

1. If you have 6 orders of magnitude on the Y axis, there is no way you can tell whether an individual trace misses its neighbours by a significant margin unless there is a *huge* overlap between datasets (which there usually is not) - e.g. dataset Y(1) missed Y(2) by 30% for all data points.

2. Almost anything can be made to scale. For example consider an insulator with hopping. By definition this will show scaling behaviour. Now consider a metal with linear or quadratic in T resistance correction due to phonons. This will also fit on a scaling curve as long as the range of data in each individual dataset does not change too much on the Y-axis for the range of data in the dataset. In other words, if the overlap is not large, then I can often get scaling behaviour just by adjusting T_0 for each dataset to make them fit a common curve, because dataset Y(1) isn't that different from dataset Y(2), and neither covers an order of magnitude on the Y-axis.
