Friday, May 30, 2014

Is the mobility of protons in water high?

It is all relative.

I often read in physical chemistry papers statements along the lines of "a major puzzle is the extremely high mobility of protons and hydroxide ions in liquid water ..... explaining this leads to consideration of non-diffusive transport mechanisms such as the Grotthuss mechanism."

Furthermore, a physics paper,  Ice: a strongly correlated system, cited by field theory enthusiasts [gauge theories, deconfinement, ....], states
ice exhibits a high static permittivity comparable with the one of liquid water, and electrical mobility that is large when compared to most ionic conductors. In fact, the mobility is comparable to the electronic conduction in metals.
It has taken a while for me to understand the real issues. Atkins' Physical Chemistry textbook actually has a helpful discussion, featuring the table below.


Thus, we see that the mobility of H+ and OH- (hydroxide) is about 3-7 times larger than that of other charged ions. This is hardly a gigantic effect!

Now, suppose the transport proceeds via diffusion. Then one can use the Stokes-Einstein relation to estimate the mobility of an ion in terms of the viscosity of the solvent (water): the mobility is u = ze/(6 pi eta r), i.e. it scales inversely with the "hydrodynamic radius" r of the diffusing particle. Atkins shows that this leads to reasonable estimates of the mobility [for the ions in the bottom three lines of the Table above] with "hydrodynamic radii" of about 2 Angstroms.
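To make the argument concrete, here is a rough back-of-the-envelope version of the estimate (a sketch using typical tabulated mobilities of the same order as those in Atkins' table, not his actual calculation):

```python
import math

# Stokes-Einstein estimate: for an ion of charge z e in a solvent of viscosity eta,
# the mobility is u = z e / (6 pi eta r), so the hydrodynamic radius is
# r = z e / (6 pi eta u). Illustrative numbers below.
e = 1.602e-19          # elementary charge (C)
eta_water = 0.89e-3    # viscosity of water at 25 C (Pa s)
mobilities = {"Li+": 4.0e-8, "Na+": 5.2e-8, "K+": 7.6e-8}  # m^2 s^-1 V^-1 (typical values)

for ion, u in mobilities.items():
    r = e / (6 * math.pi * eta_water * u)   # hydrodynamic (Stokes) radius for z = 1
    print(f"{ion}: hydrodynamic radius ~ {r * 1e10:.1f} Angstroms")
# Gives radii of roughly 1-2.5 Angstroms, consistent with the "about 2 Angstroms" above.
```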

But, H+ will be bound to H2O to form H3O+, which will be even larger than K+, and so the mobility should be even smaller than that of K+, not larger. OH- should be comparable.

Furthermore, if transport proceeded by diffusion, the mobility of protons in ice would be dramatically less, since the solvent is no longer a fluid. However, the mobility at -5 degrees C is only a factor of six less than at 25 degrees C, as reported here.

Hence, it seems that proton transport cannot proceed by diffusion and it is necessary to consider alternative mechanisms such as the Grotthuss one.

Next, I should comment on the absolute magnitude of the mobility compared to a simple lower bound for coherent "hopping" transport, e a^2/hbar ~ 1 cm^2/(V sec), characteristic of energy bands. The values for H+ and OH- are about a factor of 30-50 smaller than this lower bound. Thus, water and ice are not even bad metals, and the claim in the physics paper of a mobility comparable to electronic conduction in a metal is wrong.
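For orientation, here is the corresponding back-of-the-envelope number; the choice of a and the neglect of prefactors of order one are my assumptions:

```python
# Rough evaluation of the "coherent transport" mobility scale e a^2 / hbar.
e, hbar = 1.602e-19, 1.055e-34      # C, J s
for a in [1e-10, 2e-10, 3e-10]:     # lattice constant of 1-3 Angstroms
    mu = e * a**2 / hbar            # m^2 / (V s)
    print(f"a = {a * 1e10:.0f} A: e a^2/hbar ~ {mu * 1e4:.2f} cm^2/(V s)")
# ~0.1-1 cm^2/(V s), far above the tabulated H+ mobility of about 4e-3 cm^2/(V s).
```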

Wednesday, May 28, 2014

The statistical mechanics of economic inequality

Economist Thomas Piketty has recently become a celebrity because of his new 700-page best-selling book, Capital in the 21st century.

The latest issue of Science has a special section about "the science of inequality". It features a review by Piketty and his longtime Berkeley collaborator Emmanuel Saez. In the introduction the editors make an important point about an exciting future for economic research:
And in the past decade in developed capitalist nations, intensive effort and interdisciplinary collaborations have produced large data sets, including the compilation of a century of income data and two centuries of wealth data....
It is only a slight exaggeration to liken the potential usefulness of this and other big data sets to the enormous benefits of the Human Genome Project.
 
Researchers now have larger sample sizes and more parameters to work with, and they are also better able to detect patterns in the flood of data. Collecting data, organizing it, developing methods of analysis, extracting causal inferences, formulating hypotheses—all of this is the stuff of science and is more possible with economic data than ever before. 
Hopefully, economics will move beyond its current situation where there are popular introductory textbooks that contain virtually no real data, just schematic curves. Previously I wrote about how different the book Poor Economics is.

Econophysics does not get an article but a one page story with the graph below. It is the work of Victor Yakovenko [whom I know for his nice work on angle-dependent magnetoresistance!] and is nicely described in a Reviews of Modern Physics Colloquium article. The essential "physics" is that the income distribution follows an exponential [Boltzmann] distribution, which can be derived by assuming that a fixed total amount of money [energy] is redistributed among all citizens through random "collisions/exchanges".
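The essential statistical mechanics is easy to simulate. Below is a minimal sketch of a random-exchange model of the kind reviewed in the Colloquium; the number of agents, the exchange rule, and the parameters are illustrative choices of mine, not taken from Yakovenko's work.

```python
import numpy as np

# Minimal money-conserving exchange model: N agents start with equal money; at each
# step a random pair pools its money and splits it randomly. The stationary
# distribution is exponential, P(m) ~ exp(-m/T), with "temperature" T = average money.
rng = np.random.default_rng(0)
N, steps = 5_000, 1_000_000
money = np.full(N, 100.0)

for _ in range(steps):
    i, j = rng.integers(0, N, size=2)
    if i == j:
        continue
    pot = money[i] + money[j]
    f = rng.random()                      # random split of the pair's combined money
    money[i], money[j] = f * pot, (1 - f) * pot

T = money.mean()                          # average money per agent (here 100)
hist, edges = np.histogram(money, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
for c, h in list(zip(centres, hist))[:5]:
    print(f"m = {c:6.1f}: simulated {h:.5f}, Boltzmann {np.exp(-c / T) / T:.5f}")
```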


Tuesday, May 27, 2014

Quantum hydrogen bonds in antiferroelectric crystals

Ferroelectric materials develop a non-zero electric polarisation below a transition temperature, sometimes referred to as the Curie temperature Tc, in analogy with ferromagnetic materials. Some of these materials have technological applications, including in dynamic RAM, as reviewed here.

The possibility of ferroelectricity and antiferroelectricity associated with different orderings of protons in some of the high-pressure phases of ice is also a fascinating subject. In other materials hydrogen bonds also play a central role. Furthermore, the quantum dynamics of the protons is key, as revealed by isotope effects, where H is replaced with deuterium (D). For example, if you look in Ashcroft and Mermin, Table 27.4, you see that the transition temperatures of potassium dihydrogen phosphate and potassium dideuterium phosphate are 123 K and 213 K, respectively. This is a huge isotope effect! What does this tell us? What is its origin?

First, if we treat the nuclei classically, as in the Born-Oppenheimer approximation, the chemical bonding and the potential energy surface are identical in the two crystals. What changes with H/D substitution? The vibrational energies (and quantum zero-point energy) of modes associated with the H/D. However, it remains controversial as to exactly what is going on. For example, how important is quantum tunnelling? See, for example, this Nature paper from 1990, which argues it is all to do with changes in bond lengths.

An H-bonded anti-ferroelectric material is squaric acid. What a cool name! The molecule and associated H-bonding pattern are shown below.


Dark brown, red, and light brown circles represent carbon,  oxygen, and hydrogen atoms, respectively. With H/D substitution, the transition temperature changes from 373 K to 520 K, again a huge effect.

The above figure is taken from a recent paper
Ab initio simulations of hydrogen-bonded ferroelectrics: Collective tunneling and the origin of geometrical isotope effects 
K. T. Wikfeldt and Angelos Michaelides

They perform path integral molecular dynamics simulations using potentials derived from density functional theory with the vdW-DF2 functional. They mention that the results are significantly different from those obtained with the PBE functional, MP2 theory, and the random phase approximation. This is consistent with earlier work, discussed in an earlier post, that "ab initio" results for hydrogen bonding, particularly the energy barrier for proton transfer, vary significantly with the level of approximation.

Two of the significant findings of this paper are:

1. There is a large secondary geometric isotope effect (SGIE), i.e., the distance d_OO shown above [the separation of the oxygen atoms that share the proton] increases by about 0.02 Angstroms with H/D substitution, as found in experiment.

2. The H/D substitution leads to the deuterium being more localised than the proton. Thus the antiferroelectric phase is less stable for the H material, leading to a lower transition temperature, Tc, as is observed experimentally. [Since they consider a supercell of 3 units they cannot calculate Tc but can see how the probability distribution for the H and D varies with temperature.]

These results are of particular interest to me, because they are consistent with the findings of my recent H-bonding paper. For a simple semi-empirical model potential we found that when the d_OO distance [the donor-acceptor distance R in our paper] is about 2.55 A, as in squaric acid, the SGIE is about 0.02 Angstroms [Figure 7].

Furthermore, for R about 2.45-2.55 A, this SGIE significantly changes the underlying potential for H/D motion, meaning that the level of delocalisation (ground state probability distribution) changes significantly. Roughly the origin of this large effect is that for this R range the zero-point energy of H is comparable to the height of the energy barrier, whereas for D it is below the barrier. This difference leads to a large vibrational frequency isotope effect [Figure 8].
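The competition between zero-point energy and barrier height can be illustrated with a toy one-dimensional calculation (a sketch with illustrative parameters, not the potential from either paper): solve the Schrodinger equation for a double well and compare the ground-state energies for masses 1 (H) and 2 (D).

```python
import numpy as np

# Toy double well V(x) = V0 ((x/x0)^2 - 1)^2, solved by finite-difference
# diagonalisation, in units where hbar = m_H = 1. Parameters are illustrative.
V0, x0 = 3.0, 1.0          # barrier height and well position (arbitrary units)

def ground_state_energy(mass_ratio, n=1200, L=4.0):
    """Lowest eigenvalue for a particle of mass m = mass_ratio * m_H."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    V = V0 * ((x / x0) ** 2 - 1.0) ** 2
    kin = 0.5 / mass_ratio / dx**2                      # hbar^2 / (2 m dx^2)
    H = (np.diag(2 * kin + V)
         - kin * np.eye(n, k=1)
         - kin * np.eye(n, k=-1))
    return np.linalg.eigvalsh(H)[0]

E_H = ground_state_energy(1.0)    # "proton"
E_D = ground_state_energy(2.0)    # "deuteron" (twice the mass)
print(f"barrier = {V0:.1f}, E0(H) = {E_H:.2f}, E0(D) = {E_D:.2f}")
# For these parameters the H zero-point energy is a sizeable fraction of the barrier,
# while the D value lies further below it, so D is more localised in one well.
```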

Saturday, May 24, 2014

Are scientific press conferences bad?

I fear that may be the case.
Previous cases of premature announcements include cold fusion, "life on Mars" [really dead germs on meteorites from Mars], neutrinos travelling faster than the speed of light, a Caltech theoretical chemist claiming he had solved high-Tc superconductivity,.....

In March, BICEP2 scientists called a press conference to announce they had discovered evidence for cosmic inflation. This coincided with them placing a paper on the arXiv and Stanford releasing a YouTube video, which subsequently went viral, showing Andrei Linde being presented with the exciting news.

However, now questions are being asked. The chronology is described by Peter Woit on Not Even Wrong and there is a nice discussion of the science by Matt Strassler. The key issue seems to be the method used for subtracting the background signal due to galactic dust. It seems that BICEP2 scientists estimated this background signal by "scraping data" off a PowerPoint slide from a talk given by their Planck competitors! But was this a robust estimate?

The issue has received coverage in the press including this Washington Post article.

I think there is a broader issue here of the role of rumours in the social media age. I am skeptical that one can have a forthright, robust, constructive, and thoughtful scientific discussion via tweets and blog rumours, when not all parties have access to the relevant information and there are a bunch of journalists watching. The problem is accentuated if people have already made strong public claims that have been further hyped up by the media and institutional press offices.

I thought that this issue of science via the media was a relatively new one. However, I learned this week that even Einstein was not immune from it! There is an interesting article in APS News, A Unified Theory of Journalistic Caution by science journalist Calla Cofield. She points out how Einstein went to the press to publicise his [now discredited] theory of distant parallelism. The New York Times covered it uncritically, since he was Einstein, after all.

Thursday, May 22, 2014

The uncertain status of career moves

An interesting question is: to what extent do the local institutional environment and the status of an institution affect the quality of the science done by an individual?
If I move to a more highly ranked institution will I do better science?
Or, if I move to a more lowly ranked institution will the quality of my work decline?

Some scientists are obsessed with "moving up", thinking that being at the "best" place is essential. They cannot fathom that one could do outstanding work at a mediocre institution.
However, consider the following. People at a high-status university may get Nobel Prizes, but that is not necessarily where they actually did the prize-winning work. Here are a few examples.

John Van Vleck: Wisconsin to Harvard
Joe Taylor: U. Mass to Princeton
Tony Leggett: Sussex to Urbana
William Lipscomb: Minnesota to Harvard

Can anyone think of other examples?

So can one actually measure how career moves affect the quality of science? One recent attempt is
Career on the Move: Geography, Stratification, and Scientific Impact
Pierre Deville, Dashun Wang, Roberta Sinatra, Chaoming Song, Vincent Blondel & Albert-László Barabási

The authors give an exhaustive analysis of the authorship, affiliations, and citations of more than 400,000 papers from Physical Review journals, concluding
while going from elite to lower-rank institutions on average associates with modest decrease in scientific performance, transitioning into elite institutions does not result in subsequent performance gain. 
This made it into an article in the Economist magazine, entitled Why climb the greasy pole?
It is worth looking at the figure that this conclusion is based on, noting the size of the error bars.

The vertical axis is the change in citations and the horizontal axis the change in university ranking.

Wednesday, May 21, 2014

Comparing statistical mechanics to real data

I have posted before that I think it is very important in teaching to present students with comparisons of theory with actual experimental data. It is disturbing that many teachers and textbook writers make little effort to do this. On a positive note, here is a particularly nice comparison.

In PHYS4030 Condensed Matter Physics this week I am teaching Paramagnetism and diamagnetism, closely following chapter 31 of Ashcroft and Mermin.
Consider non-interacting paramagnetic ions with total angular momentum J in a magnetic field B at thermal equilibrium at temperature T. Basic statistical mechanics can be used to derive an expression for the magnetisation, which is a universal function of B/T, known as the Brillouin function. A&M do not compare this to experiment. However, I recalled that when I was an undergraduate we used a very nice book, Heat and Thermodynamics [5th edition] by Mark Zemansky. It contains the comparison below, taken from a 1952 paper by Henry.
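For anyone wanting to reproduce a comparison like this, here is a minimal sketch of the Brillouin-function magnetisation (illustrative, not Henry's actual analysis); the numbers are for Gd3+, with J = 7/2 and g = 2, one of the ions typically shown in such plots.

```python
import numpy as np

# Magnetisation of non-interacting ions with angular momentum J:
#   M / (N g mu_B J) = B_J(x),  x = g mu_B J B / (k_B T),
# where B_J is the Brillouin function. Example numbers for Gd3+ (J = 7/2, g = 2).
mu_B, k_B = 9.274e-24, 1.381e-23   # J/T, J/K

def brillouin(J, x):
    a, b = (2 * J + 1) / (2 * J), 1 / (2 * J)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def moment_per_ion(J, g, B_over_T):
    """Magnetic moment per ion (in units of mu_B) as a function of B/T in T/K."""
    x = g * mu_B * J * B_over_T / k_B
    return g * J * brillouin(J, x)

for B_over_T in [0.5, 1.0, 2.0, 3.0]:       # Tesla per Kelvin, roughly the range plotted
    m = moment_per_ion(J=3.5, g=2.0, B_over_T=B_over_T)
    print(f"B/T = {B_over_T:.1f} T/K: moment = {m:.2f} mu_B (saturation = 7.0 mu_B)")
# The moment saturates at g J = 7 mu_B for large B/T; it is a universal function of B/T.
```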

Tuesday, May 20, 2014

How many transition states are there on a potential energy surface?

Much of chemistry can be described in terms of potential energy surfaces. They describe the energy of an electronic state of a set of molecules as a function of the positions of the atoms in the molecules. Local minima on the surface describe stable molecules (reactants and products of chemical reactions). Chemical reactions proceed by thermal activation over saddle points (transition states). Hence, an interesting and important question is: how many possible transition states might there be on a surface? How is the number of transition states related to the number of local minima?

In the process of writing a paper on double proton transfer I have stumbled across a very general result that I have never seen stated before. For me there is some curious personal history because the result uses a theorem in the first paper I ever published, thirty years ago, resulting from my undergraduate honours [final year] thesis on general relativity! More on that below.

Here is the result. Consider a smooth surface, i.e. one with no conical intersections, and with isolated extremal points. Then the number of minima plus the number of maxima minus the number of saddle points equals one.

I illustrate this below with two model surfaces for double proton transfer.

For example, in the bottom figure there are 4 minima, 1 maximum, and 4 saddle points: 4+1-4=1.

Hence, if varying the system parameters introduces an extra maximum or minimum, then one additional saddle point must also appear. One can intuitively see how this works in two dimensions, but it turns out to be true in any dimension.
This relation is a consequence of differential topology [essentially the Poincare-Hopf index theorem]. The minima and maxima are associated with an index +1 and saddle points with -1.
The general theorem I proved 30 years ago states that if a smooth function f(r) (where r is a vector) tends to infinity as the magnitude of r tends to infinity, or if the gradient of f points outward over a closed surface (a closed curve in two dimensions), then the extrema of f inside that closed surface must satisfy the above relation.

How might a potential energy surface satisfy this general requirement on f(r)=Energy(bond lengths)? Essentially it is because as one greatly stretches or compresses chemical bonds the energy of the system will become large.
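The counting is easy to check numerically on a model surface. Here is a sketch for the quartic surface V(x,y) = (x^2 - 1)^2 + (y^2 - 1)^2 [a toy example of my own, not the surfaces in the figures above]:

```python
import numpy as np
from itertools import product
from scipy.optimize import fsolve

# Count the critical points of V(x,y) = (x^2-1)^2 + (y^2-1)^2 and check
# minima + maxima - saddles = 1.
def grad(p):
    x, y = p
    return [4 * x * (x**2 - 1), 4 * y * (y**2 - 1)]

def hessian_eigs(p):
    x, y = p
    return np.linalg.eigvalsh(np.diag([12 * x**2 - 4, 12 * y**2 - 4]))

# Find critical points from a grid of starting guesses and keep the distinct ones.
found = set()
for guess in product(np.linspace(-1.5, 1.5, 7), repeat=2):
    sol = fsolve(grad, guess)
    if np.allclose(grad(sol), 0, atol=1e-8):
        found.add(tuple(np.round(sol, 6) + 0.0))

n_min = n_max = n_saddle = 0
for p in found:
    eigs = hessian_eigs(p)
    if np.all(eigs > 0):
        n_min += 1
    elif np.all(eigs < 0):
        n_max += 1
    else:
        n_saddle += 1

print(f"minima={n_min}, maxima={n_max}, saddles={n_saddle}, "
      f"min + max - saddles = {n_min + n_max - n_saddle}")  # expect 4 + 1 - 4 = 1
```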

Aside: it was really strange for me looking at my old paper, published in the Journal of the Australian Mathematical Society. I actually can't believe I wrote it! It is so formal and mathematical. There are parts of it I now struggle to understand. The theorem was not motivated by chemistry but rather proving a general theorem in general relativity that a gravitational lens must produce an odd number of images.

So, has anyone seen this result for potential energy surfaces stated before? I could not find it in David Wales' nice book Energy Landscapes.

Monday, May 19, 2014

Don't sign the form!

8 "Now, O king, establish the injunction and sign the document, so that it cannot be changed, according to the law of the Medes and the Persians, which cannot be revoked.” Therefore King Darius signed the document and injunction.
Daniel 6
[Later King Darius regretted he signed the document because it landed Daniel in the Lion's den.]

Over the years I have been fortunate to work with some excellent undergraduates and Ph.D students on projects. However, there have been a few that I regret agreeing to work with. Problems can include a poor work ethic, disorganisation, a weak technical background, procrastination, ..... On reflection, I think there was a common dynamic in how I originally took them on. Often they came to me in a rush, saying they had a form that had to be signed. There was some impending deadline for enrolment, and if I could just sign it we could work out the details later. This is a mistake. It should be a warning sign. It can often signify disorganisation and also a focus on administrative procedures, rather than the much more important issues of the actual science of the project and what it would mean to work together.

I have found that getting prospective students to engage with my blog can be an effective filter. I ask them to read a few posts about relevant science and about my philosophy of supervision and write a few paragraphs in response. Do they agree or disagree? Why or why not? The effort they put into this response, their level of engagement, and their level of understanding can reveal quite a lot. Also, getting them to prepare an expectations document can be helpful.

So, don't just sign the form! Once it is signed it becomes much more complicated to "move on" a mediocre/difficult/frustrating student. Beginning faculty should take particular note.

Friday, May 16, 2014

The value of simple exam questions

With more teaching experience, most lecturers slowly learn that "easy" exam questions can be a good test of students' knowledge, understanding, and skills.

In several earlier posts I have discussed my concern that some undergraduate students seem to be able to get to second, third, or even fourth year without being able to perform basic tasks such as
  • sketch a simple function
  • keep track of physical units in a calculation
  • have a feel for orders of magnitude so they can notice if a calculation gives a ridiculous answer
  • examine a plot of experimental data and note whether its qualitative and quantitative features agree with the predictions of a theory
Hence, I like, and think it is important, to set exam questions that test some of these skills. Here are some samples for a fourth year solid state physics course. Slowly I learnt it is also important and worthwhile to slightly change recycled questions.

Wednesday, May 14, 2014

Are there quantum limits to transport coefficients?

An important and fundamental question concerning a strongly interacting many-body system is whether there are fundamental limits (lower bounds) to transport coefficients such as the conductivity and viscosity. A related issue is whether there are upper bounds on energy, phase, and momentum relaxation rates, such as the buzz-concept of Planckian dissipation.

There are two basic reasons why some believe this is true.
First, a simple argument is that it "does not make sense" to have mean free paths less than a lattice constant or the wavelength of the relevant quasi-particles. This leads to the Mott-Ioffe-Regel limit for the conductivity [a rough numerical estimate is sketched below].
Second, in calculations based on the AdS-CFT correspondence one does find that such bounds hold.
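For orientation, here is a rough numerical estimate of the Mott-Ioffe-Regel scale mentioned above; the prefactor of order one and the choice of lattice constant are my assumptions.

```python
# Setting the mean free path equal to a lattice constant a gives a maximum "metallic"
# resistivity of order rho_MIR ~ hbar a / e^2, up to factors of order unity.
e, hbar = 1.602e-19, 1.055e-34   # C, J s
a = 4e-10                        # m, a typical lattice constant for a correlated oxide
rho_MIR = hbar * a / e**2        # Ohm m
print(f"rho_MIR ~ {rho_MIR * 1e8:.0f} microOhm cm")   # of order 100 microOhm cm
```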

However, I remain to be convinced that such bounds must hold. One reason is the existence of bad metals. In some strongly correlated electron materials the resistivity can increase smoothly above the Mott-Ioffe-Regel limit as the temperature increases. This is also seen at the level of a Dynamical Mean-Field Theory treatment of the Hubbard model.
A second reason is that I am skeptical that AdS-CFT actually corresponds to any (and certainly not all) physically relevant Hamiltonians.

Today I read an interesting paper
Quantum Mechanical Limitations to Spin Diffusion in the Unitary Fermi Gas
Tilman Enss and Rudolf Haussmann

The paper seems to presuppose that quantum limits do/should exist.
It is motivated by some very nice recent experiments on ultra cold atoms that measure the spin diffusion coefficient and spin susceptibility as a function of temperature.

The main result is the graph below. The authors' calculations are the red curve. The two important points are that the spin diffusion constant has a minimum as a function of temperature and that its value there is Ds ≃ 1.3 ℏ/m, close to the "quantum limit".
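To put a number on this scale: assuming the gas is 6Li [the species used in many unitary Fermi gas experiments; an assumption on my part, not stated above], hbar/m is easy to evaluate.

```python
# Back-of-the-envelope value of the "quantum limit" hbar/m, assuming a 6Li gas.
hbar = 1.055e-34             # J s
m = 6.015 * 1.661e-27        # kg, mass of a 6Li atom
print(f"hbar/m ~ {hbar / m:.2e} m^2/s = {hbar / m * 1e4:.2e} cm^2/s")
# ~1e-8 m^2/s; the calculated minimum D_s ~ 1.3 hbar/m is of this order.
```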


A few minor comments.

1. It should be noted that the experimental data has been scaled down by a factor of 4.7 to allow for the inhomogeneity associated with the trapping potential. This is not as unreasonable or as arbitrary as I first thought. At high temperatures one can calculate the correction for a harmonic trap and it is about a factor of 5.

2. The diffusion coefficient D is calculated from the "Einstein relation", D=conductivity/susceptibility.
No reference is given. This is a rather non-trivial relation, which is discussed and proven by Sachdev on page 171 of the first edition of his Quantum Phase Transitions book.

3. The authors point out that the agreement of D with experiment involves a fortuitous cancellation of errors. Their value for both the spin conductivity and susceptibility is off by a factor of about two, compared to experiment, as shown in Figures 3 and 4 in the paper.

4. I feel calling the calculation a "strong coupling Luttinger-Ward (LW) theory" is a bit too terse. To me LW theory is just a formalism [a self-consistent theory?] and the key point is that to use it one must make an approximation and decide to include only certain Feynman diagrams in the LW functional for the free energy.  My point here is similar to my post, Green's functions are just a technique.

5. To me, the experiments highlight the complementary strengths of cold atom and solid state systems. In the latter (but not the former) it is straightforward to measure the particle conductivity, maintain a homogeneous system, have good thermometry, and cool to temperatures orders of magnitude below the Fermi temperature. However, in contrast, measurements of the spin conductivity in solids are virtually non-existent.

The authors also calculate the frequency-dependent spin conductivity and find that it exhibits a broad Drude peak [the width is of the order of the Fermi energy], with [a very small amount of] spectral weight transferred to a universal high-frequency tail that is proportional to the Tan contact density C.

It is great to see the ultra cold atom community addressing these fundamental questions.

Postscript. Today (May 16) there is a paper in Science reporting a new measurement [by a spin-echo technique] of the spin diffusion constant D, giving a value of about hbar/m for a three-dimensional gas. The authors also cite a paper from last year which measured a value of D for a two-dimensional gas that is about 150 times smaller than the "lower bound" of hbar/m.

Tuesday, May 13, 2014

Is this a useless contribution from Science and AIP journals?

Am I the only person irritated by this?
When you download an article from Science, IOP, or AIP journals, the first page is content-free. Samples are below.
The "Articles you may be interested in" are usually a random collection of papers with only a peripheral connection to the article.

When you want to print the article you can always just not print this page; but that does require a little more effort beyond hitting the "Print" key.

I realise there are more important things in the world to get upset about [schoolgirls getting kidnapped in Africa, scientific fraud, the latest Australian government budget, climate change....], but I just don't see why this is necessary.....

Monday, May 12, 2014

A basic but important research skill, 4: breaking the project down

Any worthwhile scientific project will be large, challenging, and ambitious. Even a small project, particularly for a beginner, can be intimidating and overwhelming. A key skill is to learn how to break a project down into small and manageable parts.

This applies whether one is trying to solve a particular scientific problem [how does a particular enzyme work? what is the origin of superconductivity in the iron pnictides?], write a large computer code, perform a multi-step chemical synthesis, solve a quantum many-body Hamiltonian with a specific approximation, fabricate a solar cell....

How does one do this? Are there some general principles?

I am not sure and I welcome comments and suggestions.
I also fear that current pressures to publish quickly lead to hastily put together projects without due attention to the robustness of the sub-projects.

My main suggestions are:

* Be realistic. Make sure each part/step is arguably manageable/doable.

* Start with easy steps and processes that will build confidence and understanding. First trying to reproduce someone else's results is always a good thing!

* Make sure that you have "control" of each step or "sub-module", i.e. you are sure it really does work and you know what is going on. For example, if you have a large computer code using lots of subroutines, you need to be sure that each of them is numerically stable. Just quickly throwing them together may produce garbage or be very hard to debug.

* Plan steps that will produce publons.

* Talk to others about how they do it, both in general and for specific projects.

Any suggestions?

Friday, May 9, 2014

Colloquium on Emergent states of quantum matter

Here are the slides for the talk I am giving today at the UQ Physics colloquium.
I will show the video Quantum levitation, and discuss what is and isn't quantum about it.

A good discussion of some of the issues raised is Laughlin and Pines' article The Theory of Everything. A more extensive and introductory discussion by Pines is at Physics for the 21st Century.


Postscript.
Based on comments and questions afterwards, particularly from some undergraduates, there are a few things I would do slightly differently.

I should have said what a Hamiltonian is: a function that defines the energy as a function of the system variables, e.g., the position and velocity of all of the particles.

The stratification of reality shown by my boxes is a simplification for schematic purposes. There is no clearly defined boundary between strata. For example, at the boundary between chemistry and physics one has chemical physics and physical chemistry. The boundary between biology and biochemistry is blurred. On the other hand, anatomy is qualitatively different to enzyme mechanisms. Acid-base equilibria is chemistry not physics.

Ben Powell emphasized to me that the claim that "superconductors exhibit broken U(1) gauge symmetry" is problematic and subtle. There is a long, detailed paper, Superconductors are topologically ordered, that I have read several times but don't really understand.

Thursday, May 8, 2014

Resisting the temptation to make the best looking data plot

It is a fallible human tendency to want to include in a paper the most favourable comparison between your pet theory and experiment. My collaborators and I were recently confronted with this issue when writing our recent paper on Quantum nuclear effects in hydrogen bonding.

We calculated a particular vibrational frequency for both hydrogen and deuterium isotopes. Experimentalists had previously reported that this ratio has large and non-monotonic variations as a function of the donor-acceptor distance R. The plot below shows a comparison of our calculations [curves] to experimental data on a wide range of chemical complexes [each point is a separate compound].
I was quite happy with this result, particularly because getting the frequency ratio down to values as small as one was significant [Aside: this is an amazing thing because in most compounds the isotope frequency ratio is close to 1.4 = sqrt(2), as expected from a simple harmonic oscillator analysis].

It was tempting just to publish this plot.
But, there is a problem. Most previous plots by experimentalists did not use R as the horizontal axis but Omega_H, the frequency for the H case. [For example, see the plot I featured in a post  back in 2011 when I started thinking about this problem].
Below is the corresponding plot.


It is much less impressive!
Why? The problem is that for R ~ 2.5 Angstroms our theory does not give values of the frequency that agree very well with experiment, as shown in an earlier figure in the paper. We discuss some possible reasons for that.

So we decided that the best thing to do was to publish both figures and readers can make their own decisions about the strengths and weaknesses of our work.

Now here is another slant. The data above is for O-H...O bonds, which we focussed on in our paper. The data below is for N-H...N bonds [taken from here] and shows much clearer correlations than the data above. Again it would have been tempting to focus on that case.


I will also illustrate my point with a historically much more important example.
The figures below are also discussed in an earlier post. [It led to a Nobel Prize]. The upper version shows a moderately impressive comparison of data with a theoretical curve. However, the main point of the paper [and the Nobel Prize for cosmic acceleration] is not the linear component [the Hubble constant] but the non-linear component [the acceleration of the expansion]. The lower part of the figure has the linear part subtracted out and looks far less impressive. Nevertheless, it stood the test of time and complementary measurements, as discussed in the earlier post.

In conclusion, I think it is important that we not always present our work so it appears in the best possible light.

Wednesday, May 7, 2014

A tribute to liberal arts colleges in the USA

Which institutions do the best job of training scientists at the undergraduate level in the USA?
If you want a job teaching highly gifted and motivated undergraduates where should you try and work?

The answer is not what you might think [Ivy League, Berkeley, Stanford, ....].

If you look at the undergraduate origins of recipients of doctoral degrees from US universities, you find something surprising. For all academic fields, of the top ten, six are small private liberal arts colleges [i.e. they have no Ph.D program]: Harvey Mudd, Swarthmore, Reed, Carleton, Grinnell, and Oberlin. For science, the results are similar.

Thomas Cech shared the 1989 Nobel Prize in Chemistry and was President of the Howard Hughes Medical Institute for a decade. He graduated from Grinnell and has an interesting article Science at Liberal Arts Colleges: A Better Education?

Aside: Reed College is interesting [for many reasons!] because it has resisted involvement in institutional ranking exercises (even though it is often very highly ranked) because it considers them flawed. I find this refreshing!
An earlier post considers the teaching philosophy of one of their distinguished physics faculty, David Griffiths.

This is all relevant to two of the claims made by Hunter Rawlings in an article featured in an earlier post:

Small colleges play an important role in making the diverse US system so strong overall.

At large research universities undergraduates have become peripheral to the whole enterprise [sports, hospitals, research, grad students, professional schools, infrastructure, ....]

There is a helpful article in Physics Today, Hunting for Jobs at Liberal Arts Colleges written by two faculty with experience at hiring people.

The Australian education minister recently announced that Australia needs to move towards a more USA-like university system. Somehow, I don't think small liberal arts colleges are what he has in mind!

Tuesday, May 6, 2014

Slides for latest talk on Mental Health for Scientists

Today I am giving a talk "Mental Health for Scientists" to the Early Career Researcher group at the Institute for Molecular Biosciences at UQ. Here are the slides.

Monday, May 5, 2014

Is there a Fermi liquid associated with the pseudogap state of the cuprates?

To me this seems at first to be a strange idea. The phenomenology of the cuprates and doped Hubbard models is roughly that as the doping decreases one goes from a Fermi liquid (large overdoping, no superconductivity) to an anisotropic marginal Fermi liquid (overdoped but superconducting) to a strange metal (marginal Fermi liquid) at optimal doping, and then to the pseudogap state (underdoping). Hence, I would have thought that everything was rather non-Fermi-liquid-like in the pseudogap state.

However, the observation in the pseudogap range of dopings of quantum magnetic oscillations (that could be associated with a small Fermi surface) and Fermi arcs raised the question of a Fermi liquid state.

Over the past few years Martin Greven and collaborators have performed a range of transport measurements on a relatively clean single layer cuprate material Hg1201. They find Fermi liquid type behaviour [e.g. resistivity quadratic in temperature, scattering rates quadratic in frequency] for a range of temperatures below the pseudogap temperature T*.

A recent preprint is
Validity of Kohler's rule in the pseudogap phase of the cuprate superconductors
M. K. Chan, M. J. Veit, C. J. Dorow, Y. Ge, Y. Li, W. Tabis, Y. Tang, X. Zhao, N. Barišić, M. Greven

What is Kohler's rule?

In simple metals the temperature and magnetic field dependence of the magnetoresistance is dominated by the orbital motion of the electrons and described by some function of the product of omega_c and tau.

omega_c is the cyclotron frequency which is proportional to the magnetic field B and independent of temperature.
tau is the scattering time, which is temperature dependent and field independent, and should have the same temperature dependence as 1/rho where rho(B=0) is the resistivity in zero field.

These observations lead to Kohler's rule, which is obeyed by simple metals: a plot of the ratio rho(B)/rho(B=0) versus B/rho(B=0) should be independent of temperature.
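Here is a toy illustration of why this scaling works [a Drude-like sketch of my own, not the analysis in the preprint]: if the magnetoresistance is a function of omega_c tau only and rho(B=0) is proportional to 1/tau, then rho/rho(0) depends only on B/rho(0).

```python
import numpy as np

# Toy model: rho(B) = rho_0 [1 + (w_c tau)^2 / (1 + (w_c tau)^2)], with rho_0 ~ 1/tau
# and w_c ~ B (units absorbed into the constants). Then w_c tau ~ B/rho_0, so curves
# taken at different temperatures (different tau) collapse on a Kohler plot.
def rho(B, tau):
    rho0 = 1.0 / tau                     # zero-field resistivity, temperature dependent
    wct = B * tau                        # omega_c * tau
    return rho0 * (1.0 + wct**2 / (1.0 + wct**2))

kohler_x = np.array([1.0, 5.0, 10.0, 20.0])   # chosen values of B/rho_0
for tau in [1.0, 0.5, 0.2]:                   # tau decreasing as temperature rises
    rho0 = 1.0 / tau
    B = kohler_x * rho0                       # fields giving those values of B/rho_0
    print(f"tau = {tau}: rho/rho0 =", np.round(rho(B, tau) / rho0, 4))
# All three rows are identical: the Kohler plot is independent of temperature.
```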

In 1995 Ong's group observed significant violations of Kohler's rule in the underdoped and overdoped cuprates. Similar results were found by a Japanese group.

In 1998 I pointed out that in one mysterious organic metal there were also significant violations. The paper also has an extensive discussion of reasons why Kohler's rule can fail.


In the preprint, the authors find results consistent with Kohler's rule for Hg1201 samples with Tc = 70 K and 81 K, for temperatures between about 100 K and 200 K, and fields up to 30 Tesla.
The left plot is the bare data and the right plot is the data rescaled according to Kohler's rule.

Is the idea of a Fermi liquid in the pseudogap region reasonable?
The scenario may be something like this.
There are Fermi-liquid-like quasi-particles near the nodes of the pseudogap. The non-Fermi-liquid excitations occur towards the anti-nodal regions. This is the basic idea of the anisotropic marginal Fermi liquid developed for overdoped cuprates. Suppose one assumes that something like that model actually applies for all dopings. Then in the pseudogap region the non-Fermi-liquid part will start to get gapped out and one will just be left with the Fermi liquid part. This can then undergo charge-ordering instabilities to form Fermi surface pockets due to Fermi surface reconstruction.

Friday, May 2, 2014

More mental health issues and resources

Next week I am giving another talk on mental health issues for scientists. Since I am doing this more I have been doing a bit more reading. Also, people are starting to send me various relevant articles. Here are a few things I have learnt, in no particular order.

Andrew Lange was one of the world's leading observational cosmologists and Chair of the Division of Physics, Mathematics, and Astronomy at Caltech. He was the lead investigator on BICEP, the forerunner of the experiment that recently found evidence for cosmic inflation. He suffered from depression. Tragically, he committed suicide in 2010.

Lewis Wolpert FRS is a distinguished developmental biologist who has suffered through several severe periods of depression. Years ago he wrote an article about his experience in The Guardian newspaper. He says that he received more feedback than for anything else he had written in his whole career. This was followed by a book, Malignant Sadness: The Anatomy of Depression, and an associated BBC TV program, A Living Hell.

Doris Iarovici, a psychiatrist at Duke University Counseling and Psychological Services, has just published a book, Mental Health Issues and the University Student. Some of it is depressing reading, as it chronicles the pressures students in the USA are under, and some of the poor ways they try to cope with them. There is a whole chapter on the problem of perfectionism. Although the book is mostly concerned with undergraduates, it does discuss graduate students. The author also has a New York Times blog post raising concerns about over-prescription of anti-depressants.

There is a research article, The impact of funding deadlines on personal workloads, stress and family relationships: a qualitative study of Australian researchers. It concerns how much time researchers spend/waste making grant applications, focussing on the case of the NHMRC [the main funding agency for medical research]. It contains the cryptic comment:
Additional impacts on mental health and well-being were identified through comments including ‘incredible anxiety’, ‘depressed’, ‘despondent’, ‘insecurity’ and ‘soul-destroying’. The mental health and welfare of researchers warrants further examination beyond this study.