Friday, June 23, 2017

Refereeing papers: recent experiences at the coal face

I am not a big fan of the peer review process. Too often it is a superficial ritual that adds little scientific value. Nevertheless, when it does work I think it can be very valuable. Here are some of my recent experiences that I thought were rather positive, and may be marginally interesting to readers.

I was sent a paper by JCP to review. Overall, I liked it but I thought it would benefit from some significant revisions. In a weird coincidence, I was visiting the same institution as some of the authors. I have been recently challenged about whether peer review really should be anonymous [see this discussion of SciPost] and so I took a risk. I signed my report and sent a copy to all the authors and told them I would be happy to meet to discuss the paper. We met and had a nice discussion. However, it was interesting that JCP told me that they had deleted my self-identification as it was against their policy.

I was sent a paper by PRL to review that I (and other referees) took a strong dislike to. The cover letter was also "Interesting". I wrote a concrete critical report. However, for the first time ever, I used the box for "Comments that will only be seen by the Editors". I said the authors had inappropriately used and cited my own work, that the paper was in the class "Not even wrong", and that if PRL published it, PRL's reputation in a certain community would suffer. Maybe I am a coward, but I am glad I was anonymous.

I was sent another paper by JCP to review that I liked. However, the authors did not engage with a whole physics literature that was relevant to the paper, and they needed to use it to sharpen their results. I think the final paper will be much better and more interesting.

A recent paper with my postdoc received two critical but constructive reports from PRB. Responding required some new calculations and comparisons, but the paper was much better as a result. Today we heard it was accepted.

I now decline all referee requests from luxury journals. I have limited time and would prefer to invest in journals that I think are making a positive contribution to science.

Have you had any recent positive experiences as a referee or received a helpful report?

Monday, June 19, 2017

The scientific relevance of your hobby

On the one hand, to make progress in science you need to focus, work hard, and build your expertise. This leads some to think that it is best that they not pursue outside interests and hobbies such as art, music, craft, puzzles, games, ...
However, scientific discoveries, particularly big ones, often involve creativity, serendipity, or thinking outside the box.

I noticed two examples of this recently.
The first was how fascination with a cheap child's toy led to the key idea behind the development of an extremely cheap centrifuge [the paperfuge] for health diagnostics in the Majority World.



The second example was a New York Times article about a recent paper that argues that Pasteur's interest in art was key to his discovery of molecular chirality.

Another example is Harry Kroto, who shared the Nobel Prize in Chemistry for the discovery of buckyballs. He credited playing with Meccano as a child as very important in his scientific development.

Can you think of other examples?

Friday, June 16, 2017

Why do some people think they can get something for nothing?

This is a small rant. I want to stress that it is not because of anything directly involving me. Rather it comes from things that come across my desk and frustrations that friends and colleagues vent to me.

Here is a sample situation.
Professor A in Department B at University C wants to apply to funding agency D for a joint multi-million dollar research grant with Professor E in Department F at University G. There is also an industrial partner, company H. Obviously, if the application is successful then A to H will all benefit. But now comes the rub. All parties need to commit to contributing something: whether it is time, lab space, matching funds, intellectual property rights, reduced teaching or admin. responsibilities, hiring new people, giving someone a permanent job, equipment, infrastructure, ..... and they need to divide up the grant if they get it.

My frustration and concern are that I encounter cases where one or more of the parties are completely unreasonable about how little they should contribute, if at all. They seem to want something for nothing. Furthermore, they will persist in this even if it means the application won't proceed or has virtually no chance of success. They fail to believe that there will be other applicants who will have strong support and contributions from all the parties involved.

I know that resources are scarce, budgets are tight, and people want to drive a hard bargain. That is not what I am talking about. The real "Art of the Deal" is not the Trump version, but compromising to reach a win-win situation, not sabotaging the deal because of fantasy and blind selfishness.

Do you encounter situations like this?

Tuesday, June 13, 2017

How might we teach students to actually think?

Four important goals to me are to teach students:
1. To think.
2. To think like a physicist.
3. To think like a condensed matter physicist.
4. The specific technical content of the course.

The last one is arguably easier than the others.
I also think it is the least important. Others will disagree.
We don't reflect enough on how we might achieve the other goals.
The biggest challenge of improving education in the Majority World is not lack of material resources but changing the culture of rote learning and teaching critical thinking.
[This is highlighted in a NYTimes piece about China and a very funny video about the Indian IITs].

Last week the UQ School of Maths and Physics Teaching Seminar was given by Peter Ellerton who works for the UQ Critical Thinking project.

The slides from a similar talk are here.
In the talk he mostly walked us through the three graphics shown here.
[If you click on the image you can see a high resolution .pdf]

The main value of all this is that it puts names, categories, and questions on what I want to do. I found the third graphic the most helpful because it has some very specific questions we can ask students to get them to reflect more on what they are learning and, in the process, learn to think more critically.

Thursday, June 8, 2017

A lucid lecture on the last 50 years of superconductivity

At the weekly condensed matter theory cake meeting today we watched a video of a KITP blackboard talk given by Piers Coleman in 2015.
Superconducting Surprises: five decades of discovery, in both temperature and time!

It is a very nice exposition of the history and some of the key physics.

A couple of minor comments.

Organic superconductors were discovered in 1980 not 1973.

Piers claims that the difference between the thermodynamic entropy of the superconducting and metallic states (determined from integrating the temperature dependent specific heat) is related to the quantum entanglement entropy of the superconducting ground state.
The relationship between entanglement entropy (defined for a pure quantum state at zero temperature that is divided into two parts) and thermal entropy (defined for a bulk system in a mixed state at finite temperature) is an incredibly subtle and complex issue that I don't think is resolved. See for example the discussion in this paper.

Monday, June 5, 2017

The challenge of applied research

Last friday we were fortunate to have David Sholl give a physics colloquium at UQ,
``What Does Quantum Mechanics Have To Do With The Chemical Industry? Reflections On A Journey From Pure To Applied Research.''
Here are the slides.

David has a background in theoretical physics and has been particularly successful at using atomistic simulations to study problems that chemical engineers care about. He is co-author of a book, Density Functional Theory: A Practical Introduction.
His three main points in the talk were
  • Applied research is worth doing and is intellectually satisfying
  • Applied research relies on fundamental insights 
  • How to waste time and money doing applied research
The piece of science I found most interesting was the figure below which shows how the calculated self-diffusion constant D of small hydrocarbons in a zeolitic imidazolate framework varies with the size of the hydrocarbon molecule.
Note how D varies over 14 orders of magnitude.

Some of the key physics is that this large variation arises because the diffusion constant is essentially determined by the activation energy associated with the transfer of a molecule through the molecular hole between adjacent pores. When the molecular size is comparable to the hole size, D rapidly diminishes because of steric effects.
It would be nice to have a "simple" theory of the correlation.
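To get a feel for the numbers, here is a minimal sketch (with purely illustrative activation energies and prefactor, not values from the paper) of how an Arrhenius-type dependence, D = D0 exp(-Ea/kB T), turns a modest spread in activation energies into many orders of magnitude in D at room temperature:

```python
# Minimal sketch: an Arrhenius-type diffusion constant D = D0 * exp(-Ea/(kB*T)).
# A spread of ~0.8 eV in activation energy at 300 K corresponds to
# roughly 14 orders of magnitude in D.
import numpy as np

kB = 8.617e-5   # Boltzmann constant in eV/K
T = 300.0       # temperature in K
D0 = 1e-7       # illustrative attempt prefactor in m^2/s

for Ea in [0.1, 0.3, 0.5, 0.7, 0.9]:   # illustrative activation energies in eV
    D = D0 * np.exp(-Ea / (kB * T))
    print(f"Ea = {Ea:.1f} eV  ->  D = {D:.2e} m^2/s")
```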

The figure is taken from the paper
Temperature and Loading-Dependent Diffusion of Light Hydrocarbons in ZIF-8 as Predicted Through Fully Flexible Molecular Simulations 
Ross J. Verploegh, Sankar Nair, and David S. Sholl

Friday, June 2, 2017

The educational value of undergraduate research projects

This past semester I have been supervising two undergraduate research projects. One is a third-year student doing a one-semester project (1/4 of the student's load). The other is a fourth-year student doing a year-long project (1/2 of their load). I am very happy with how both have gone in terms of their educational value. The amount of research results is of secondary importance to me. Previously, I posted about possible ingredients for a good undergrad project. Both students are working on a simple model for hydrogen bonds. I recommend this because it has an "easy" learning curve and so they can start "doing science" quickly. It also has a nice mix of theory and experiment, chemistry and physics.

Things that struck me as particularly valuable include the following, some of which relate to basic but important skills.

Seeing calculations to completion. 
In an undergrad problem set or exam the student has limited time and gets partial marks for incomplete or wrong answers. In research you have to keep working on the problem until you have an answer and have checked it enough that you are confident it is the correct answer.

Personal attention.
Each week they get to meet one-to-one with a faculty member and get advice and feedback.

Units! 
Learning that they really do matter and you have to get them right. This includes converting between different unit systems.

Writing and debugging code.
Even a short Matlab or Mathematica code.

Reading papers not textbooks.
Gifted students can find textbooks quite manageable and understandable. Papers are in a different league.

Experiencing what research is often like.
Hard. Confusing. Boring. Tedious... But, progress and understanding can be quite satisfying.

Communication skills.
Giving a talk and writing reports, and getting feedback on them.

Job skills.
Time management. Showing up for meetings on time. Writing meeting summaries. Coming up with action plans. Listening to constructive criticism. Working with others.

Tuesday, May 30, 2017

Quantity swamps quality

Every now and then I have to review a lot of CVs. What is increasingly striking is the sheer quantity of "line items": papers, grants, citations, talks, seminars, Ph.Ds supervised, public outreach activities, referee activities, committee service, conference organisation, .....
Sometimes teaching, particularly of undergraduates, is almost an afterthought.

One concern is the difficulty of evaluating the quality of this hyperactivity.
This is why metrics are so seductive, particularly to the non-expert.
But even if you want to give some weight to numbers of papers, journal "quality", total research funding, ... I think they are quite hard to interpret.
In some research areas,  papers often have ten authors, and so it is very difficult to know an individual's contribution, even if they are first or last author. I increasingly encounter statements such as "Since I became a faculty member ten years ago I have attracted $7M of external funding". This sounds very impressive. However, once you look at the details you find a mix of grants with long lists of CIs, such as infrastructure grants. Again it is not clear whether the individual was really that central to many of the grants.

Another concern is that I am skeptical that these "highly productive" people have the time, energy, and focus needed to think deeply, work on challenging and ambitious projects, and produce much that is scientifically significant. Quantity crowds out quality.

Finally, it worries me that for many of these people the "system" seems to be "working" for them, i.e. they are getting jobs, promoted, funded, ...

Now I do concede that in some cases I do not have the expertise to fully appreciate the significance or value of what people are doing. However, I fear that is the exception, not the rule.

If my concerns are legitimate, what is the way forward?
When assessing, it is very important to have people involved who have the necessary expertise to critically and fairly evaluate the quality of people's individual scientific contributions. This means asking for and reading letters of reference, and actually reading some of their papers. Neither is infallible, but both are a lot better than bean counting.

It is possible for individual fields to preserve a culture of quality taking precedence over quantity. For example, in pure mathematics and economics, you will find that "publication rates" can be almost an order of magnitude lower than in physics and chemistry.

On the practical side, you might also consider editing and shortening your own CV so the signal to noise ratio is higher.

Finally, I hope you will personally resist uncritically following this rush to mediocrity.

Is my concern legitimate? If so, what are the ways forward?

Tuesday, May 23, 2017

How should undergraduate quantum theory be taught?

Some of my colleagues and I have started an interesting discussion about how to teach quantum theory to undergraduates. We have courses in the second, third, and fourth years. The three courses have independently evolved, depending on who teaches each. Some material gets repeated and other "important" topics get left out. One concern is that students seem to not "learn" what is in the curriculum for the previous year. The goal is to have a cohesive curriculum. This might be facilitated by using the same text for both the second and third-year courses.
This has stimulated me to raise some questions and give my tentative answers. I hope the post will stimulate lots of comments.

The problem that students don’t seem to learn what they should have in pre-requisite courses is true not just for quantum. I encounter second-year students who can’t do calculus and fourth-year (honours) students who can’t sketch a graph of a function or put numbers in a formula and get the correct answer with meaningful units. As I have argued before, basic skills are more important than detailed technical knowledge of specific subjects. Such skills include relating theory to experiment and making order of magnitude estimates.

Yet, given the following should we be surprised?
At UQ typical lecture attendance probably runs at 30-50 per cent for most courses. About five per cent watch the video. [University policy is that all lectures are automatically recorded]. The rest are going to watch it next week… Only about 25 per cent of the total enrolment in my second-year class are engaged enough to be using clickers in lectures. Exams are arguably relatively easy, similar to previous years, usually involve choosing questions/topics, and a mark of only 40-50 per cent is required to pass the course.
I do not think curriculum reform is going to solve this problem.

Having the same textbook for 2nd and 3rd year does have advantages. This is what we do for PHYS2020 Thermodynamics and PHYS3030 Statistical Mechanics. But, some second years do struggle with it... which is not necessarily a bad thing. The book is Introduction to Thermal Physics, by Schroeder.

Another question: what approach do you take for quantum, Schrodinger or Heisenberg, i.e. wave or matrix mechanics? The mathematics of the former is differential equations, that of the latter is linear algebra. Obviously, at some point you teach both, but what do you start with? It is interesting that the Feynman lectures really start with and develop the matrix approach, beating the two level system to death...
At what point do you solve the harmonic oscillator with creation and annihilation operators?
When do you introduce Dirac notation?
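To make the choice concrete: the same harmonic oscillator problem can be set up as a differential equation in wave mechanics or algebraically with ladder operators,

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + \frac{1}{2}m\omega^2 x^2\,\psi = E\psi \qquad \text{or} \qquad \hat{H} = \hbar\omega\left(\hat{a}^\dagger \hat{a} + \tfrac{1}{2}\right), \quad [\hat{a},\hat{a}^\dagger]=1,$$

and which form students meet first shapes how natural Dirac notation feels to them later.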

I would be hesitant about using Dirac notation throughout the second year course. I think this is too abstract for many of our current students. They also need to learn and master basic ideas/techniques about wave mechanics: particle in a box, hydrogen atom, atomic orbitals, … and connecting theory to experiment... and orders of magnitude estimates for quantum phenomena.

What might be a good text to use?

Twenty years ago (wow!) I taught second (?) year quantum at UNSW. The text I used is by Sara McMurry. It is very well written. I would still recommend it as it has a good mix of experiment and theory, old and new topics, wave and matrix mechanics….
It also had some nice computer simulations. But it is out of print, which really surprises and disappoints me.

Related to this there is a discussion on a Caltech blog about what topics should be in undergraduate courses on modern physics. Currently, most "modern" physics courses actually cover few discoveries beyond about 1930! Thus, what topics should be added? To do this one has to cut out some topics. People may find the discussion interesting (or frustrating…). I disagree with most of the discussion, even find it a little bizarre. Many of the comments seem to be from people pushing their own current research topic. For example, I know it is Caltech, but including density matrix renormalisation group (DMRG), does seem a little advanced and specialised...
There is no discussion of one of the great triumphs of "modern" physics, biophysics! I actually think every undergraduate should take a course in it.

What do you cut out?
I actually think the more you cut the better, if the result is covering a few topics in greater depth that develops skills, creates a greater understanding of foundations, and leads to a greater love of the subject and a desire and ability to learn more.
In teaching fourth year condensed matter [roughly Ashcroft and Mermin] it is always a struggle to cut stuff out. Sometimes we don't even talk about semiconductor devices. This year I cut out transport theory and the Boltzmann equation so we could have more time for superconductivity. This is all debatable... But I hope that the students learned enough that, if they need to, they have the background to easily learn these topics later.

A key issue that will divide people concerns the ultimate goal of a physics undergraduate education. Here are three extreme views.

A. It should prepare people to do a PhD with the instructor.
Thus all the background knowledge needed should be covered, including the relevant specialised and advanced topics.

B. It should prepare people to do a physics PhD (usually in theory) at one of the best institutions in the world.
Thus, everyone should have a curriculum like Caltech.

C. It should give a general education that students will enjoy and will develop skills and knowledge that may be helpful when they become high school teachers or software engineers.

What about Academic Freedom?
This means different things to different people. In some ways I think that the teacher should have a lot of freedom to add and subtract topics, to pitch the course at the level they want, and to choose the text. I don't think department chairs or colleagues should be telling them what they "have" to do. Obviously, teachers need to listen to others and take their views into account, particularly if they are more experienced. But people should be given the freedom to make mistakes. There are risks. But I think they are worth it in order to maintain faculty morale, foster creativity, maintain standards, and honour the important tradition of academic freedom. Furthermore, it is very important that faculty are not told by administrators, parents, or politicians what they should or should not be doing. Here, we should spare a thought for our colleagues in the humanities and social sciences, particularly in the USA, who are under increasing pressure to act in certain ways.

I welcome comments on any of the above.
My colleagues would particularly like to hear any text recommendations. Books by Griffiths, Shankar, Sakurai, and Townsend have been mentioned as possibilities.

Tuesday, May 16, 2017

A radical procedure for evaluating applicants: read one of their papers!

Like most faculty I have to evaluate the scientific "performance" and "potential" of applicants for jobs, promotion, prizes, and grants. This is a difficult task because we are often asked to evaluate people whom we don't know, whose work in our (somewhat related) field of expertise we are unfamiliar with, or who are working in completely different fields we know nothing about. For example, I have been on a committee where I had the ridiculous task of evaluating people in fields such as veterinary medicine, geography, and agriculture! This is one reason why metrics are so seductive and deceptive.

Here I want to focus on evaluating applicants who work in an area that is close enough to my own expertise according to the following criterion: I can read one of their papers and make a reasonably informed assessment of its value, significance, and validity. This is important because it is easy to lose sight of the fact that there is only ONE measure that really matters: the ability of a person to produce valuable scientific knowledge. All the metrics, invited talks, grants, hype, slickly presented grant applications, enthralling presentations, .... are not what really matters. They are not research accomplishments. This measure can only be assessed from the actual content of papers.

I am embarrassed to admit that I am finally trying to work harder at ``practising what I preach.'' When I need to assess a credible application I try to identify just one paper that the applicant identifies as significant or, for a grant application, a paper that is central to the proposed project. I look at this paper and then go back to the (copious) paperwork of the submitted application. You might think that this takes more time. But it may actually save time, because it can be so definitive for my view (positive, neutral, or negative) that I have to read and agonise less over my assessment.

I have always done this when assessing applicants for postdoc positions working directly with me, but much less regularly in other situations.

Ideally, this should not be necessary. Rather, a good applicant would be someone whose papers we have already read because we wanted to. However, that is not the world we live in. 

Is this a reasonable approach? Do you do something like this? Any other recommended approaches?

Friday, May 12, 2017

How should you engage with Trump and Brexit supporters?

Trump's election win surprised many, including me. Most people underestimated the level of resentment towards "elites" and the "establishment", particularly among working class white voters. This was also a factor in Brexit.

This post is about how scientists and university faculty might engage more effectively with such groups. Some of the concerns I have are similar to those I have about Marching for Science.

Over the past six months I have read some articles and had some interesting conversations with people in both the USA and Australia who either voted for Trump or would have. What surprised and shocked me, even among some ``well educated'' people, was the level of distrust and resentment towards the "elites". [``You should not believe anything in the New York Times... Trump is telling the truth about crime statistics... We aren't safe.. All these liberals had it coming....'']
I also found helpful the book, Hillbilly Elegy.

My wife is a US citizen and one of her relatives in the USA works at a state university and sent us a popular newspaper article, written by a local faculty member, that aims to open a dialogue with a particular demographic that voted heavily for Trump. I largely agree with the author and have some similar political and religious views. I particularly liked the sincerity of the attempt to open a dialogue on several specific issues. However, there were several aspects of the article that I thought underline the problem we face, rather than moving towards a real dialogue. I have made the debatable decision not to link to the actual article, because of the negative comments I make about the author and because I am worried some of the content and issues discussed may distract readers from my points, which I think are part of broader challenges.

We are the elite.
The author claims they are not elite because they have a working class background and never studied at, or worked at, an Ivy League university.
I disagree. The author is a full Professor and has an annual salary of US$100 K [At that state university the salary of all employees is public and can be looked up in minutes.] In addition, they and their family may receive significant health care benefits and college tuition subsidies. They will keep their job until they choose to retire in their sixties (or even later) on a generous pension.
A survey found that the average American thinks that $122K per year makes someone "rich".
I would think the author (like me) is in the global one per cent. Furthermore, they have a job where they have fun teaching and researching subjects [in the social sciences and humanities] that many (but not me) would consider are "soft" or "left wing", and of no "economic" benefit.
Let me be clear, I think the remuneration, job security, and academic freedom of faculty is generally appropriate and important. We may not be Wall Street investment bankers or highly paid political consultants. But, we should acknowledge that we are part of the elite and privileged, regardless of our backgrounds.

Why should people trust us?
The author says that Trump voters should trust and respect the expertise of faculty on scientific, social,  economic and policy matters. "You don't seem to understand that our work goes through the extremely rigorous process of peer review."
Seriously!
In physical sciences, a lot of nonsense still gets published, especially in ``high impact'' journals. The social sciences and humanities are arguably even worse.
Hype from scientists, soft action by universities on scientific fraud (this New York Times article includes the photo below of the relevant professor with his private art collection!), and excessive university administrator salaries are not helping create trust and respect.

Don't talk down to people who are less educated.
Although I felt the author tried hard to engage Trump supporters, I could not help but feel (perhaps unfairly) that they were talking down to their audience. ``We know what is right and what is best for you. You really aren't smart enough to understand...''
This problem can also occur when scientists interact with groups such as climate change ``skeptics'' and young earth creationists.
I know that at times I am not as patient or as diplomatic as I could be.


What do you think? Are these general concerns part of the problem?

Tuesday, May 9, 2017

Is this a reasonable exam?

I struggle to set good exam questions. One wants to test knowledge and understanding in a way that is realistic within the constraints of students' abilities and backgrounds.

I do not have a well-defined philosophy or approach, except for often recycling my old questions...
I think I do have a prejudice towards two goals.

A. Testing higher level skills [e.g. relating theory to experiment, putting things in context, ...] as much as specific technical knowledge [e.g. state Bloch's theorem or solve the Schrodinger equation for a charged particle in constant magnetic field].

B. Testing general and useful knowledge. For basic undergraduate courses [e.g. years 1 to 3] the question should be one that another faculty member could do, even if they have not taught the course. Sometimes, colleagues write questions that I cannot do because you have to have done the problem before, e.g. in a tutorial. We then seem to be testing whether someone has done this particular course, not "essential" knowledge.

However, I am not sure I really go anywhere near reaching these goals.
Here is a recent mid-semester exam I set for my solid state class of fourth-year undergraduates.
Is it reasonable?

How do you set exam questions?
Do you have a particular approach?

Friday, May 5, 2017

Talk on "crackpot" theories

At UQ there is a great student physics club, PAIN. Today they are having a session on "crackpot" theories in science. Rather than picking on sincere but misguided amateurs I thought I would have a go at "mainstream" scientists who should know better. Here are my slides on quantum biology.

A more detailed and serious talk is a colloquium that I gave six years ago. I regret that the skepticism I expressed then seems to have been justified.

Postscript.
I really enjoyed this session with the students. Several gave interesting and stimulating talks, covering topics such as flat earth, last thursdayism, and The Final Theory of gravity [objects don't fall to the earth but rather the earth rises up to them...]. There were good discussions about falsifiability, Occam's razor, Newton's flaming laser sword, ...
There was an interesting mixture of history, philosophy, humour, and real physics.

I always find it encouraging to encounter students who are so excited about physics that they want to do something like this on a Friday night.

Wednesday, May 3, 2017

Computational density functional theory (DFT) in a nutshell

My recent post, Computational Quantum Chemistry in a nutshell, was quite popular. There are two distinct computational approaches: those based on calculating the wavefunction, which I described in that post, and those based on calculating the local charge density [the one-particle density matrix of the many-body system]. Here I describe the latter, which is based on density functional theory (DFT). Here are the steps and choices one makes.

First, as for wave-function based methods, one assumes the Born-Oppenheimer approximation, where the atomic nuclei are treated classically and the electrons quantum mechanically.

Next, one makes use of the famous (and profound) Hohenberg-Kohn theorem, which says that the total energy of the ground state of a many-body system is a unique functional of the local electronic charge density, E[n(r)]. This means that if one can calculate the local density n(r) one can calculate the total energy of the ground state of the system. Although this is an exact result, the problem is that one needs to know the functional, and in particular its exchange-correlation part, and one does not. One has to approximate it.

The next step is to choose a particular exchange-correlation functional. The simplest one is the local density approximation [LDA] where one writes E_xc[n(r)] = f(n(r)), where f(x) is the corresponding energy for a uniform electron gas with constant density x. Kohn and Sham showed that if one minimises the total energy as a function of n(r) then one ends up with a set of eigenvalue equations for some functions phi_i(r) which have the identical mathematical structure to the Schrodinger equation for the molecular orbitals that one calculates in a wave-function based approach with the Hartree-Fock approximation. However, it should be stressed that the phi_i(r) are just a mathematical convenience and are not wave functions. The similarity to the Hartree-Fock equations means the problem is not just computationally tractable but also relatively cheap.
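Schematically, in atomic units and a common convention (with v_ext the electron-nuclear potential and the exchange-correlation potential written as the functional derivative of E_xc), the Kohn-Sham equations read

$$\left[-\tfrac{1}{2}\nabla^2 + v_{\rm ext}(r) + \int d^3r'\,\frac{n(r')}{|r-r'|} + \frac{\delta E_{xc}[n]}{\delta n(r)}\right]\phi_i(r) = \varepsilon_i\,\phi_i(r), \qquad n(r) = \sum_{i\,{\rm occ}} |\phi_i(r)|^2,$$

and in the LDA the exchange-correlation energy is taken as $E_{xc}[n] = \int d^3r\; n(r)\,\epsilon_{xc}(n(r))$, where $\epsilon_{xc}(n)$ is the exchange-correlation energy per electron of a uniform electron gas of density n.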

When one solves the Kohn-Sham equations on the computer one has to choose a finite basis set. Often these are similar to the atom-centred basis sets used in wave-function based calculations. For crystals, one sometimes uses plane waves. Generally, the bigger and the more sophisticated and chemically appropriate the basis set, the better the results.

With the above uncontrolled approximations, one might not necessarily expect to get anything that approximates reality (i.e. experiment). Nevertheless, I would say the results are often surprisingly good. If you pick a random molecule, LDA can give a reasonable answer (say within 20 per cent) for the geometry, bond lengths, heats of formation, and vibrational frequencies... However, it does have spectacular failures, both qualitative and quantitative, for many systems, particularly those involving strong electron correlations.

Over the past two decades, there have been two significant improvements to LDA.
First, the generalised gradient approximation (GGA) which has an exchange-correlation functional that allows for the spatial variations in the density that are neglected in LDA.
Second, hybrid functionals (such as B3LYP) which contain a linear combination of the Hartree-Fock exchange functional and other functionals that have been parametrised to increase agreement with experimental properties.
It should be stressed that this means that the calculation is no longer ab initio, i.e. one where you start from just Schrodinger's equation and Coulomb's law and attempt to calculate properties.
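To give the flavour of the mixing involved, the B3LYP functional has (approximately) the form

$$E_{xc}^{\rm B3LYP} = E_{xc}^{\rm LDA} + a_0\,(E_x^{\rm HF} - E_x^{\rm LDA}) + a_x\,(E_x^{\rm GGA} - E_x^{\rm LDA}) + a_c\,(E_c^{\rm GGA} - E_c^{\rm LDA}),$$

with the three coefficients (roughly $a_0 = 0.20$, $a_x = 0.72$, $a_c = 0.81$) fitted to sets of experimental atomic and molecular data rather than derived from first principles.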

It should be stressed that for interesting systems the results can depend significantly on the choice of exchange-correlation functional. Thus, it is important to calculate results for a range of functionals and basis sets and not just report results that are close to experiment.
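Such a check might look something like the following minimal sketch, which uses the open-source PySCF package; the water geometry, functionals, and basis sets are only illustrative choices, not a recommendation.

```python
# Minimal sketch: compare DFT total energies for one molecule across several
# exchange-correlation functionals and basis sets, using PySCF.
from pyscf import gto, dft

geometry = "O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587"  # approximate water geometry (Angstrom)

for basis in ["sto-3g", "cc-pvdz", "cc-pvtz"]:
    for xc in ["lda,vwn", "pbe", "b3lyp"]:
        mol = gto.M(atom=geometry, basis=basis)
        mf = dft.RKS(mol)      # restricted Kohn-Sham
        mf.xc = xc             # choose the exchange-correlation functional
        energy = mf.kernel()   # self-consistent total energy in Hartree
        print(f"{basis:8s} {xc:8s} E = {energy:.6f} Ha")
```

The point is not the absolute numbers but how much the quantities you care about move as the functional and basis set are varied.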

DFT-based calculations have the significant advantage over wave-function based approaches that they are computationally cheaper (and so are widely used). However, they cannot be systematically improved [the dream of Jacob's ladder is more like a nightmare], and they are problematic for charge transfer and the description of excited states.

Thursday, April 27, 2017

Is it an Unidentified Superconducting Object (USO)?

If you look on the arXiv and in Nature journals there is a continuing stream of people claiming to observe superconductivity in some new material.
There is a long history of this and it is worth considering the wise observations of Robert Cava, back in 1997, contained in a tutorial lecture.
It would have been useful indeed in the early days of the field [cuprate superconductors] to have set up a "commission" to set some minimum standard of data quality and reproducibility for reporting new superconductors. An almost countless number of "false alarms" have been reported in the past decade, some truly spectacular. Koichi Kitazawa from the University of Tokyo coined these reports "USOs", for Unidentified Superconducting Objects, in a clever cross-cultural double entendre likening them to UFOs (Unidentified Flying Objects, which certainly are their equivalent in many ways) and to "lies" in the Japanese translation of USO. 
These have caused great excitement on occasion, but more often distress. It is important, however, to keep in mind what a report of superconductivity at 130K in a ceramic material two decades ago might have looked like to rational people if it came out of the blue sky with no precedent. That having been said, it is true that all the reports of superconductivity in new materials which were later confirmed to be true did conform to some minimum standard of reproducibility and data quality. I have tried to keep up with which of the reports have turned out to be true and which haven't. 
There have been two common problems: 
1. Experimental error- due, generally, to inexperienced investigators unfamiliar with measurement methods or what is required to show that a material is superconducting. This has become more rare as the field matures. 
[n.b. you really need to observe both zero resistivity and the Meissner effect].
2. "New" superconductors are claimed in chemical systems already known to have superconductors containing some subset of the components. This is common even now, and can be difficult for even experienced researchers to avoid. The previously known superconductor is present in small proportions, sometimes in lower Tc form due to impurities added by the experimentalist trying to make a new compound. In a particularly nasty variation on this, sometimes extra components not intentionally added are present - such as Al from crucibles or CO2 from exposure to air some time during the processing. I wish I had a dollar for every false report of superconductivity in a Nb containing oxide where the authors had unintentionally made NbN in small proportions.
There is also an interesting article about the Schon scandal, where Paul Grant claims
During my research career in the field of superconducting materials, I have documented many cases of an 'unidentified superconducting object' (USO), only one of which originated from an industrial laboratory, eventually landing in Physical Review Letters. But USOs have had origins in many universities and government laboratories. Given my rather strong view of the intrinsic checks and balances inherent in industrial research, the misconduct that managed to escape notice at Bell Labs is even more singular.

Monday, April 24, 2017

Have universities lost sight of the big questions and the big picture?

Here are some biting critiques of some of the "best" research at the "best" universities, by several distinguished scholars.
The large numbers of younger faculty competing for a professorship feel forced to specialize in narrow areas of their discipline and to publish as many papers as possible during the five to ten years before a tenure decision is made. Unfortunately, most of the facts in these reports have neither practical utility nor theoretical significance; they are tiny stones looking for a place in a cathedral. The majority of ‘empirical facts’ in the social sciences have a half-life of about ten years.
Jerome Kagan [Harvard psychologist], The Three Cultures Natural Sciences, Social Sciences, and the Humanities in the 21st Century
[I thank Vinoth Ramachandra for bringing this quote to my attention].
[The distinguished philosopher Alasdair] MacIntyre provides a useful tool to test how far a university has moved to this fragmented condition. He asks whether a wonderful and effective undergraduate teacher who is able to communicate how his or her discipline contributes to an integrated account of things – but whose publishing consists of one original but brilliant article on how to teach – would receive tenure. Or would tenure be granted to a professor who is unable or unwilling to teach undergraduates, preferring to teach only advanced graduate students and engaged in ‘‘cutting-edge research.’’ MacIntyre suggests if the answers to these two inquiries are ‘‘No’’ and ‘‘Yes,’’ you can be sure you are at a university, at least if it is a Catholic university, in need of serious reform. I feel quite confident that MacIntyre learned to put the matter this way by serving on the Appointment, Promotion, and Tenure Committee of Duke University. I am confident that this is the source of his understanding of the increasing subdisciplinary character of fields, because I also served on that committee for seven years. During that time I observed people becoming ‘‘leaders’’ in their fields by making their work so narrow that the ‘‘field’’ consisted of no more than five or six people. We would often hear from the chairs of the departments that they could not understand what the person was doing, but they were sure the person to be considered for tenure was the best ‘‘in his or her field."
Stanley Hauerwas, The State of the University, page 49.

Are these reasonable criticisms of the natural sciences?

Wednesday, April 19, 2017

Commercialisation of universities

I find the following book synopsis rather disturbing.
Is everything in a university for sale if the price is right? In this book, the author cautions that the answer is all too often "yes." Taking the first comprehensive look at the growing commercialization of our academic institutions, the author probes the efforts on campus to profit financially not only from athletics but increasingly, from education and research as well. He shows how such ventures are undermining core academic values and what universities can do to limit the damage. 
Commercialization has many causes, but it could never have grown to its present state had it not been for the recent, rapid growth of money-making opportunities in a more technologically complex, knowledge-based economy. A brave new world has now emerged in which university presidents, enterprising professors, and even administrative staff can all find seductive opportunities to turn specialized knowledge into profit. 
The author argues that universities, faced with these temptations, are jeopardizing their fundamental mission in their eagerness to make money by agreeing to more and more compromises with basic academic values. He discusses the dangers posed by increased secrecy in corporate-funded research, for-profit Internet companies funded by venture capitalists, industry-subsidized educational programs for physicians, conflicts of interest in research on human subjects, and other questionable activities. 
While entrepreneurial universities may occasionally succeed in the short term, reasons the author, only those institutions that vigorously uphold academic values, even at the cost of a few lucrative ventures, will win public trust and retain the respect of faculty and students. Candid, evenhanded, and eminently readable, Universities in the Marketplace will be widely debated by all those concerned with the future of higher education in America and beyond.
What is most disturbing is that the author of Universities in the Marketplace: The Commercialization of Higher Education is Derek Bok, former President of Harvard, the richest university in the world!

There is a helpful summary and review of the book here. A longer review compares and contrasts the book to several others addressing similar issues.

How concerned should we be about these issues?

Thursday, April 13, 2017

Quantum entanglement technology hype


Last month The Economist had a cover story and large section on commercial technologies based on quantum information.

To give the flavour here is a sample from one of the articles
Very few in the field think it will take less than a decade [to build a large quantum computer], and many say far longer. But the time for investment, all agree, is now—because even the smaller and less capable machines that will soon be engineered will have the potential to earn revenue. Already, startups and consulting firms are springing up to match prospective small quantum computers to problems faced in sectors including quantitative finance, drug discovery and oil and gas. .... Quantum simulators might help in the design of room-temperature superconductors allowing electricity to be transmitted without losses, or with investigating the nitrogenase reaction used to make most of the world’s fertiliser.
I know people are making advances [which are interesting from a fundamental science point of view] but it seems to me we are a very long way from doing anything cheaper [both financially and computationally] than a classical computer.

Doug Natelson noted that at the last APS March Meeting, John Martinis said that people should not believe the hype, even from him!

Normally The Economist gives a hard-headed analysis of political and economic issues. I might not agree with it [it is too neoliberal for me] but at least I trust it to give a rigorous and accurate analysis. I found this section to be quite disappointing. I hope uncritical readers don't start throwing their retirement funds into start-ups that are going to develop the "quantum internet" because they believe that this is going to be as important as the transistor (a claim the article ends with).

Maybe I am missing something.
I welcome comments on the article.

Tuesday, April 11, 2017

Should we fund people or projects?

In Australia, grant reviewers are usually asked to score applications according to three aspects: investigator, project, and research environment. These are usually weighted by something like 40%, 40%, and 20%, respectively. Previously, I wrote how I think the research environment aspect is problematic.

I struggle to see why investigator and project should have equal weighting. For example, consider the following caricatures.

John writes highly polished proposals with well defined projects on important topics. However, he has limited technical expertise relevant to the ambitious goals in the proposal. He also tends to write superficial papers on hot topics.

Joan is not particularly well organised and does not write polished proposals. She does not plan her projects but lets her curiosity and creativity lead her. Although she does not write a lot of papers she has a good track record of moving into new areas and making substantial contributions.

This raises the question of whether we should drop the project dimension of funding altogether. Suppose you had the following extreme system. You just give the "best" people a grant for three years and they can do whatever they want. Three years later they apply again and are evaluated based on what they have produced. This would encourage more risk-taking and save a lot of time in the grant preparation and evaluation process.

Are there any examples of this kind of "no strings attached" funding? The only examples I can think of are MacArthur Fellows and Royal Society Professorships. However, these are really for stellar senior people.

What do you think?

Thursday, April 6, 2017

Do you help your students debug codes?

Faculty vary greatly in their level of involvement with the details of the research projects of the undergrads, Ph.D students, and postdocs they supervise. Here are three different examples based on real senior people.

A. gives the student or postdoc a project topic and basically does not want to talk to them again until they bring a draft of a paper.

B. talks to their students regularly but boasts that they have not looked at a line of computer code since they became a faculty member. It is the sole responsibility of students to write and debug code.

C. is very involved. One night before a conference presentation they stayed up until 3 am trying to debug a student's code in the hope of getting some more results to present the next day.

Similar issues arise with analytical calculations or getting experimental apparatus to work.

What is an appropriate level of involvement?
On the one hand, it is important that students take responsibility for their projects and learn to solve their own problems.
On the other hand, faculty can speed things along and sometimes quickly find "bugs" because of experience. Also a more "hands on" approach gives a better feel for how well the student knows what they are doing and is checking things.
It is fascinating and disturbing to me that in the Schon scandal, Batlogg confessed that he never went in the lab and so did not realise there was no real experiment.

I think there is no clear cut answer. Different people have different strengths and interests (both supervisors and students). Some really enjoy the level of detail and others are more interested in the big picture.
However, I must say that I think A. is problematic.
Overall, I am closer to B. than C, but this has varied depending on the person involved, the project, and the technical problems.

What do you think?

Tuesday, April 4, 2017

Some awkward history

I enjoyed watching the movie Hidden Figures. It is based on a book that recounts the little-known history of the contributions of three African-American women to NASA and the first manned space flights in the 1960s. The movie is quite entertaining and moving while raising significant issues about racism and sexism in science. I grimaced at some of the scenes. On the one hand, some would argue we have come a long way in fifty years. On the other hand, we should be concerned about how the rise of Trump will play out in science.


One minor question I have: how much of the math on the blackboards is realistic?



Something worth considering is the extent to which the movie fits the too-common white savior narrative, as highlighted in a critical review, by Marie Hicks.

Saturday, April 1, 2017

A fascinating thermodynamics demonstration: the drinking bird

I am currently helping teach a second year undergraduate course Thermodynamics and Condensed Matter Physics. For the first time I am helping out in some of the lab sessions. Two of the experiments are based on the drinking bird.



This illustrates two important topics: heat engines and liquid-vapour equilibria.

Here are a few observations, in random order.

* I still find it fascinating to watch. Why isn't it a perpetual motion machine?

* Several more surprising things are:
a. it operates on such a small temperature difference,
b. that there is a temperature difference between the head and bulb,
c. it is so sensitive to perturbations such as warming with your fingers or changes in humidity.

* It took me quite a while to understand what is going on, which makes me wonder about the students doing the lab. How much are they following the recipe and saying the mantra...

* I try to encourage the students to think critically and scientifically about what is going on, asking some basic questions, such as "How do you know the head is cooler than the bulb? What experiment can you do right now to test your hypothesis? How can you test whether evaporative cooling is responsible for cooling the head?" Such an approach is briefly described in this old paper.

* Understanding and approximately quantifying the temperature of the head involves the concepts of humidity, wet-bulb temperature, and a psychrometric chart (see the rough sketch after this list). Again I find this challenging.

* This lab is a great example of how you don't necessarily need a lot of money and fancy equipment to teach a lot of important science and skills.
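For what it is worth, here is a rough numerical sketch of the kind of estimate involved, assuming the Magnus form for the saturation vapour pressure and a psychrometric constant of roughly 66 Pa/K; all the numbers are approximate and only meant to show the logic.

```python
# Rough sketch: estimate the wet-bulb temperature (the temperature an
# evaporatively cooled surface, like the bird's head, can approach) from the
# air temperature and relative humidity, using the psychrometric relation
#   e = e_sat(T_wb) - gamma * (T - T_wb)
import numpy as np

def e_sat(T):
    """Approximate saturation vapour pressure in Pa (Magnus formula); T in Celsius."""
    return 611.2 * np.exp(17.62 * T / (243.12 + T))

def wet_bulb(T, RH, gamma=66.0):
    """Solve for the wet-bulb temperature in Celsius by bisection."""
    e = RH / 100.0 * e_sat(T)       # actual vapour pressure of the air
    lo, hi = -20.0, T
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if e_sat(mid) - gamma * (T - mid) > e:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(wet_bulb(T=25.0, RH=50.0))    # roughly 18 C, i.e. several degrees of evaporative cooling
```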

Thursday, March 30, 2017

Perverse incentives in academia

According to Wikipedia, "A perverse incentive is an incentive that has an unintended and undesirable result which is contrary to the interests of the incentive makers. Perverse incentives are a type of negative unintended consequence."

There is an excellent (but depressing) article
Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition
by Marc A. Edwards and Siddhartha Roy.

I learnt of the article via a blog post summarising it, Every attempt to manage academia makes it worse.

Incidentally, Edwards is a water quality expert who was influential in exposing the Flint Water crisis.

The article is particularly helpful because it cites a lot of literature concerning the problems. It contains the following provocative table. I also like the emphasis on ethical behaviour and altruism.


It is easy to feel helpless. However, the simplest action you can take is to stop looking at metrics when reviewing grants, job applications, and tenure cases. Actually read some of the papers and evaluate the quality of the science. If you don't have the expertise then you should not be making the decision or should seek expert review.

Tuesday, March 28, 2017

Computational quantum chemistry in a nutshell

To the uninitiated (and particularly physicists) computational quantum chemistry can just seem to be a bewildering zoo of multiple letter acronyms (CCSD(T), MP4, aug-cc-pVDZ, ...).

However, the basic ingredients and key assumptions can be simply explained.

First, one makes the Born-Oppenheimer approximation, i.e. one assumes that the positions of the N_n nuclei in a particular molecule are classical variables [R is a 3N_n dimensional vector] and the electrons are quantum. One wants to find the eigenenergies of the N electrons. The corresponding Hamiltonian and Schrodinger equation are
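(in atomic units, writing r_i for the electron coordinates and R_A, Z_A for the nuclear positions and charges; a standard form)

$$\hat H(R) = -\frac{1}{2}\sum_{i=1}^{N}\nabla_i^2 \;-\; \sum_{i=1}^{N}\sum_{A=1}^{N_n}\frac{Z_A}{|r_i - R_A|} \;+\; \sum_{i<j}\frac{1}{|r_i - r_j|}\,, \qquad \hat H(R)\,\Psi_n(r_1,\ldots,r_N;R) = E_n(R)\,\Psi_n(r_1,\ldots,r_N;R).$$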


The electronic energy eigenvalues E_n(R) define the potential energy surfaces associated with the ground and excited states. From the ground state surface one can understand most of chemistry! (e.g., molecular geometries, reaction mechanisms, transition states, heats of reaction, activation energies, ....)
As Laughlin and Pines say, the equation above is the Theory of Everything!
The problem is that one can't solve it exactly.

Second, one chooses whether one wants to calculate the complete wave function for the electrons or just the local charge density (one-particle density matrix). The latter is what one does in density functional theory (DFT). I will just discuss the former.

Now we want to solve this eigenvalue problem on a computer and the Hilbert space is huge, even for a simple molecule such as water. We want to reduce the problem to a discrete matrix problem. The Hilbert space for a single electron involves a wavefunction in real space and so we want a finite basis set of L spatial wave functions, "orbitals". Then there is the many-particle Hilbert space for N-electrons, which has dimensions of order L^N. We need a judicious way to truncate this and find the best possible orbitals.

The single-particle orbitals can be introduced by expanding the electron field operator in this finite basis, where the a's are annihilation operators, giving a second-quantised Hamiltonian.
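In a common convention (with spin indices written explicitly, h_ij the one-electron integrals, and the two-electron integrals in chemists' notation) this reads

$$H = \sum_{ij,\sigma} h_{ij}\, a^{\dagger}_{i\sigma} a_{j\sigma} \;+\; \frac{1}{2}\sum_{ijkl}\sum_{\sigma\sigma'} (ij|kl)\; a^{\dagger}_{i\sigma} a^{\dagger}_{k\sigma'} a_{l\sigma'} a_{j\sigma}, \qquad (ij|kl) = \int d^3r\, d^3r'\; \frac{\phi_i^*(r)\phi_j(r)\,\phi_k^*(r')\phi_l(r')}{|r-r'|}.$$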

These two-electron integrals, denoted (ij|kl), include the Coulomb and exchange integrals.
Computing them efficiently is a big deal.
In semi-empirical theories one neglects many of these integrals and treats the others as parameters that are determined from experiment.
For example, if one only keeps a single term (ii|ii) one is left with the Hubbard model!

Equivalently, the many-particle wave function can be written as an expansion in Slater determinants (electronic configurations) built from these orbitals.
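In a common notation (assuming a reference determinant $|\Phi_0\rangle$, e.g. Hartree-Fock, with single, double, ... excitations out of it), this takes the form

$$|\Psi\rangle = c_0\,|\Phi_0\rangle + \sum_{ia} c_i^a\,|\Phi_i^a\rangle + \sum_{ijab} c_{ij}^{ab}\,|\Phi_{ij}^{ab}\rangle + \cdots$$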

Now one makes two important choices of approximations.

1. atomic basis set
One picks a small set of orbitals centered on each of the atoms in the molecule. Often these have the traditional s-p-d-f rotational symmetry and a Gaussian dependence on distance.

2. "level of theory"
This concerns how one solves the many-body problem or, equivalently, how one truncates the Hilbert space (the set of electronic configurations) or, equivalently, uses an approximate variational wavefunction. Examples include Hartree-Fock (HF), second-order perturbation theory (MP2), a Gutzwiller-type wavefunction (CC = Coupled Cluster), or Complete Active Space (CAS(K,L)), where one uses HF for the remaining orbitals and exact diagonalisation for a small subset of K electrons in L orbitals.
Full CI (configuration interaction) is exact diagonalisation. This is only possible for very small systems.

The many-body wavefunction contains many variational parameters: both the coefficients in front of the atomic orbitals that define the molecular orbitals and the coefficients in front of the Slater determinants that define the electronic configurations.

Obviously, the larger the atomic basis set and the "higher" the level of theory (i.e. the treatment of electron correlation), the closer one hopes to move to reality (experiment). I think Pople first drew a diagram such as the one below (taken from this paper).


However, I stress some basic points.

1. Given how severe the truncation of the Hilbert space is compared to the original problem, one would not necessarily expect to get anywhere near reality. The pleasant surprise for the founders of the field was that even with 1950s computers one could get interesting results. Although the electrons are strongly correlated (in some sense), Hartree-Fock can sometimes be useful. It is far from obvious that one would expect such success.

2. The convergence to reality is not necessarily uniform.
This gives rise to Pauling points: "improving" the approximation may give worse answers.

3. The relative trade-off between the horizontal and vertical axes is not clear and may be context dependent.

4. Any computational study should have some "convergence" tests, i.e. use a range of approximations and compare the results to see how robust any conclusions are.
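Such a test might look something like the following minimal sketch, which uses the open-source PySCF package; the water geometry, basis sets, and levels of theory are only illustrative choices.

```python
# Minimal sketch: how the total energy of a small molecule changes with
# basis set (one axis of the Pople diagram) and level of theory (the other),
# using PySCF.
from pyscf import gto, scf, mp, cc

geometry = "O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587"  # approximate water geometry (Angstrom)

for basis in ["sto-3g", "cc-pvdz", "cc-pvtz"]:
    mol = gto.M(atom=geometry, basis=basis)
    hf = scf.RHF(mol).run()     # Hartree-Fock (mean-field reference)
    mp2 = mp.MP2(hf).run()      # second-order perturbation theory
    ccsd = cc.CCSD(hf).run()    # coupled cluster with singles and doubles
    print(f"{basis:8s} HF={hf.e_tot:.6f}  MP2={mp2.e_tot:.6f}  CCSD={ccsd.e_tot:.6f}")
```

If a conclusion survives as both the basis set and the level of theory are improved, one can have more confidence in it.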

Thursday, March 23, 2017

Units! Units! Units!

I am spending more time with undergraduates lately: helping in a lab (scary!), lecturing, marking assignments, supervising small research projects, ...

One issue keeps coming up: physical units!
Many of the students struggle with this. Some even think it is not important!

This matters in a wide range of activities.

  • Giving a meaningful answer for a measurement or calculation. This includes canceling out units.
  • Using dimensional analysis to find possible errors in a calculation or formula.
  • Writing equations in dimensionless form to simplify calculations, whether analytical or computational.
  • Making order of magnitude estimates of physical effects.
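As a small illustration of the first three items above, here is a sketch in Python using the pint library (assuming it is installed); the pendulum numbers are just for illustration.

import math
import pint

ureg = pint.UnitRegistry()

# Period of a simple pendulum, T = 2*pi*sqrt(l/g): the units cancel to seconds.
l = 0.25 * ureg.meter
g = 9.81 * ureg.meter / ureg.second**2
T = 2 * math.pi * (l / g) ** 0.5
print(T.to(ureg.second))

# Dimensional analysis catches errors: adding a length to a time is meaningless.
try:
    nonsense = l + T
except pint.DimensionalityError as error:
    print("caught:", error)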

Any others you can think of?

Any thoughts on how we can do better at training students to master this basic but important skill?

Tuesday, March 21, 2017

Emergence frames many of the grand challenges and big questions in universities

What are the big questions that people are (or should be) wrestling within universities?
What are the grand intellectual challenges, particularly those that interact with society?

Here are a few. A common feature of those I have chosen is that they involve emergence: complex systems consisting of many interacting components produce new entities, and multiple scales (whether of length, time, energy, or number of entities) are involved.

Economics
How does one go from microeconomics to macroeconomics?
What is the interaction between individual agents and the surrounding economic order?
A recent series of papers (see here and references therein) has looked at how the concept of emergence played a role in the thinking of Friedrich Hayek.

Biology
How does one go from genotype to phenotype?
How do the interactions between many proteins produce a biochemical process in a cell?


The figure above shows a protein interaction network and is taken from this review.

Sociology
How do communities and cultures emerge?
What is the relationship between human agency and social structures?

Public health and epidemics
How do diseases spread and what is the best strategy to stop them?

Computer science
Artificial intelligence.
Recently it was shown how deep learning can be understood in terms of the renormalisation group.

Community development, international aid, and poverty alleviation
I discussed some of the issues in this post.

Intellectual history
How and when do new ideas become "popular" and accepted?

Climate change

Philosophy
How do you define consciousness?

Some of the issues are covered in the popular book, Emergence: The Connected Lives of Ants, Brains, Cities, and Software.
Some of these phenomena are related to the physics of networks, including scale-free networks. The most helpful introduction I have read is a Physics Today article by Mark Newman.

Given this common issue of emergence, I think there are some lessons (and possibly techniques) these fields might learn from condensed matter physics. It is arguably the field which has been the most successful at understanding and describing emergent phenomena. I stress that this is not hubris. This success is not because condensed matter theorists are smarter or more capable than people working in other fields. It is because the systems are "simple" enough, and (sometimes) have a clear separation of scales, that they are more amenable to analysis and controlled experiments.

Some of these lessons are "obvious" to condensed matter physicists. However, I don't think they are necessarily accepted by researchers in other fields.

Humility.
These are very hard problems, progress is usually slow, and not all questions can be answered.

The limitations of reductionism.
Trying to model everything by computer simulations which include all the degrees of freedom will lead to limited progress and insight.

Find and embrace the separation of scales.
The renormalisation group provides a method to systematically do this. A recent commentary by Ilya Nemenman highlights some recent progress and the associated challenges.

The centrality of concepts.

The importance of critically engaging with experiment and data.
They must be the starting and end point. Concepts, models, and theories have to be constrained and tested by reality.

The value of simple models.
They can give significant insight into the essentials of a problem.

What other big questions and grand challenges involve emergence?

Do you think condensed matter [without hubris] can contribute something?

Saturday, March 18, 2017

Important distinctions in the debate about journals

My post, "Do we need more journals?" generated a lot of comments, showing that the associated issues are something people have strong opinions about.

I think it is important to consider some distinct questions that the community needs to debate.

What research fields, topics, and projects should we work on?

When is a specific research result worth communicating to the relevant research community?

Who should be co-authors of that communication?

What is the best method of communicating that result to the community?

How should the "performance" and "potential" of individuals, departments, and institutions be evaluated?

A major problem for science is that over the past two decades the dominant answer to the last question (metrics such as Journal "Impact" Factors and citations) is determining the answer to the other questions. This issue has been nicely discussed by Carl Caves.
The tail is wagging the dog.

People flock to "hot" topics that can produce quick papers, may attract a lot of citations, and are beloved by the editors of luxury journals. Results are often obtained and analysed in a rush, not checked adequately, and presented in the "best" possible light with a bias towards exotic explanations. Co-authors are sometimes determined by career issues and the prospect of increasing the probability of publication in a luxury journal, rather than by scientific contribution.

Finally, there is a meta-question in the background. It is arguably more important but harder to answer.
How are the answers to the last question being driven by broader moral and political issues?
Examples include the rise of the neoliberal management class, treatment of employees, democracy in the workplace, inequality, post-truth, the value of status and "success", economic instrumentalism, ...

Thursday, March 16, 2017

Introducing students to John Bardeen

At UQ there is a great student physics club, PAIN. Their weekly meeting is called the "error bar." This Friday they are having a session on the history of physics and asked faculty if any would talk "about interesting stories or anecdotes about people, discoveries, and ideas relating to physics."

I thought for a while and decided on John Bardeen. There is a lot I find interesting. He is the only person to receive two Nobel Prizes in Physics. Arguably, the discoveries associated with the two prizes (the transistor and BCS theory) are of greater significance than the average Nobel. Then there is his difficult relationship with Shockley, who in some sense became the founder of Silicon Valley.

Here are my slides.


In preparing the talk I read the interesting articles in the April 1992 issue of Physics Today, which was completely dedicated to Bardeen. In his article, David Pines says
[Bardeen's] approach to scientific problems went something like this: 
  • Focus first on the experimental results, by careful reading of the literature and personal contact with members of leading experimental groups. 
  • Develop a phenomenological description that ties the key experimental facts together. 
  • Avoid bringing along prior theoretical baggage, and do not insist that a phenomenological description map onto a particular theoretical model. Explore alternative physical pictures and mathematical descriptions without becoming wedded to a specific theoretical approach. 
  • Use thermodynamic and macroscopic arguments before proceeding to microscopic calculations. 
  • Focus on physical understanding, not mathematical elegance. Use the simplest possible mathematical descriptions. 
  • Keep up with new developments and techniques in theory, for one of these could prove useful for the problem at hand. 
  • Don't give up! Stay with the problem until it's solved. 
In summary, John believed in a bottom-up, experimentally based approach to doing physics, as distinguished from a top-down, model-driven approach. To put it another way, deciding on an appropriate model Hamiltonian was John's penultimate step in solving a problem, not his first.
With regard to "interesting stories or anecdotes about people, discoveries, and ideas relating to physics," what would you talk about?

Wednesday, March 15, 2017

The power and limitations of ARPES

The past two decades have seen impressive advances in Angle-Resolved PhotoEmission Spectroscopy (ARPES). This technique has played a particularly important role in elucidating the properties of the cuprates and topological insulators. ARPES allows measurement of the one-electron spectral function, A(k,E), something that can be calculated from quantum many-body theory. Recent advances include the development of laser-based ARPES, which means that some measurements no longer require synchrotron time.
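For reference, these are the standard many-body expressions (textbook definitions, not specific to the paper below). The spectral function is related to the retarded Green's function by

A(\mathbf{k},\omega) = -\frac{1}{\pi} \, \mathrm{Im}\, G^{R}(\mathbf{k},\omega),

and in a Fermi liquid the coherent part of A is approximately a Lorentzian of weight Z_k centred on the renormalised quasi-particle dispersion. Band narrowing relative to DFT, as in the data below, corresponds to a small quasi-particle weight and an enhanced effective mass, roughly m*/m_band ~ 1/Z.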

A recent PRL shows the quality of data that can be achieved.

Orbital-Dependent Band Narrowing Revealed in an Extremely Correlated Hund’s Metal Emerging on the Topmost Layer of Sr2RuO4 
Takeshi Kondo, M. Ochi, M. Nakayama, H. Taniguchi, S. Akebi, K. Kuroda, M. Arita, S. Sakai, H. Namatame, M. Taniguchi, Y. Maeno, R. Arita, and S. Shin

The figure below shows a colour density plot of the intensity [related to A(k,E)] along a particular direction in the Brillouin zone.  The energy resolution is of the order of meV, something that would not have been dreamed of decades ago.
Note how the observed dispersion of the quasi-particles is much smaller than that calculated from DFT, showing how strongly correlated the system is.

The figure below shows how with increasing temperature a quasi-particle peak gradually disappears, showing the smooth crossover from a Fermi liquid to a bad metal, above some coherence temperature.
The main point of the paper is that the authors are able to probe just the topmost layer of the crystal and that the associated electronic structure is more correlated (the bands are narrower and the coherence temperature is lower) than the bulk.
Again it is impressive that one can make this distinction.

But this does highlight a limitation of ARPES, particularly in the past. It is largely a surface probe and so one has to worry about whether one is measuring surface properties that are different from the bulk. This paper shows that those differences can be significant.

The paper also contains DFT+DMFT calculations which are compared to the experimental results.

Monday, March 13, 2017

What do your students really expect and value?

Should you ban cell phones in class?

I found this video quite insightful. It reminded me of the gulf between me and some students.



It confirmed my policy of not allowing texting in class. Partly this is to force students to be more engaged. But it is also to make students think about whether they really need to be "connected" all the time.

What is your policy on phones in class?

I think that the characterisation of "millennials" may be a bit harsh and too one-dimensional, although I did encounter some of the underlying attitudes in a problematic class a few years ago. Reading a Time magazine cover article was helpful then.
I also think that this is not a good characterisation of many of the students who make it as far as advanced undergraduate or Ph.D. programs. By then many of the narcissistic and entitled have self-selected out. It is just too much hard work.

Friday, March 10, 2017

Do we really need more journals?

NO!

Nature Publishing Group continues to spawn "Baby Natures" like crazy.

I was disappointed to see that Physical Review is launching a new journal Physical Review Materials. They claim it is to better serve the materials community. I found this strange. What is wrong with Physical Review B? It does a great job.
Surely, the real reason is APS wants to compete with Nature Materials [a front for mediocrity and hype] which has a big Journal Impact Factor (JIF).
On the other hand, if the new journal could put Nature Materials out of business I would be very happy. At least the journal would be run and controlled by real scientists and not-for-profit.

So I just want to rant about two points I have made before.

First, the JIF is essentially meaningless, particularly when it comes to evaluating the quality of individual papers. Even if one believes citations are some sort of useful measure of impact, one should look at the distribution, not just the mean. Below the distribution is shown for Nature Chemistry.


Note how the distribution is highly skewed, being dominated by a few highly cited papers. More than 70 per cent of papers score less than the mean.
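A toy illustration of why this matters (simulated numbers, not the actual Nature Chemistry data): for a log-normal distribution of citation counts the mean, which is what the JIF tracks, sits well above the typical paper.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical log-normal citation counts for 100,000 papers.
citations = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)

mean = citations.mean()
print(f"mean = {mean:.1f}, median = {np.median(citations):.1f}")
print(f"fraction of papers below the mean = {(citations < mean).mean():.2f}")
# For these parameters roughly 70 per cent of papers fall below the mean.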

Second, the problem is that people are publishing too many papers. We need fewer journals, not more!
Three years ago, I posted about how I think journals are actually redundant and gave a specific proposal for how to move towards a system that produces better science (more efficiently) and more accurately evaluates the quality of individuals' contributions.

Getting there will obviously be difficult. However, initiatives such as SciPost and PLOS ONE are steps in a positive direction.
Meanwhile those of us evaluating the "performance" of individuals can focus on real science and not all this nonsense beloved by many.

Wednesday, March 8, 2017

Is complexity theory relevant to poverty alleviation programs?

For me, global economic inequality is a huge issue. A helpful short video describes the problem.
Recently, there has been a surge of interest among development policy analysts in how complexity theory may be relevant to poverty alleviation programs.

On an Oxfam blog there is a helpful review of three books on complexity theory and development.
I recently read part of one of these books, Aid on the Edge of Chaos: Rethinking International Cooperation in a Complex World, by Ben Ramalingam.

Here is some of the publisher blurb.
Ben Ramalingam shows that the linear, mechanistic models and assumptions on which foreign aid is built would be more at home in early twentieth century factory floors than in the dynamic, complex world we face today. All around us, we can see the costs and limitations of dealing with economies and societies as if they are analogous to machines. The reality is that such social systems have far more in common with ecosystems: they are complex, dynamic, diverse and unpredictable. 
Many thinkers and practitioners in science, economics, business, and public policy have started to embrace more 'ecologically literate' approaches to guide both thinking and action, informed by ideas from the 'new science' of complex adaptive systems. Inspired by these efforts, there is an emerging network of aid practitioners, researchers, and policy makers who are experimenting with complexity-informed responses to development and humanitarian challenges. 
This book showcases the insights, experiences, and often remarkable results from these efforts. From transforming approaches to child malnutrition, to rethinking processes of economic growth, from building peace to combating desertification, from rural Vietnam to urban Kenya, Aid on the Edge of Chaos shows how embracing the ideas of complex systems thinking can help make foreign aid more relevant, more appropriate, more innovative, and more catalytic. Ramalingam argues that taking on these ideas will be a vital part of the transformation of aid, from a post-WW2 mechanism of resource transfer, to a truly innovative and dynamic form of global cooperation fit for the twenty-first century.
The first few chapters give a robust and somewhat depressing critique of the current system of international aid. He then discusses complexity theory and finally specific case studies.
The Table below nicely contrasts two approaches.

A friend who works for a large aid NGO told me about the book and described a workshop (based on the book) that he attended where the participants even used modeling software.

I have mixed feelings about all of this.

Here are some positive points.

Any problem in society involves a complex system (i.e. many interacting components). Insights, both qualitative and quantitative, can be gained from "physics" type models. Examples I have posted about before, include the statistical mechanics of money and the universality in probability distributions for certain social quantities.
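As a reminder of the kind of model I mean, here is a minimal sketch in Python of one version of the random money-exchange model behind the "statistical mechanics of money": the total is conserved, random pairs reshuffle their combined money, and the distribution relaxes towards a Boltzmann-like exponential. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)
n_agents, n_exchanges = 1000, 200_000
money = np.full(n_agents, 100.0)          # everyone starts with the same amount

for _ in range(n_exchanges):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    total = money[i] + money[j]
    eps = rng.uniform()                   # randomly reshuffle the pair's total
    money[i], money[j] = eps * total, (1 - eps) * total

# In the steady state the distribution is approximately exponential:
# most agents hold less than the mean, while a few hold much more.
print(money.mean(), np.median(money), money.max())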

Simplistic mechanical thinking, such as that associated with Robert McNamara in Vietnam and then at the World Bank, is problematic and needs to be critiqued. Even a problem as "simple" as replacing wood-burning stoves turns out to be much more difficult and complicated than anticipated.

A concrete example discussed in the book is that of positive deviance, which takes its partial motivation from power laws.

Here are some concerns.

Complexity theory suffers from being oversold. It certainly gives important qualitative insights and concrete examples in "simple" models. However, to what extent complexity theory can give a quantitative description of real systems is debatable. This is particularly true of the idea of "the edge of chaos" that features in the title of the book. A less controversial title would have replaced this with simply "emergence", since that is a lot of what the book is really about.

Some of the important conclusions of the book could be arrived at by different, more conventional routes. For example, a major point is that "top down" approaches are problematic. This is where some wealthy Westerners define a problem, define the solution, then provide the resources (money, materials, and personnel) and impose the solution on local poor communities. A more "bottom up" or "complex adaptive systems" approach is one where one consults with the community, gets them to define the problem and brainstorm possible solutions, gives them ownership of implementing the project, and adapts the strategy in response to trials. One can come to this same approach if one's starting point is simply humility and respect for the dignity of others. We don't need complexity theory for that.

The author makes much of the story of Sugata Mitra, whose TED talk, "Kids can teach themselves", has more than a million views. He put some computer terminals in a slum in India and claims that poor uneducated kids taught themselves all sorts of things, illustrating "emergent" and "bottom up" solutions. It is a great story. However, it has received some serious criticism, which is not acknowledged by the author.

Nevertheless, I recommend the book and think it is a valuable and original contribution about a very important issue.