At the UN climate negotiations under way in Poland this week, politicians will be poring over forecasts of climate change. It's an opportune moment for physicist Lenny Smith to challenge the climate modellers who he believes are overselling their results. Human activity really is changing the global climate, he tells Fred Pearce, but we must stop pretending that we know the details of how it will all play out.
Lenny Smith
© Pal Hansen. Physicist Lenny Smith thinks that climate modellers are overselling their results.

You work with climate models, but you have issues with them too. Why?

The temptation to interpret model noise as forecast information invades our living rooms every night. TV weather-forecast maps look so realistic it is hard not to over-interpret tiny details - to imagine that the band of rain passing over Oxfordshire at noon next Saturday requires postponing a barbecue. Rain may indeed be likely somewhere in the area sometime on Saturday, but the details we see on TV forecasts are noise from the models. I think we are having exactly the same problem with climate projections.

Does this mean the models are useless?

They are certainly right on the basic story of global warming. Man-made climate change is real. However, there is a risk that something important will happen that is not predicted by any of today's models - and they cannot give us trustworthy forecasts of climate for regions as small as most countries are. The bottom line is that the models help us understand pieces of the climate system, but that does not mean we can predict the details.

Why do the forecasts go wrong?

You may not have the right initial conditions to start your forecast; you may not know the right equations; or you may not have the computing power to solve them.

A forecast from a model is sometimes sensitive to the initial conditions, and a way round this is to run the model many times from slightly different initial conditions. This is called ensemble forecasting. The challenge then is to interpret the results. Suppose that 30 per cent of the simulations in an ensemble weather forecast say it will rain tomorrow. That doesn't mean there is a 30 per cent chance of rain: the ensemble samples the model's behaviour, not the real atmosphere's, so those frequencies need not match real-world probabilities.
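
To make that concrete, here is a minimal sketch of ensemble forecasting - my illustration, not Smith's own code. The Lorenz-63 system stands in for a weather model, each ensemble member starts from a slightly perturbed initial state, and the members soon disagree. All parameter values are illustrative.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one forward-Euler step."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
n_members = 30
# Perturb the starting point by a tiny amount for each ensemble member.
members = np.array([1.0, 1.0, 1.0]) + 1e-6 * rng.standard_normal((n_members, 3))

for step in range(4000):  # integrate every member forward in time
    members = np.array([lorenz_step(m) for m in members])
    if step % 1000 == 0:
        print(f"step {step:4d}: spread in x = {members[:, 0].std():.6f}")

# The fraction of members above a threshold is NOT automatically a
# calibrated probability - exactly the interpretation problem Smith
# describes for the "30 per cent of simulations say rain" case.
print(f"fraction of members with x > 0: {(members[:, 0] > 0.0).mean():.2f}")
```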

Some weather forecasters feel their job is to provide "certainty", even when that certainty is repeatedly proved wrong. To them, offering a probability distribution, however useful, would count as failure.

Do similar issues arise over climate projections?

Yes. If modellers are asked for detailed forecasts about what will happen, say, in south-east England in 2060, some feel that it's their job to provide the best available information. They then report whatever today's biggest computers spit out, even if they know those results are not robust.

Suppose different models give simulations for precipitation in 2060, ranging from 20 per cent less rainfall to 40 per cent more rainfall. The average is a 10 per cent increase, but is that value of any use to decision-makers? We don't know how to turn those numbers into a trustworthy probability forecast. Maybe we should just accept we don't know the details.
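
A toy calculation, with invented numbers spanning the range quoted above, shows how the ensemble mean can look deceptively precise:

```python
# Hypothetical projections from five models of percentage change in
# 2060 rainfall, spanning the -20% to +40% range mentioned above.
projections = [-20, -5, 10, 25, 40]

mean = sum(projections) / len(projections)
spread = max(projections) - min(projections)
print(f"ensemble mean:   {mean:+.0f}%")  # +10% - a single tidy number
print(f"ensemble spread:  {spread}%")    # 60 points of disagreement

# Nothing here licenses turning the member frequencies into a
# trustworthy probability distribution for rainfall in 2060.
```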

Next year, the UK Climate Impacts Programme [largely funded by the British government] will unveil a "weather generator" that will allow corporations or government agencies to print out hourly weather patterns for beyond 2060, with a spatial resolution of 5 kilometres. Understandably, many users think they can use this information to work out how big to build reservoirs or flood defences. How much detail should people believe? Where does insight from the laws of physics end and meaningless happenstance from model details kick in?

Do you worry that the doubts you express about climate models could fuel the arguments of climate sceptics?

Yes I do. Effective application of climate science hinges on clear communication of which results we believe are robust and which are not. Any discussion of such limits can be abused by those seeking only to confuse. But failing to discuss these limits openly can hinder society's ability to respond, and also compromise the future credibility of science.

How do climate scientists react to your criticisms?

Most of the working scientists, especially the younger ones, are worried about over-interpretation. In some countries, though, national research centres are charged with both advancing the science and selling their results commercially. This must be a difficult position. It is hard for a salesman to lead his presentation with uncertainty, even if that's what the science says.

It's interesting to compare these debates with what happens in other disciplines. Seismologists practically throw rocks at each other when arguing about earthquake predictions. The climate community presents a more unified front. That's not unreasonable, because the basic physics does make sense and deserves unanimity. The downside is that if someone goes too far in interpreting model results, they don't always face proper scrutiny.

So should we believe the reports produced by the Intergovernmental Panel on Climate Change?

Broadly yes - we understand a lot. You have to read the qualifiers carefully, though. In the most recent report, for instance, there is an explicit acknowledgement that the range of simulations in today's models is too narrow. That is, future warming could be greater or less than what is suggested by the diversity between models in the report. It's good that the qualifier is in there, but it is a hell of a qualifier to find on page 797.

Doesn't this risk undermining the science of climate change?

It could. I see three dangers for climate science. The first is politically or financially motivated naysayers. The second is academics who are new to climate science and to forecasting physical systems - people who don't see the difference between forecasting the next full moon, which is straightforward, and the next stock-market crash, which is not. They are exploited by the naysayers.

But perhaps the greatest danger is climate scientists blatantly overselling what we know. That could bring everything down and cost the world valuable time.

So what should policy-makers do?

They could help by asking scientists to explain what we know, rather than posing questions they would like answered in an ideal world.

And what use are the models?

For advancing our understanding, they are fundamental. For decision-making, even given their uncertainties, they can help minimise our vulnerability. They are also a source of information about what might plausibly happen - even if they cannot yield probabilities on what will actually happen.

That is fair enough. In the real world we don't usually expect certainty, and don't have much use for averages - but we do need information about plausible risks. When I cross the street, average statistics about cars and how they are driven are of less value to me than the sound of a bus heading my way. Models help us listen for that bus. So let's forget the spurious certainties, and even the spurious probabilities, and concentrate on what matters.

How did you get interested in the whole issue of uncertainty in modelling?

In New York I worked with Jim Hansen, the climate scientist, and looked at the codes of the early computer models. I did my thesis with Ed Spiegel, an astrophysicist who had worked on chaos since the 1960s. I had to grapple with uncertainty at every turn.

These days, my work involves ways to better interpret and improve our models. Some of that is about climate and weather, but I also study fluid dynamics and signals in everything from the national electricity grid to simple circuits. You'd think an electric circuit was completely predictable, but it isn't.
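
As an illustration of that last point, here is a sketch - my example, not taken from Smith's work - of Chua's circuit, a simple electronic circuit whose equations, in dimensionless form, are chaotic. Two runs that start almost identically drift apart completely.

```python
import numpy as np

ALPHA, BETA = 15.6, 28.0           # standard chaotic parameter values
M0, M1 = -8.0 / 7.0, -5.0 / 7.0    # slopes of the nonlinear resistor

def chua_rhs(s):
    """Right-hand side of the dimensionless Chua's-circuit equations."""
    x, y, z = s
    fx = M1 * x + 0.5 * (M0 - M1) * (abs(x + 1) - abs(x - 1))
    return np.array([ALPHA * (y - x - fx), x - y + z, -BETA * y])

# Two trajectories that start a millionth of a unit apart...
a = np.array([0.1, 0.0, 0.0])
b = a + np.array([1e-6, 0.0, 0.0])
dt = 0.001
for _ in range(50_000):  # forward-Euler integration to t = 50
    a = a + dt * chua_rhs(a)
    b = b + dt * chua_rhs(b)

# ...end up on entirely different parts of the attractor: sensitive
# dependence on initial conditions, even in a "predictable" circuit.
print("separation after integration:", np.linalg.norm(a - b))
```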

Profile:

Lenny Smith gained a PhD in physics from Columbia University, New York. He is now professor of statistics at the London School of Economics and a senior research fellow at both the University of Oxford and the Oxford Centre for Industrial and Applied Mathematics. His research focuses on the improvement and interpretation of numerical models. In 2003 he won the Royal Meteorological Society's FitzRoy prize for distinguished work in applied meteorology.