I dedicate this column to the memory and work of Vincent Gray, one of the earliest and most effective critics of the deliberate deception that human CO2 is causing global warming. He knew what was wrong because he was an expert reviewer of the Science Reports of the Intergovernmental Panel on Climate Change (IPCC). He was among the first to identify the failure to validate any climate models. Here is how he explained the problem in his 2002 book, The Greenhouse Delusion:
"The whole point is, that a computer-based mathematical model of any process or system is useless unless it has been validated. Validation of such a model involves the testing of each equation and the study of each parameter, to discover its statistically based accuracy using a range of numerically based probability distributions, standard deviations, correlation coefficients, and confidence limits. The final stage is a thorough test of the model's ability to predict the result of changes in the model parameters over the entire desired range."
"As a response to my comment that no model has ever been validated, they changed the title to 'Climate Models - Evaluation' and changed 'validation' to 'evaluation' no less than fifty times. There is not even a procedure in any IPCC publication describing what might need to be done in order to validate a model."
"Instead of validation, and the traditional use of mathematical statistics, the models are "evaluated" purely from the opinion of those who devised them."
In the early days, a test of a model was called 'hindsight' forecasting. You ran the model back over a previously known situation to determine if it recreated that situation. If it did, the model was more likely to forecast accurately going forward. As I understand it, this became known as validation, and the modelers claimed success because they tweaked variables until the model recreated past conditions. The problem is that this produced a hindsight correlation, with no proof of cause and effect.
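To make the distinction concrete, here is a minimal sketch (Python, using synthetic data and a simple linear stand-in for a climate model; all names and numbers are illustrative, not anyone's actual model) of the difference between a flattering hindsight fit and a genuine out-of-sample test:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "observed" temperature record: a weak trend plus noise.
years = np.arange(1900, 2000)
observed = 0.005 * (years - 1900) + rng.normal(0.0, 0.2, years.size)

# Split the record: tune on the first half, test on the second.
split = years.size // 2
train_y, test_y = years[:split], years[split:]
train_t, test_t = observed[:split], observed[split:]

# "Tuning": fit the model's free parameters to the training period.
slope, intercept = np.polyfit(train_y, train_t, 1)

def model(y):
    return slope * y + intercept

# In-sample fit (the 'hindsight correlation') is always flattering...
rmse_in = np.sqrt(np.mean((model(train_y) - train_t) ** 2))
# ...out-of-sample error is the real test of predictive skill.
rmse_out = np.sqrt(np.mean((model(test_y) - test_t) ** 2))

print(f"in-sample RMSE:     {rmse_in:.3f} degC")
print(f"out-of-sample RMSE: {rmse_out:.3f} degC")
```

A model tuned to recreate the past will always score well in-sample; only the second number says anything about forecasting skill.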

These 'validation' stories illustrate how the IPCC does not even carry out basic procedures but, when caught, tweaks the result or uses different terminology. To my knowledge, the lack of validation continues. Vincent's work provides an opportunity to respond to comments about my last article. He pointed out that the IPCC errors were more than insignificant minor ones. They were profound, and they underscore that there is no level of certainty involved. Critics of my article argued that there was sufficient certainty to consider the models valid tools. I reject that claim completely because the certainty is inadequate at every phase of the anthropogenic global warming (AGW) claim.

The legal system considers, and punishes differently, crimes of passion and crimes of premeditation. They are viewed as two separate crimes because of the intent. The global climate campaign set out to prove to the world that human production of CO2 was causing AGW, which makes it premeditated. It became a crime of passion after the fact because the perpetrators allowed their passion to override and resist anything that revealed the truth. The people involved knew from the start that what they were doing was less than pseudoscience, so it was premeditated. Sadly, if they didn't know, and there were far too many who didn't, then they were incompetent.

There is another extremely large group: scientists, from any discipline, who never read the IPCC Reports. Those that do experience what meteorologist and physicist Klaus-Eckart Puls described:
"I became outraged when I discovered that much of what the IPCC and the media were telling us was sheer nonsense and was not even supported by any scientific facts and measurements."
The late physicist Hal Lewis described the IPCC work as follows:
"It is the greatest and most successful pseudoscientific fraud I have seen in my long life as a physicist."
No uncertainty there.

I pointed out in the early 1980s, when the use of climate for a political agenda began, that there are two distinct obligations for those involved in the AGW deception. The first requires scientific responsibility for what is produced. The second requires socio-economic responsibility for policy based on that science. Neither the scientific nor the social and political obligations were met. Worse, they were deliberately avoided.

The dangers crystallized when the deception entered the public arena with James Hansen's staged appearance before the Senate Committee in 1988. I make no apologies for being repetitive and outspoken on these issues, because too many still try to rationalize what went on.

In a comment about my recent article, Richard Tol wrote,
This is all rather exaggerated. Tim's main point appears to be that, since we do not know things precisely, we do not know anything at all. Few things in life are known with great precision and accuracy, particularly those things that matter.

Humans are pretty good at dealing with imperfect information. We would have long gone extinct if we were not.

Climate policy is just another case of decision making under uncertainty. We know how to do that.
This fatuous statement provides the basis for explaining everything that is wrong with the 'science' for a political agenda that is the work of the IPCC. The first point is that it represents the standard environmental fallback when the data and science are inadequate - the Precautionary Principle, the idea that it is better to act anyway, just in case. No, it isn't, for the reasons I gave when I appeared before the Canadian Parliamentary Committee on the CFC and ozone issue. Science advances by speculation, although scientists call it hypothesizing. Many scientists can and do create speculations based on a few facts and assumptions every day. Nowadays, the media and others scour academic publications looking for sensational global or life-threatening speculations. They never report on the speculations that later research fails to confirm.

My challenge to the politicians was: which speculations were they going to deal with? They can't deal with them all, and if left to the scientific method of skepticism, almost all will prove unjustified. Ironically, development success provides the money to create a distorted list of priorities. We can afford to be stupid.

The IPCC was formed under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC). That agency was under the umbrella of the United Nations Environment Program (UNEP) and Agenda 21, a global environmental strategy for the 21st century. It was the brainchild of Maurice Strong, who presented it to the world for ratification in Rio de Janeiro in 1992. Like all final products, it is based on a series of assumptions. In Agenda 21 these are set out as Principles that set the parameters for action and activities. Principle 15 says:
"In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."
Let's parse what they say. First, it is discriminatory and anti-success: it only applies to countries that can afford it ('according to their capabilities'). Who decides that? They do. We act when there are threats of serious or irreversible damage. Who decides that? They do. Lack of full scientific certainty is no reason not to act, but who decides what is adequate certainty? They do. They gave themselves the authority to dismiss the certainty that Tol referenced.

Now we know that the models fail to meet any 'certainty' requirement at even the most basic level. Yet that inadequate certainty underlies everything they said and did and, sadly, still do. The computer models are mathematical constructs in which the Earth's atmosphere is represented by arbitrary cubes (Figure 1).

[Figure 1: weather data]
There is no weather data for approximately 85 percent of the surface grid cells. The temperature data is taken at between 1.25 and 2 m above ground, so it is not representative of the temperature above or below that level. There is virtually no data above the surface. The database used to build the mathematical models does not even approximate the average temperature in any cube. The data that exists does not meet any acceptable level of scientific certainty. For example, the US has, arguably, the best and certainly the most expensive temperature measuring devices, yet Anthony Watts found only 7.9 percent of USHCN weather stations achieved accuracy better than 1°C. To put that in context, the IPCC 2001 Report said a 0.6°C increase in approximately 120 years was not natural.
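A back-of-the-envelope sketch (Python; the inputs are simply the figures quoted above, used here for illustration only) puts those numbers side by side:

```python
# Figures quoted in the text above; treat them as illustrative inputs.
claimed_warming_degC = 0.6     # IPCC 2001: rise over ~120 years
instrument_error_degC = 1.0    # error of >92% of USHCN stations (Watts)
cells_with_data = 0.15         # ~85% of surface grid cells have no data

print(f"signal-to-error ratio:    {claimed_warming_degC / instrument_error_degC:.2f}")
print(f"grid cells with any data: {cells_with_data:.0%}")
```

On these figures, the claimed century-scale signal (0.6°C) is smaller than the measurement error (1°C) at more than 92 percent of the stations, and the stations cover only a fraction of the grid.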

There is not a single variable used by the IPCC that comes even close to what is considered a statistically significant sample. As I understand it, a sample needs to be 30 percent of the population for significance. The IPCC people should know this: as members of the World Meteorological Organization (WMO), they work with 30-year 'normals' all the time. The problem is that even that is meaningless when applied to a climate record. A thirty-year sample of a 100-year record may be statistically significant, but it is not for a longer record.
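A quick simulation (a sketch only, assuming a persistent AR(1) series as a crude stand-in for a climate record; the parameters are arbitrary) illustrates why a 30-year 'normal' depends on where it happens to sit in a longer record:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1000-year stand-in for a climate record: an AR(1) series with
# strong persistence, so it wanders on multi-decadal scales.
n = 1000
series = np.zeros(n)
for t in range(1, n):
    series[t] = 0.95 * series[t - 1] + rng.normal(0.0, 0.1)

# Compute every non-overlapping 30-year "normal".
normals = series[: n - n % 30].reshape(-1, 30).mean(axis=1)

print(f"spread of 30-year normals: {normals.min():.2f} to {normals.max():.2f}")
print(f"std dev across normals:    {normals.std():.2f}")
```

Each 30-year normal looks stable on its own, yet successive normals drift substantially across the longer record, so no single one represents the whole.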

The IPCC and its originators knew that certainty was critical, so they set out to fake it. Working Group I (WGI) produces the 'scientific' proof that human CO2 is causing warming. Its members represent only a small portion of the people working on the science, and very few of them have any climatology training; they are mostly specialists in one small piece of the climate puzzle. Their findings are taken without question and form the basis of the work of Working Groups II and III. This means those groups only examine the impacts of, and mitigation for, warming that is treated as certain to occur.

People in WGI are aware of the limitations of the data. At the very start of Assessment Report 5 (AR5), there is a section titled "Treatment of Uncertainties." These are the now familiar "likelihoods" that smear and spread already inadequate precision.

[Figure 2: emissions data]
Figure 2 is the 'Forcings' table from AR5. These are the variables they consider important as human causes of climate change. On the right is a column marked "Level of Confidence." Who determines that, and how is it measured? It is by default a subjective measure, but the IPCC prefers that. The only one marked Very High (VH) is CO2, a self-serving assessment that is easily debunked. Only three of the eleven are listed as High. Just consider one, aerosols: could somebody give me the data on the number and nature of aerosols in the atmosphere, as well as the volume and how it varies over any time period?

The 'forcing' table in Figure 2 was modified from the one used in AR3 (Figure 3).

[Figure 3: radiative data]
There, the column was labeled LOSU, for "Level of Scientific Understanding." What on earth is that except another subjective measure? I am not convinced "confidence" was an improvement. There was a marginal statistical improvement because the number of "High" ratings went from 2 of 11 to 3 of 11, and the number of "Low" went from 4 to 2. However, when you read the Report, in almost every category of natural variables the LOSU or confidence level is Low to non-existent.

Most of the public, including almost all the media and most scientists, have never read the WGI Report. Its only possible purpose is to let the IPCC say, if later challenged, that they acknowledged the serious data and methodology problems. A major reason it is not read is that deliberate policy directed its release after the Summary for Policymakers (SPM) and the Synthesis Report. Those Reports are deliberately created to reduce doubt and exaggerate certainty. To achieve even the minimal level of certainty that a politician or leader requires, they bypassed, underplayed, or distracted from the complete lack of certainty in the science.

Consider the figure Jones produced of global temperature increase over 120 years: 0.6°C. The actual number published was 0.6°C ±0.2°C. Reported in terms a politician or the public would understand, that is 0.6°C ±33.3%. There is no person or group, even a politician, who would base policy of any sort on such percentages. There are strict limits of tolerable uncertainty in all other scientific research; why not in climate science? The question is, what percentage of certainty is reasonable? That question crosses the line in climatology between science and socio-economic policy.
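The arithmetic behind that percentage is simple:

```latex
\frac{\pm 0.2\,^{\circ}\mathrm{C}}{0.6\,^{\circ}\mathrm{C}} \times 100\% \approx \pm 33.3\%
```

That is, the published error band is a third of the claimed signal itself.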

In 1989, I witnessed a good example at a conference in Edmonton on Prairie climate predictions and the implications for agriculture and forestry. Climate modeler Michael Schlesinger made a presentation comparing climate predictions made by the top five models of the day. He argued they were valid because they all showed warming. This was 100 percent predictable because they were all programmed to show temperature increase with a CO2 increase. When you examined the results on even a continental scale, they disagreed: one showed North America cooling, another warming. The audience, composed of people making planning decisions, needed more certainty. One asked Schlesinger about the accuracy of his warmer-and-drier prediction for Alberta. The answer was 50 percent. The questioner replied that this was useless; his Minister was planning reforestation in the area and needed 95 percent certainty.

There is another way of testing the political reaction to uncertainty. Many US Senators knew the problems with the climate science of the IPCC from the work of Senator James Inhofe. When asked to ratify the Kyoto Protocol in 1997, they took a different approach. They did not want to vote on it directly because that might make them appear less than green. Instead, they looked at the socio-economic costs of implementation by voting on the Byrd/Hagel Resolution. They concluded that the costs far outweighed the benefits and voted 95-0 not to proceed with Kyoto. The Senators were 100 percent certain of the impact, regardless of the science.

Vincent Gray noted an important conclusion in Climate Change 95, which says:

"Nevertheless, the balance of evidence suggests that there is a discernible human influence on global climate."

This comment is a direct quote from the infamous Chapter 8, whose lead author was Benjamin Santer. The scientists responsible for Chapter 8 had agreed on statements including this one:

"While some of the pattern-base discussed here have claimed detection of a significant climate change, no study to date has positively attributed all or part of climate change observed to man-made causes."

In the Report, Santer changed this to,

"The body of statistical evidence in chapter 8, when examined in the context of our physical understanding of the climate system, now points to a discernible human influence on the global climate."

Avery and Singer noted in 2006,

"Santer single-handedly reversed the 'climate science' of the whole IPCC report and with it the global warming political process! The 'discernible human influence' supposedly revealed by the IPCC has been cited thousands of times since in media around the world and has been the 'stopper' in millions of debates among nonscientists."

This was the first egregious example of the corruption and deception used to establish a certainty that did not exist at any phase of the entire IPCC exercise. If you want to play with your models and science in the laboratory, fine, but you must meet the obligations and certainties of science. The IPCC did not. If you then take the findings of that 'science' and present them as public policy, another set of obligations and certainties is required. The IPCC did not meet those either.

It is impossible to correct what the IPCC did, partly because each Report built on the deceptions of earlier Reports. The IPCC must be shut down completely, and all national weather bureaus must be redirected to data collection, including the reconstruction of past climates. That is the only way to establish even a minimal level of certainty of knowledge as the base for understanding mechanisms and, thereby, more accurate forecasting.