
The models are wrong (but only by 400%)

McKitrick, McIntyre, and Herman 2010: It’s been a long time coming. A humdinger of a paper.

International Journal Of (Popular) Climatology

It’s a big step forward in the search for the hot-spot. (If the hot spot were a missing person, McKitrick et al have sighted a corpse.)

In 2006 the CCSP quietly admitted with graphs (in distant parts of various reports) that the models were predicting a hot spot that the radiosondes didn’t find (Karl et al 2006).

Graph of the missing hot spot: the models predicted a hot spot (left); the radiosondes couldn’t find anything like it.

Obviously this was a bit of a problem for the Scare Campaign. Much of the amplifying feedback created in the models also creates the hot-spot, so without any evidence that the hot-spot is occurring, there goes the disaster (and the urgent need for funding and junkets).

Douglass et al officially pointed out the glaring deficiency in 2007 by comparing the models’ tropospheric predictions with the radiosonde observations. They specifically used only models that had got the surface temperatures right.

Santer et al replied in 2008 by discovering a lot of uncertainties and stretching the error bars. Since the broad error bars overlapped, he could announce that the hot spot wasn’t really missing (even though he didn’t really find it either). He wrote this up in words effectively saying that the inconsistency in temperatures was not so inconsistent. (If the models can’t predict a very specific range, and the radiosondes aren’t accurate to a very specific range, then they both agree!) Many in the Big Scare Campaign got overly excited and declared that the hot spot was “found”.
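To see how stretching the error bars turns a mismatch into “agreement”, here is a minimal sketch of a difference-of-trends test (Python, with made-up numbers; this is not Santer et al’s actual method or data): inflate the standard errors enough and the test can no longer reject consistency.

```python
import math

def trends_consistent(trend_a, se_a, trend_b, se_b, z_crit=1.96):
    """Two-sided z-test of H0: the two trends are equal (~95% level).
    Returns True when the difference sits inside the combined
    uncertainty, i.e. "consistent" in this weak sense."""
    z = (trend_a - trend_b) / math.sqrt(se_a ** 2 + se_b ** 2)
    return abs(z) < z_crit

# Hypothetical trends in C/decade: model vs radiosonde.
print(trends_consistent(0.26, 0.03, 0.08, 0.03))  # tight error bars -> False (inconsistent)
print(trends_consistent(0.26, 0.12, 0.08, 0.12))  # stretched error bars -> True ("consistent")
```

The point: “consistent” is only as informative as the error bars are tight.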

Note that Santer et al did not limit themselves to the same model runs that Douglass et al used. While Douglass et al insisted on using only models that at least got the surface trend right, Santer did not. The McKitrick paper used the same archive of model simulations as the Santer paper.

McIntyre and McKitrick pointed out that these key Santer results, which used data only up to 1999, were overturned when data up to 2009 were included. Somehow, despite all the excitement over Santer et al 2008, the IJOC decided that updating and contradicting it was “not interesting”, and it took months to reach that banal conclusion. The editors also wouldn’t reveal exactly what the anonymous unpaid reviewers said, and instead suggested a range of time-consuming changes and conditions that would have tied the paper up even longer. (The full story is on Climate Audit.)

So Ross McKitrick, Stephen McIntyre, and Chad Herman took their critical work to another journal (say “cheers” for competition), and the IJOC has missed out on what looks likely to be a much-cited paper. But they did successfully delay it for 18 months, past the crucial Copenhagen conference, allowing Santer et al to be cited elsewhere and at length. Perhaps the IJOC got what it wanted, but it seems that what it wanted was not a full debate and discussion of climate science. This is the sad joke of relying on “peer review”.

A big step forward in this paper is the use of econometric statistical analysis. As Wegman found, the top guns in climate science relied heavily on statistics, but didn’t rely heavily (or at all) on expert statisticians. McKitrick et al brought in some cutting-edge tools from economics, and took the not-too-complicated step of including the most recent data.
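For a feel of what “econometric tools” means here, below is a minimal sketch (synthetic data, not the authors’ code) of trend estimation with HAC (Newey–West) standard errors, which stay valid under the serial correlation typical of monthly temperature series. The paper itself goes further, using panel regressions and a multivariate trend-equivalence test, but this is the basic flavour.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n_months = 360                 # a 30-year monthly series
t = np.arange(n_months)

# AR(1) noise around a weak linear trend, mimicking serially
# correlated monthly temperature anomalies (all values made up).
noise = np.zeros(n_months)
for i in range(1, n_months):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(scale=0.1)
y = 0.0001 * t + noise

X = sm.add_constant(t)
fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
trend = fit.params[1] * 120    # per-month slope -> degrees per decade
ci = 1.96 * fit.bse[1] * 120
print(f"trend = {trend:.3f} +/- {ci:.3f} C/decade")
```

Naive OLS standard errors on autocorrelated series are far too small, which makes trends look more certain than they are; corrections like this are routine in econometrics.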

The end result is that where Santer et al found the error bars could overlap, McKitrick et al found that the models overestimated the temperature trends by 200% and 400% in the lower and mid troposphere respectively.
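To be clear about the arithmetic, “overestimated by 200% and 400%” is used here in the loose sense of “two times” and “four times” the observed trend (matching the paper’s wording, quoted below). A trivial worked example with hypothetical numbers, not the paper’s Table 1 values:

```python
# Hypothetical per-decade trends, purely to illustrate the ratios;
# the real values are in Table 1 of McKitrick et al. (2010).
model_lt, obs_lt = 0.26, 0.13   # lower troposphere (C/decade)
model_mt, obs_mt = 0.24, 0.06   # mid troposphere (C/decade)

print(model_lt / obs_lt)        # ~2: models run about twice observations
print(model_mt / obs_mt)        # ~4: models run about four times observations
```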

The observations don’t match the predictions

David Stockwell sums up the importance of this new paper:

This represents a basic validation test of climate models over a 30 year period, a validation test which SHOULD be fundamental to any belief in the models, and their usefulness for projections of global warming in the future.

The results are shown in their figure, and the paper states:

… the differences between models and observations now exceed the 99% critical value. As shown in Table 1 and Section 3.3, the model trends are about twice as large as observations in the LT layer, and about four times as large in the MT layer.

But you can rest assured. The models, in important ways that were once claimed to be proof of “… a discernible human influence on global climate”, are now shown to be FUBAR. Wouldn’t it have been better if they had just done the validation tests and rejected the models before trying to rule the world with them?

As a final note: given the significance of this paper, it’s worth noting just how keen Santer et al were to assist McIntyre and McKitrick in the interests of furthering science. There is nothing more important than understanding our climate, right?

Thanks to David Stockwell for finding some good quotes from McIntyre and McKitrick:

We requested this data from S08 lead author Santer, who categorically refused to provide it (see http://www.climateaudit.org/?p=4314.) Instead of supplying what would be at most 1 MB or so of monthly data collated by specialists as part of their research work, Santer directed us to the terabytes of archived PCMDI data and challenged us to reproduce their series from scratch. Apart from the pointless and potentially large time cost imposed by this refusal, the task of aggregating PCMDI data with which we are unfamiliar would create the risk of introducing irrelevant collation errors or mismatched averaging steps, leading to superfluous controversy should our results not replicate theirs.

Following this refusal by lead author Santer, we filed a Freedom of Information (FOI) Request to NOAA, applying to coauthors Karl, Free, Solomon and Lanzante. In response, all four denied having copies of any of the model time series used in Santer et al. (2008) and denied having copies of any email correspondence concerning these time series with any other coauthors of Santer et al. (2008). Two other coauthors stated by email that they did not have copies of the series. An FOI request to the U.S. Department of Energy is under consideration.

Santer declared that McIntyre’s FOIA requests were just a fishing expedition, and not real science.

Ross McKitrick has set himself up with a new blog documenting his great work.

REFERENCES

McKitrick, R., McIntyre, S., and Herman, C. (2010) Panel and multivariate methods for tests of trend equivalence in climate data series. Atmospheric Science Letters, 11(4): 270–277. DOI: 10.1002/asl.290. Data/code archive. [Discussion on JoNova] [PDF]

McKitrick, R., McIntyre, S., and Herman, C. (2011) Corrigendum to “Panel and multivariate methods for tests of trend equivalence in climate data series” (Atmospheric Science Letters, 11(4): 270–277). DOI: 10.1002/asl.360. [Abstract] [See McKitrick’s page on model testing]
