
#DataGate! First ever audit of global temperature data finds freezing tropical islands, boiling towns, boats on land


What were they thinking?

The fate of the planet is at stake, but the key temperature data set used by climate models contains more than 70 different sorts of problems.  Trillions of dollars have been spent because of predictions based on this data – yet even the most baby-basic quality control checks have not been done.

Thanks to Dr John McLean, we see how the IPCC's demands for cash rest on freak data, empty fields, Fahrenheit temperatures recorded as Celsius, mistakes in longitude and latitude, brutal adjustments, and even spelling errors.

Why. Why. Why wasn’t this done years ago?

So much for that facade. How can people who care about the climate be so sloppy and amateur with the data?

HadCRUT4 Global Temperature, 1850 – 2018.

Absurdity everywhere in Hadley Met Centre data

There are cases of tropical islands recording a monthly average of zero degrees — this is the mean of the daily highs and lows for the month. A spot in Romania spent one whole month averaging minus 45 degrees. One site in Colombia recorded three months of over 80 degrees C. That is so incredibly hot that even the minimums there were probably hotter than the hottest day on Earth. In some cases boats on dry land seemingly recorded ocean temperatures from as far as 100 km inland. The only explanation that makes sense is that Fahrenheit temperatures were mistaken for Celsius, and for the next seventy years no one at the CRU noticed.
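To give a sense of how basic the missing checks are, here is a minimal sketch in Python of the kind of range test that would have flagged the examples above. The column names, example rows and thresholds are purely illustrative, not the actual CRU file format.

```python
# Minimal sketch of "baby-basic" quality control on monthly mean temperatures.
# Column names, example rows and thresholds are illustrative, not CRU's format.
import pandas as pd

monthly_means = pd.DataFrame({
    "station":  ["Colombia site", "Tropical island", "Romanian town", "Normal site"],
    "lat":      [7.0, -10.0, 46.0, 52.0],
    "t_mean_c": [81.5, 0.0, -45.0, 9.3],   # monthly means as stored, in "Celsius"
})

def flag_absurd_monthly_means(df):
    out = df.copy()
    # No monthly MEAN anywhere on Earth plausibly sits outside roughly -75..+45 C.
    out["out_of_range"] = (out["t_mean_c"] < -75) | (out["t_mean_c"] > 45)
    # Tropical stations (|lat| < 23.5) should never average near freezing.
    out["frozen_tropics"] = (out["lat"].abs() < 23.5) & (out["t_mean_c"] < 5)
    # A value above 45 "C" that converts back to a plausible Celsius figure
    # is consistent with a Fahrenheit reading filed as Celsius.
    back_converted = (out["t_mean_c"] - 32) * 5 / 9
    out["maybe_fahrenheit"] = (out["t_mean_c"] > 45) & back_converted.between(-40, 45)
    # (The -45 C Romanian month would need a comparison against that station's
    #  own history, which is hardly more sophisticated.)
    return out

flags = flag_absurd_monthly_means(monthly_means)
print(flags[flags[["out_of_range", "frozen_tropics", "maybe_fahrenheit"]].any(axis=1)])
```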

Dr McLean audited the HadCRUT4 global data from 1850 onwards for his PhD thesis, and then continued the work afterwards until it was complete:

“I was aghast to find that nothing was done to remove absurd values… the whole approach to the dataset’s creation is careless and amateur, about the standard of a first-year university student.”
– John McLean

His supervisor was Peter Ridd, famously sacked for saying that “the science was not being checked, tested or replicated” and for suggesting we might not be able to trust our institutions.

Data is incredibly, brazenly, sparse


For two years the entire Southern Hemisphere temperature was estimated from one sole land-based site in Indonesia and some ship data. We didn’t get 50% global coverage until 1906. We didn’t consistently get 50% Southern Hemisphere coverage until about 1950.

McLean’s findings show there is almost no quality control on this crucial data. The Hadley Met Centre team have not even analyzed this data with a tool as serious as a spell checker. Countries include "Venezuala", "Hawaai", and the "Republic of K" (also known as South Korea). One country is "Unknown", while other countries are not even countries – like "Alaska".
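The same point holds for station metadata. Here is a sketch of the kind of sanity check that would catch a misspelled country or an impossible coordinate (the field names and the reference list of countries are assumptions for illustration, not the CRU's actual station schema):

```python
# Illustrative station-metadata checks; the field names and the reference list
# of country names are assumptions, not the actual CRU station file schema.
VALID_COUNTRIES = {"VENEZUELA", "UNITED STATES", "SOUTH KOREA", "AUSTRALIA"}  # ...and so on

def check_station_metadata(stations):
    problems = []
    for s in stations:
        if s["country"].upper() not in VALID_COUNTRIES:
            problems.append((s["name"], "unrecognised country: " + repr(s["country"])))
        if not (-90.0 <= s["lat"] <= 90.0 and -180.0 <= s["lon"] <= 180.0):
            problems.append((s["name"], "impossible coordinates: %s, %s" % (s["lat"], s["lon"])))
    return problems

stations = [
    {"name": "Example A", "country": "Venezuala", "lat": 8.6,   "lon": -71.2},
    {"name": "Example B", "country": "Hawaai",    "lat": 21.3,  "lon": -157.9},
    {"name": "Example C", "country": "Australia", "lat": -37.8, "lon": 144.9},
]
for name, issue in check_station_metadata(stations):
    print(name, "->", issue)
```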

The real fault of the modern-day institutes is not so much the lack of historic data as the way they "sell" the trends and records as if they were highly certain and meaningful.

HadCRUT4 Southern Hemisphere Temperatures, 1850 – 2018.

There are systematic and far-reaching problems


The HadCRUT4 dataset is a joint production of the UK Met Office’s Hadley Centre and the Climatic Research Unit of the University of East Anglia.

The CRU data covers 10,295 stations, but 2,693 of them – more than a quarter – don’t meet the criteria for inclusion described in Jones et al 2012, which is considered to be the best description of what should and shouldn’t be included.

It is impossible to know exactly which sites are included in the final temperature analysis, and whether a site’s records have been adjusted. (If only we could do our tax returns like this?)

The sub-parts of the dataset contradict each other. The land set and the sea set should combine to give the global set, but they don’t always match. Which one is right?
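A crude way to look for that contradiction, assuming you have the published monthly land, sea and combined anomaly series on a common index, is to rebuild an approximate global figure from a fixed land/ocean split and flag months where the published blend strays from it. HadCRUT4 actually blends land and sea per grid cell, so this is only a smell test, and the 29/71 split and tolerance below are assumptions:

```python
# Rough consistency check between land, sea and combined anomaly series.
# HadCRUT4 blends per 5x5-degree cell, so a fixed 29/71 land/ocean split is only
# an approximation; the tolerance is likewise an arbitrary illustrative choice.
import pandas as pd

LAND_FRACTION = 0.29   # approximate share of the Earth's surface that is land

def flag_inconsistent_months(land, sea, combined, tol=0.15):
    """land, sea, combined: monthly anomaly Series sharing the same index."""
    approx = LAND_FRACTION * land + (1 - LAND_FRACTION) * sea
    diff = (combined - approx).abs()
    return diff[diff > tol]   # months where the published blend drifts from the rebuild

idx = pd.period_range("1850-01", periods=3, freq="M")
land = pd.Series([0.10, -0.20, 0.05], index=idx)
sea = pd.Series([0.05, -0.10, 0.02], index=idx)
combined = pd.Series([0.06, -0.50, 0.03], index=idx)   # second month is suspect
print(flag_inconsistent_months(land, sea, combined))
```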

“It seems like neither organization properly checked the land or sea temperature data before using it in the HadCRUT4 dataset. If it had been checked then the CRU might have queried the more obvious errors in data supplied by different countries.  The Hadley Centre might also have found some of the inconsistencies in the sea surface temperature data, along with errors that it created itself when it copied data from the hand-written logs of some Royal Navy ships.” 

— John McLean

Cooling the past one hundred years later?

In probably the worst systematic error, the past is rewritten in an attempt to correct for site moves. While some corrections are necessary, these adjustments are brutally sweeping. Thermometers do need to move, but corrections don’t have to treat old sites as if they were always surrounded by concrete and bricks.

Original sites are usually placed in good open locations. As a site "ages", buildings and roads appear nearby, and sometimes air conditioners, all artificially warming the readings. So a replacement thermometer is set up in an open location nearby. Usually each separate national meteorology centre compares both sites for a while and works out the temperature difference between them. Then it adjusts the readings from the old location down to match the new one. The problem is that the algorithms also slice right back through the decades, cooling all the older original readings – even readings that were probably taken when the site was just a paddock. In this way the historic past is rewritten to be colder than it really was, making recent warming look faster than it really was. Thousands of men and women trudged through snow, rain and mud to take temperatures that a computer "corrected" a century later.

We’ve seen the effect of site moves in Australia in Canberra, Bourke, Melbourne and Sydney. After being hammered in the Australian press (thanks to Graham Lloyd), the BOM finally named a "site move" as the major reason that a cooling trend had been adjusted into a warming one. In Australia, adjustments to the data increase the trend by as much as 40%.

In theory, a thermometer in a paddock in 1860 should be comparable to a thermometer in a paddock in 1980. But the experts deem that the older one must be reading too high because someone may have built a concrete tarmac next to it forty or eighty years later. This systematic error, all by itself, creates a warming trend from nothing, step-change by step-change.

Worse, the adjustments are cumulative. The oldest data may be reduced with every step correction for site moves. Ken Stewart found that some adjustments to old historic data in Australia wipe as much as 2C off the earliest temperatures. We’ve only had a "theoretical" 0.9C of warming in the past century.
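To see how step corrections ratchet backwards, here is a toy sketch with invented numbers (not any bureau's actual homogenisation algorithm): each time a site move is "corrected", the offset is subtracted from everything before the move, so the earliest readings end up carrying every later offset at once.

```python
# Toy illustration of cumulative site-move adjustments; the offsets and series
# are invented, and real homogenisation algorithms are more elaborate than this.
def apply_site_move_adjustments(years, temps, moves):
    """moves: list of (move_year, offset_c); each offset is subtracted from all
    readings BEFORE that move, so older data accumulates every later correction."""
    adjusted = list(temps)
    for move_year, offset in moves:
        for i, year in enumerate(years):
            if year < move_year:
                adjusted[i] -= offset
    return adjusted

years = list(range(1900, 2001, 10))
temps = [15.0] * len(years)               # a deliberately flat raw record
moves = [(1940, 0.7), (1975, 0.6)]        # two "site move" corrections
adjusted = apply_site_move_adjustments(years, temps, moves)
for y, raw, adj in zip(years, temps, adjusted):
    print(y, raw, round(adj, 1))
# The 1900-1930 readings end up 1.3 C cooler than recorded, and a warming trend
# appears in a record that was flat to begin with.
```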

Each national bureau supplies its "pre-adjusted" data, and the Hadley Centre accepts it. Does it check? Does it care?

No audits, no checks, who cares?

As far as we can tell, this key data has never been audited before. (What kind of audit would leave in these blatant errors?) Company finances get audited regularly, but when global projections and billions of dollars are on the table, climate scientists don’t care whether the data has undergone basic quality-control checks, is consistent, or even makes sense.

Vast areas of non-existent measurements

In May 1861 global coverage, according to the grid-system method that HadCRUT4 uses, was 12%. That means no data was reported from almost 90% of the Earth’s surface. Despite this it’s said to be a "global average". That makes no sense at all. "The global average temperature anomaly is calculated from data that at times covers as little as 12.2% of the Earth’s surface," he says. "Until 1906 global coverage was less than 50%, and coverage didn’t hit 75% until 1956. That’s a lot of the Earth’s surface for which we have no data." – John McLean
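For readers who want to see what "coverage" means here: HadCRUT4 works on a 5° × 5° latitude-longitude grid, and the coverage figure is presumably the fraction of the Earth's surface area lying in cells that report at least one value for the month, with each cell weighted by the cosine of its latitude. A minimal sketch of that area-weighted calculation (the data mask itself is hypothetical input):

```python
# Area-weighted coverage on a 5-degree grid: each cell's area scales with the
# cosine of its central latitude. The has_data mask (36 x 72 booleans) is input.
import numpy as np

def coverage_fraction(has_data):
    """has_data: boolean array of shape (36, 72) - 5-degree latitude x longitude cells."""
    lat_centres = np.arange(-87.5, 90.0, 5.0)            # 36 latitude band centres
    weights = np.cos(np.deg2rad(lat_centres))            # relative area of each band
    cell_weights = np.repeat(weights[:, None], 72, axis=1)
    return float((cell_weights * has_data).sum() / cell_weights.sum())

# e.g. a mask with data in only a small Northern Hemisphere block gives a small fraction
mask = np.zeros((36, 72), dtype=bool)
mask[25:30, 30:45] = True      # arbitrary populated block for illustration
print(round(coverage_fraction(mask), 3))
```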

Real thermometer data is ignored

In 1850 and 1851 the official data for the Southern Hemisphere includes only a single land thermometer in Indonesia and some random boats. (At the time, the ship data covers about 15% of the oceans in the southern half of the globe, and even the word "covers" may mean as little as one measurement in a month for a grid cell, though it is usually more.) Sometimes there is data that could be used, but isn’t. This is partly the choice of the separate national meteorology organisations, which may not send any data to Hadley. But neither do the Hadley staff appear to be bothered that the data is so sparse, or that there might be thermometer measurements that would be better than nothing.

How many heatwaves did they miss? For example, on the 6th of February, 1851, newspaper archives show temperatures in the shade hit 117F in Melbourne (that’s 47C), 115 in Warrnambool, and 114 in Geelong. That was the day of the Black Thursday MegaFire. The Australian BOM argues that these were not standard, officially sited thermometers, but compared with inland boats, frozen Caribbean islands and 80-degree months in Colombia, surely actual data is more useful than estimates from thermometers 5,000 to 10,000 km away? It seems to me that multiple corroborated unofficial thermometers in Melbourne might be more useful than one lone official thermometer in Indonesia.

While the Hadley dataset is not explicitly estimating the temperature in Melbourne in 1850 per se, it is estimating "the Southern Hemisphere" and "the globe", and Melbourne is part of that. By default, there must be some assumptions and guesstimates to fill in what is missing.

How well would the Indonesian thermometer and some ship data correlate with temperatures in Tasmania, Peru, or Botswana? Would it be “more accurate” than an actual thermometer, albeit in the shade but not in a Stevenson screen? You and I might think so, but we’re not “the experts”.

Time the experts answered some hard questions.

UPDATE

See the Hadley team reply to #DataGate a week later. Polite fog excuses.

The full report

The 135-page audit with more than 70 findings is available for $8 from Robert Boyle Publishing. You can help support months of work that should have been done by official agencies years ago.

————————————-

Always hard hitting — James Delingpole’s view:

Climate Bombshell: Global Warming Scare Is Based on ‘Careless and Amateur’ Data, Finds Audit

McLean’s report could scarcely have come at a more embarrassing time for the IPCC. On Monday, it will release its 2018 Summary for Policy Makers claiming that the global warming crisis is more urgent than ever. But what McLean’s audit strongly suggests is that these claims are based on data that simply cannot be trusted.

–read it all.

___________________________

Main points:

Gory details of the worst outliers

Abbreviations

HadCRUT4 is a global temperature dataset, providing gridded temperature anomalies across the world as well as averages for the hemispheres and the globe as a whole. CRUTEM4 and HadSST3 are the land and ocean components.

REFERENCES

Jones, P. D., D. H. Lister, T. J. Osborn, C. Harpham, M. Salmon, and C. P. Morice (2012), Hemispheric and large-scale land-surface air temperature variations: An extensive revision and an update to 2010, J. Geophys. Res., 117, D05127, doi:10.1029/2011JD017139.

Climatic Research Unit: Data

h/t Tom Nelson for #DataGate.
