- JoNova - https://joannenova.com.au -

Scandal: BoM thermometer records adjusted “by month” — mysterious square wave pattern discovered

There is some major messing with data going on.

What would you say if you knew that the official Perth thermometer was accurate at recording minimums for most of the time in October in the eighties, but 0.7°C too warm all of December, and 1.2°C too cool in January? Bizarrely, that same thermometer was back to being too warm in February! Try to imagine what situation could affect that thermometer, and require post hoc corrections of this "monthly" nature. Then imagine what could make that same pattern happen year after year. All those weather reports we listened to in Perth in 1984 were wrong (apparently). And this bizarre calendar of corrections is turning up all over Australia.

Bob Fernley-Jones has looked closely at all the adjustments done to achieve the wonderful homogenized ACORN data, as compared to the theoretically “raw” records listed in Climate Data Online (CDO) on the BOM website. He can’t know what the BOM did (since they won’t tell anyone), but he knows the outcome of their homogenization. He was shocked when he noticed a strange square-wave pattern repeating year after year; he was astonished that there were corrections calendar month by calendar month, up and down, switching wildly back and forth.

The graph below shows how the adjustments changed the original record in Perth from December 1983 to November 1984 (summer-autumn-winter-spring). This is ACORN temperatures minus the original CDO. A negative number means the adjusted "new" result is lower (cooler) than the original; a positive number shows how much warmer the adjusted result is. If we believe the adjustments, for the whole month of December 1983, each morning, the Perth thermometer was reading the minima too high by about 0.7°C. Then from January 1 to January 31 the thermometers switched to reading too low by a massive 1.2°C. That's an eye-watering swing of almost 2 degrees. Come February 1st, and those thermometers switched again, to being too warm. It's lucky, don't you think, that the modern BOM is so knowledgeable they can compensate for the bizarre misbehaviour and moods of thermometers? It's almost like astrology for temperature sensors. The thing is, if thermometers are this fickle, how can the BOM honestly tell the Australian public that they know the nation is warming, and to a tenth of a degree? The "equipment" is so bad, how do we know anything for sure?

...

This strange pattern of corrections appears year after year in Perth in the 1980s

Chris Gillham took Bob's method and graphed Perth corrections for all years from 1910-2014. Averaged over the whole period, on January mornings the thermometers apparently read far too cool, but the same thermometers in February read too warm. The flip from December to January is a whopping 1.6°C difference between the corrections "needed" on the same thermometer on Dec 31st compared to New Year's Day. That must have been some party, year after year.

Disconcertingly, all those weather reports I heard in Perth as a child were wrong. All those times the weather bureau said it was a 15°C night in Perth in the eighties, they really meant 16°C (if it were January) but 14°C (if it were December or March). Night after night, the bureau was getting it wrong. A degree here, a degree there, a thousand days of mistakes in every city. Don’t the Australian people need to know this? Isn’t there something just a little false about announcing “the hottest” month when the headline depends on adjustments that rewrite the past?

Thermometers are a 400-year-old technology. If thermometers were so bad in 1984, aren't they still unreliable? How do we know that the current records will not be corrected and erased in 2036?

The message here is that the data adjustments, which the BOM won’t explain, are massive. They can pretend it doesn’t affect the overall national trend by more than a fraction of a degree, but it rewrites the history of Australian weather and it suggests that any national trend is wildly uncertain — the instruments are far too fickle. Some of the trends, and many of the records and headlines, are a product of the adjustments more than the weather. Why won’t the BOM be honest about the scale of the fiddling? If climate change matters, why won’t the BOM answer questions?

This is wholesale rewriting of Australian history, and it’s a scandal. Read all the details and see all the graphs below.

— Jo

——————————————————————————————————–

GUEST POST by Bob Fernley-Jones

Corrupted Australian Surface Temperature Records. (Part 2)

Illogical Algorithms   3/July/2015

Introduction:

This is a summary of part of a study involving some 50 Mb of data that was prompted by various controversies over the Australian Bureau of Meteorology's (BoM) 'homogenization' of temperature records. The Bureau has made 'corrections' for nominal changes in site conditions, and critics have argued that these have exaggerated the reported warming trend, for example by excluding hotter data from before 1910, the starting point of their homogenisation.

However, this is not about the methodology that the BoM has used in homogenization.  Instead, it is a test for reasonableness of the resultant DATA after the changes they made to their currently available "raw data".  By raw is meant the data recently retained under the BoM source, 'Climate Data Online' (CDO).

Still further controversy surrounds data earlier than the "raw data" used here.  However, it has been established alongside this summary that, for all 24 locations researched so far that have long records, the homogenised data are undoubtedly based on the CDO "raw data".  That said, many stations have short records, and unfortunately much of the shorter-term data has been made common between the CDO and the homogenized ACORN-SAT (ACORN) files.

Nevertheless, any discovery of substantive corruption in that data is enough to say that the homogenization is unacceptable, regardless of what their methodology was or the “rawness” of the data used.

It is not possible to suggest what the BoM methodology should result in, because their processes have not been released to the public in sufficient detail.  However, it ought to be possible to validate whether the data meet the required standards of reasonableness without knowing HOW they got there.

In this part of the study, an area of startling interest was spotted in the graph for Perth.

Method and notes below are largely the same as for Part I.

NOTES on the presentation:

The following charts visually illustrate anomalies in the BoM temperature records, by comparing downloads from two of their several data portals, namely; CDO and the homogenized ACORN files.

CDO daily data were subtracted from the ACORN version via digital processes and plotted in EXCEL 2010 spreadsheet software for 24 cities and remote rural sites, out of which six important sites are used here.  The anomalies vary greatly in their magnitude, shape and displacement.  (By displacement is meant the distance up or down of their centroids relative to the zero horizontal axis.)
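The subtraction itself is straightforward to reproduce. Below is a minimal sketch in Python (rather than Excel), using made-up illustrative values standing in for the real CDO and ACORN downloads: the two daily records are aligned by date, and the anomaly is simply ACORN minus CDO, as defined in this study.

```python
import pandas as pd

# Tiny synthetic stand-ins for the two BoM downloads (the real files are the
# full daily CDO and ACORN series for a station; these values are illustrative).
cdo = pd.DataFrame({
    "date": pd.to_datetime(["1983-12-01", "1983-12-02", "1984-01-01"]),
    "tmin": [17.0, 16.5, 18.0],
})
acorn = pd.DataFrame({
    "date": pd.to_datetime(["1983-12-01", "1983-12-02", "1984-01-01"]),
    "tmin": [16.3, 15.8, 19.2],
})

# Align the two series on date; days missing from either file drop out rather
# than being treated as zero-anomaly days.
merged = cdo.merge(acorn, on="date", suffixes=("_cdo", "_acorn"))

# The anomaly as used throughout this essay: ACORN minus CDO.
# Negative = the adjusted record is cooler than the original; positive = warmer.
merged["anomaly"] = merged["tmin_acorn"] - merged["tmin_cdo"]
```

With these illustrative numbers, the December days come out at about -0.7 and the January day at about +1.2, matching the sign convention described above.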

These findings identify major problems with the BoM’s data processing methodology or skills (at least).

NOTES on reading the charts:

  • The graphs are unusual in format.  Please do not skip these notes.  The red plot lines are daily maxima and the blue are daily minima, on a highly compressed scale spread over up to ~38,000 days.  Some blue data may be smothered by the red, which is plotted second.  In all cases an annual cycle is clearly evident because of seasonality.
  • Those values that go up or down (in °C) to the boundaries of the chart are the result of no data in either ACORN or CDO and thus are corrupted (not anomalies in the sense used here).  They are only of interest in indicating the completeness of the record.  Note that the horizontal zero axis origin is not at the base of the chart but towards the centre, because of both positive and negative temperature anomalies.
  • In all cases, running back from 2014, ACORN is the same as CDO, but for unexplained, greatly varying periods.  Thus there are no anomalies (differences) in those periods, and only a red line is seen at the zero axis line.
  • Negative temperature anomalies (ACORN – CDO) that are biased towards 1910 mean that the homogenization has resulted in an increased warming trend. (Vice versa for positive anomalies)
  • Summer in Australia is from 1/December to end February
  • BTW, the regimented cycling seen in the following plots is a partial validation of the anomaly methodology, which was used here and in Part I: A mess of temperature adjustments in Australian capital cities.
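The note above about negative anomalies biased towards 1910 can be illustrated with a toy example (entirely hypothetical numbers, in Python): a record that is perfectly flat in the raw data acquires a warming trend once only its early years are adjusted downward.

```python
import numpy as np

# A "raw" record with no trend at all: flat 15 °C over 1910-2014.
years = np.arange(1910, 2015)
raw = np.full(years.size, 15.0)

# Apply a negative anomaly biased toward 1910: cool only the early decades.
# The -0.5 °C figure is purely illustrative.
adjusted = raw.copy()
adjusted[years < 1960] -= 0.5

# Fit linear trends (°C per year) to both versions.
raw_slope = np.polyfit(years, raw, 1)[0]
adj_slope = np.polyfit(years, adjusted, 1)[0]
# raw_slope is zero; adj_slope is positive, i.e. a warming trend that exists
# only because of the adjustment, not the weather.
```

This is the mechanism the bullet describes: the data themselves need not warm at all for the adjusted series to show warming.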

Fig 1)  Perth Anomalies:  

The capital of Western Australia is a star performer for inexplicable anomalies in the ACORN “corrections” as discussed in PART 1, but here is where some strange algorithms were first noticed and subsequently found in abundance at other sites.

There is an area of interest highlighted which in expanded view reveals monthly step-changes in the daily data and several concerns.  Careful study shows varying monthly step-changes throughout the Perth record and elsewhere.

 

...


Fig 2)  Perth Underlying Algorithms: 

The many similarly repeated cycles suggest that under the daily data noise there is a fixed underlying algorithm for each step-change.  By averaging the noise for each month, it follows that the smoothed average for a typical year will be a close approximation of that underlying algorithm, such as this:
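That averaging step can be sketched as follows (in Python, with synthetic data): a fixed monthly step pattern is buried under daily noise and repeated over several years; grouping the daily anomalies by calendar month across all years averages the noise away and the underlying steps re-emerge. The step values here are illustrative only, not the actual Perth figures.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic daily anomaly series over four years: an assumed underlying
# monthly step pattern plus daily noise.
dates = pd.date_range("1981-12-01", "1985-11-30", freq="D")
step_by_month = {12: -0.7, 1: 1.2, 2: -0.4}   # illustrative step values
pattern = np.array([step_by_month.get(d.month, 0.0) for d in dates])
daily_anomaly = pattern + rng.normal(0.0, 0.3, len(dates))

# Average each calendar month across all years: the noise cancels and the
# result closely approximates the underlying monthly algorithm.
monthly_mean = pd.Series(daily_anomaly, index=dates).groupby(dates.month).mean()
```

With roughly 120 samples per calendar month, the noise in each monthly average shrinks to a few hundredths of a degree, which is why the repeated cycles stand out so clearly once smoothed.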

 

...


Chris Gillham has also checked this out independently for the minima using different software. His results are in Fig 2b averaged over a longer period:

Fig 3)  Perth, is this Cherry-picking? 

No, because while it's most obvious in Fig 3 (a), three other four-year periods also reveal monthly steps, though under much daily data noise:

 

 

Fig 4)  Other State/Territory Capitals:

Each of these four examples lies in the same three-year period from 1/December/1960.  All are affected by daily data noise; Adelaide is the most obvious example of monthly (or bimonthly) steps, but careful examination shows them to be present at all four sites.  The seasonal distribution effects have differing phasing, and Hobart is very flat despite its range of extremes being somewhat similar to Melbourne's.

 

Fig 5)  Other Surprises: 

Out of the 24 sites so far studied, three have bare underlying algorithms (without any daily data) for the early years after 1910, and here are two:

Additional Comments:

a) In choosing six capital cities with long records (which excludes Brisbane and Canberra), there was an expectation that these sites would have the best resources, and probably the most robust records, out of the 112 ACORN stations.

b) This essay concentrates on the findings of strange algorithms in the anomaly data, but there are earlier problematic findings of many strange anomalies in our capital-city temperature records, including unaccountable step-changes and the like, that are simply not credible.

c) The monthly algorithms lie within the ACORN process, rather than in the raw data or some natural process, by virtue of their being different within every major step-change; that is, they are created in ACORN.  (In one step-change, for Hobart in Tasmania, the cycles are constant for 88 years!)

Conclusions:

These monthly cycle algorithms do not pass the test for reasonableness because of their great variety with each major step-change.  Some have seasonal distributions which are implausible.  It is currently not possible to analyse why this is so, but whatever the cause, they have fatally low credibility.

 

——————–

References:

ACORN Station Catalogue. (Including history of sites involved…. Sometimes two entirely different locations)

Sortable list of ACORN-SAT stations with linked data

Climate Data Online (CDO starting page)

Acknowledgements:

Thank you especially to Phil, Chris Gillham, Lance Pidgeon and Geoffrey Sherrington

Disclosures:

I’m a retired mechanical engineer with no past or present funding for my subject research from anyone. (Or interests other than seeking proper scientific methodology).

Compiled by Bob Fernley-Jones    (Mechanical engineer retired)
