What would you say if you knew our high quality temperature record included sites with 100 year long “records” which were based on just 12 years of data and some undisclosed method was used to construct 90% of the graph?
Wow? I mean, Wow?! Why are these sites with such little actual data being included in a series called “high quality”?
Presumably the “adjusted” trends were recreated (in a sense) by homogenizing data from nearby stations, but why not just use the stations with long records in the first place? Out in the vast outback there are long distances between stations, and while a “splice” might overlap for ten years, who knows whether the dramatic PDO oscillations shift weather patterns over their 30-year cycle, making any ten-year overlap unrepresentative of the longer time frame.
BOM compensate for the Urban Heat Island effect by making adjustments that essentially result in almost no change in the trend. They remove the “urban” stations, but UHI affects even small populations, and Andrew Barnham speculates that the largest changes in the UHI effect may occur in these smaller rural locations that are still included.
Andrew Barnham has done an independent assessment of the BOM trends, using a slightly different methodology to Ken Stewart, but like Ken, this raises grave doubts that our high quality records are done in a rigorous manner. We need both the BOM and CSIRO audited, independently.
As usual, Barnham did this pro bono and deserves a big thank you for his dedication and skill.
Analysing Australian Temperature Trends
Guest Post By Andrew Barnham
The article Australian warming trend adjusted up by 40% was published on Jo Nova’s blog on 29th July 2010. Ken’s work really caught my attention because it stands in stark contrast with consensus claims that homogenization is merely a ‘fine tuning’ process that barely impacts the final trend result. For example, on the GISS website, generating anomaly trend graphs using adjusted or unadjusted data yields very little difference. So assuming Ken’s result is accurate, the Australian dataset stands at odds with this. The following is largely a replication of Ken’s work, using slightly different tooling and methodologies. I analyse Bureau of Meteorology (BOM) data, comparing the modern temperature sets the BOM uses to construct the Australian land surface anomaly figures that feed into the climate change narrative. I scrutinize raw data against adjusted data, and I scrutinize the process the BOM uses to compensate for local climate effects. A link to all computer source code used in this analysis can be found at the bottom of this article.
Firstly, credit to the BOM for making the information very easy to access and for structuring the data in a consistent and easy to analyse fashion. Because of this, it only took me a few hours to secure and organize the data which allowed me to focus on actual analysis.
Some months ago the Bureau of Meteorology published this now famous image which is readily accessible on their website. It is a temperature anomaly reconstruction of Australian land surface temperatures that shows that based on a 1960-1990 baseline period, Australia has warmed by 0.9°C over the past century.
The image can be accessed directly from their website here: Australian climate variability & change – Time series graphs
Following is my reconstruction of the same data set using their High Quality (HQ) adjusted data series. Visually the results are a very close fit which gives me confidence that I have replicated their process with sufficient accuracy (the process they use is not documented).
The next logical step is to generate the same image for unadjusted data sets. But it is at this point that problems become evident. The HQ Series the BOM uses is based on the work of two studies, Torok 1995, and Della Marta 2004. These studies define HQ temperature stations which are then used to measure temperature changes on the Australian Continent. Torok states the following:
“… if only stations currently operating and with at least 80 years of data are considered. To increase the number of long-term stations available, previously unused data were digitized and a number of stations were combined to create composite records”.
Ideally we want to use stations that provide long term continuous data. Now consider the following three graphs that graph both adjusted and unadjusted temperature series.
These sites contain hardly any raw data (blue line) at all. In spite of this, BOM include these stations and reconstruct 100 years of temperature data using some undisclosed process. The precise steps BOM used to reconstruct these phantom records are not documented, although the Torok study indicates that it is done by splicing data from nearby stations and then correcting for discontinuities using the homogenization process.
Torok states that this is necessary due to an absence of long-term station records. What is unusual, though, is that there is a lot of station data which appears to satisfy the requirements for HQ data, yet these stations are ignored by the Torok study and are not included in the BOM HQ climate change series.
If we define good stations as stations with at least 80 years of data that are still active (or assumed to be active if there is data for 2009), then in Australia we have 62 stations that qualify (out of a total of 1,144 stations that have at least 10 years of data). Of these 62 stations, 38 are included in the HQ climate group, and 24 are ignored. The reason why so many stations are ignored is not clear; maybe there are undisclosed problems with the data sets. The absence of these stations does not have an undue effect on gridded trend results. Nevertheless, it seems unusual and something of a waste that BOM would disregard stations with long-term data sets, and opt in part to reconstruct temperature series data from shorter records.
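The “good station” test described above is simple enough to sketch in code. This is a minimal illustration only; the station records below are invented, and real BOM data would be loaded from their published files.

```python
# Sketch of the "good station" filter: at least 80 years of data and
# still active (i.e. data exists for 2009). The two example stations
# are invented for illustration.

def is_good_station(years_with_data, latest_year=2009, min_years=80):
    """A station qualifies if it has at least `min_years` of annual
    records and reported data for `latest_year`."""
    return len(years_with_data) >= min_years and latest_year in years_with_data

long_active = set(range(1910, 2010))   # 100 years of data, active in 2009
long_closed = set(range(1900, 1985))   # 85 years of data, closed in 1984

print(is_good_station(long_active))    # True
print(is_good_station(long_closed))    # False: no 2009 data
```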
Returning to the issue of raw data analysis.
For purposes of generating anomaly graphs I used a method known as CAM (Common Anomaly Method), which is popular in climate analysis and appears to be the method used by BOM. CAM has one serious limitation: a station’s record must have sufficient data in a calibration period for the station to be useful. Stations that fail this test must be discarded (or attempts made to ‘fill in’ the missing data first). For generation of graphs I rejected any station with less than 25 years of data in the 31-year 1960 to 1990 calibration period. In the HQ graph there are 99 stations, but for raw data I was forced to filter down to 63 stations, as 33 stations had insufficient data in the calibration period. Illustrating the distribution of stations after gridding:
HQ Series (top) VS Raw Series (lower)
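The CAM calibration test described above can be sketched as follows. This is a simplified illustration with an invented station record; real analysis would read BOM files and handle missing months, but the baseline logic is the same.

```python
# Minimal sketch of the Common Anomaly Method (CAM) filter: a station
# needs at least 25 years of data in the 1960-1990 calibration window,
# otherwise it is discarded. Station data here is synthetic.

BASE_START, BASE_END, MIN_BASE_YEARS = 1960, 1990, 25

def station_anomalies(record):
    """record: dict mapping year -> annual mean temperature (deg C).
    Returns anomalies relative to the station's own baseline mean,
    or None if the station fails the calibration-period test."""
    base = [t for y, t in record.items() if BASE_START <= y <= BASE_END]
    if len(base) < MIN_BASE_YEARS:
        return None                      # insufficient baseline data
    baseline = sum(base) / len(base)
    return {y: t - baseline for y, t in record.items()}

# Invented example: a station with a full 1960-1990 baseline and a
# gentle synthetic warming of 0.01 C/year.
rec = {y: 20.0 + 0.01 * (y - 1960) for y in range(1950, 2010)}
anoms = station_anomalies(rec)
```

A station whose record starts after 1990 would return `None` and be excluded from the grid.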
Finally – the Raw graph
The anomaly difference based on modern data represents a 20% discrepancy
After a cursory look at some of the stations, this comes as no surprise.
An analysis of the original Torok data yields a larger discrepancy of 40%, illustrated briefly here:
A Brief Discussion on UHI
The Urban Heat Island (UHI) effect is a real effect that requires careful handling when dealing with surface temperature records for purposes of global climate change analysis.
The effect is discussed in detail in a CSIRO report, for example (CSIRO is a key Australian government-funded research body that works closely with BOM on climate change issues):
The report makes a number of interesting statements:
“It [UHI] is commonly measured as the difference between urban and rural temperatures at night, when the near-surface effect is strongest.”
“The net effect on mean temperatures is a city-rural [City of Melbourne] difference of 1.0°C in 1950, increasing to about 1.6°C by 2000. For maximum temperature, the average difference is only 0.1°C with a slight upward trend mainly due to warming in the 1990s. The effect of urbanisation appears most evident in minimum temperature.”
So a simple, yet somewhat crude, way to remove UHI from the climate change equation is to look at maximum daytime temperatures only, as opposed to the min/max average. The technique is crude because it is likely that the difference between the annual maximum trend and the minimum trend is attributable to factors other than just UHI, but it gives us at least an approximate upper margin for UHI. It is likely that there are indeed forces at work whereby emissions-based global warming applies itself with different intensities between day and night, just as UHI manifests itself differently between night and day. Following is a graph of the raw data set with daytime temperatures only, presented with the intention of better understanding the UHI signal, on the understanding that doing so may also remove some component of the night-time global warming signal (if any). So the technique is not particularly rigorous, but is presented for purposes of expediency. The issue of UHI is complex and merits its own, more detailed, treatment.
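To make the max-only idea concrete, here is a toy sketch with synthetic series (not BOM data) in which, as the CSIRO quote suggests, the warming shows up mostly in the minimums: the maximum-only trend then sits below the min/max average trend.

```python
# Toy illustration: if UHI acts mostly on minimum temperatures, the
# trend of maximums alone gives a cleaner (lower) bound. All series
# below are synthetic and exactly linear, for illustration only.

def trend_per_century(years, values):
    """Ordinary least-squares slope, scaled to degrees per 100 years."""
    n = len(years)
    my, mv = sum(years) / n, sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return 100.0 * num / den

years = list(range(1910, 2010))
tmax = [0.006 * (y - 1910) for y in years]   # synthetic: 0.6 C/century
tmin = [0.012 * (y - 1910) for y in years]   # synthetic: 1.2 C/century
tavg = [(a + b) / 2 for a, b in zip(tmax, tmin)]

print(round(trend_per_century(years, tmax), 2))  # 0.6
print(round(trend_per_century(years, tavg), 2))  # 0.9
```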
BOM have chosen to deal with UHI by removing entire stations classified as urban: which I have also done so far. The HQ temperature series actually includes 134 stations: 100 are rural and 34 are urban.
But this process is arguably even less rigorous than looking at daytime temperature only. It assumes UHI is a binary effect and this is clearly not the case. UHI is observable for even very modest rural population densities.
Trend anomaly with following combinations (average temperature):
| | HQ | Raw |
|---|---|---|
| Rural + Urban | 0.94 | 0.73 |
Extraordinarily, using urban stations only reduces the observed trend. The raw temperature trend goes up and down in a way that suggests the data may be harbouring a statistical effect known as Simpson’s paradox. Why this comes about merits close examination, which is beyond the scope of this article. But a couple of speculative points.
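To see what Simpson’s paradox can look like in station data, here is a toy sketch (synthetic numbers, not BOM data): each station warms individually, yet pooling the readings yields a cooling trend, because the warm-sited station only reports in the early years and the cool-sited one in later years.

```python
# Toy Simpson's paradox: both stations warm at 0.01 C/year, but they
# cover different eras and sit at different base temperatures, so the
# pooled trend is negative. Stations and values are invented.

def slope(points):
    """OLS slope of (year, temp) pairs."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

warm_site = [(y, 25.0 + 0.01 * (y - 1950)) for y in range(1950, 1971)]
cool_site = [(y, 15.0 + 0.01 * (y - 1980)) for y in range(1980, 2001)]

print(slope(warm_site) > 0)               # True: warms on its own
print(slope(cool_site) > 0)               # True: warms on its own
print(slope(warm_site + cool_site) < 0)   # True: pooled trend cools
```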
Firstly it could be a statistical artifact of there being too few stations and too few grid cells. Illustrating grid cell map for Urban only:
Gridding is statistically fraught when you consider that stations cluster close to population centres. Some grid cells in the Rural HQ/Raw analysis contain only a single station, so a single temperature station in the Australian outback carries more statistical weight than the readings from twenty or so stations in a grid cell on Australia’s south-east coast. This doesn’t mean gridding is bad; it just means that ideally the distribution of stations would be more uniform, and the absence of that ideal raises the risk of statistical artefacts.
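The weighting effect described above is easy to demonstrate. In this sketch (cell names and anomaly values invented), a lone outback station ends up carrying as much weight as twenty clustered coastal stations once cell averages are taken:

```python
# Sketch of grid-cell weighting: average within each cell first, then
# across cells. One outback station then balances twenty coastal ones.
from collections import defaultdict

readings = [("outback_cell", 1.5)] + [("coastal_cell", 0.5)] * 20

cells = defaultdict(list)
for cell, anom in readings:
    cells[cell].append(anom)

cell_means = {c: sum(v) / len(v) for c, v in cells.items()}
gridded = sum(cell_means.values()) / len(cell_means)

# Plain average over all 21 stations, for comparison:
plain = sum(a for _, a in readings) / len(readings)

print(round(gridded, 3))  # 1.0   -> lone station carries half the weight
print(round(plain, 3))    # 0.548 -> each station weighted equally
```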
Another, equally wild, speculative point is that UHI in major population centres has already run its course. UHI observations show that the effect grows logarithmically with population, so a city growing from 3 million to 4 million people will exhibit less additional UHI than a town growing from 100 to 1,000. Stations designated as ‘rural’ may therefore be subject to a stronger UHI anomaly signal.
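The logarithmic point can be checked with simple arithmetic. The scale coefficient below is arbitrary (only the ratio of the two changes matters), so this is an illustration of the shape of the relationship, not a calibrated UHI model:

```python
# If UHI ~ a * log10(population), a small town's growth produces a far
# bigger *change* in UHI than a large city's. `a` is an arbitrary
# illustrative scale factor.
import math

a = 1.0

def uhi(pop):
    return a * math.log10(pop)

town_change = uhi(1_000) - uhi(100)            # 100 -> 1,000 people
city_change = uhi(4_000_000) - uhi(3_000_000)  # 3M -> 4M people

print(round(town_change, 2))  # 1.0
print(round(city_change, 2))  # 0.12
```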
Of course a counter-argument to consider is the point of view that UHI is not a significant effect at all. Yet comparing the trend figures against the CSIRO figures for the City of Melbourne (Australia’s second largest city) indicates that UHI is indeed a significant climate force, and the fact that BOM’s processes do not replicate this in the trend numbers suggests that they may not be adequately dealing with the issue of UHI.
One last chart of numbers: showing breakdown of min and max in trend:
| | HQ (min) | HQ (max) | Raw (min) | Raw (max) |
|---|---|---|---|---|
| Rural + Urban | 1.24 | 0.66 | 0.77 | 0.60 |
It’s a confusing array of numbers that raises more questions than answers.
Just one of many possible questions: why is Urban Max so different from Rural Max?
According to the CSIRO, UHI does not significantly impact max temperatures so why the distinct discrepancy (in both HQ and Raw)? What else could possibly explain this variation?
Finally, a word on the homogenization process and its relevance to the issue of UHI. The homogenization process is concerned with correcting for a number of items that can affect temperature data, such as station moves, changes to the immediate station environment (construction of new nearby buildings), changes of station equipment, etc. No attempt to correct for UHI forms part of the homogenization process; nor can it. Homogenization applies discrete step-wise adjustments, which is likely a poor model for UHI, since UHI is a continuous function of population and the process of urbanization.
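The mismatch between step-wise adjustments and a continuous drift can be shown directly. In this synthetic sketch, a single step correction shifts the level of the series but leaves the underlying drift intact within each segment:

```python
# Why a discrete step cannot remove a continuous UHI-like drift:
# subtracting a level shift leaves the within-segment trend unchanged.
# All numbers are synthetic.

years = list(range(50))
drift = [0.02 * y for y in years]  # continuous warming drift, 0.02/yr

# A single step adjustment of -0.5 applied from year 25 onwards,
# mimicking a discrete homogenization correction:
adjusted = [t - (0.5 if y >= 25 else 0.0) for y, t in zip(years, drift)]

def slope(xs, ys):
    """OLS slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# The step shifts the level, but the drift persists on both sides of it:
print(round(slope(years[:25], adjusted[:25]), 3))  # 0.02
print(round(slope(years[25:], adjusted[25:]), 3))  # 0.02
```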
Looking at the land surface record of a single continent doesn’t necessarily deepen our understanding of the climate system and the extent of human industrial impact on that system in terms of emissions such as CO2. Yet the results of studies such as Torok and Della Marta, and BOM climate change artifacts in general, are used to justify policymaking that potentially has a significant impact on daily Australian life. My results raise a number of issues with the quality of the data and supporting analysis provided by BOM.
Trend temperature in Australia over the past 100 years can be simply represented as follows:
Of the 0.94 trend reported by the BOM, at least 20% and possibly as much as 40% is certainly man-made, but not as a consequence of global warming. UHI is man-made, and statistical artefacts of the homogenization process are, strictly speaking, man-made too: made by BOM employees, to be precise. But neither of these is caused by global emissions. Of the remaining 60%-80%, what precise component can be directly attributed to global human emissions remains uncertain and is beyond the scope of this article.
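The arithmetic behind that paragraph, spelled out (the 20%-40% range is the attribution argued above, not a measured quantity):

```python
# Of the 0.94 C/century HQ trend, 20-40% is attributed here to
# adjustments/UHI rather than global emissions.

hq_trend = 0.94
low, high = 0.20, 0.40

non_emission = (hq_trend * low, hq_trend * high)
remaining = (hq_trend * (1 - high), hq_trend * (1 - low))

print(tuple(round(x, 3) for x in non_emission))  # (0.188, 0.376)
print(tuple(round(x, 3) for x in remaining))     # (0.564, 0.752)
```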
Some closing points:
- The homogenization process is a very blunt and unsubtle analysis tool. When it is wielded by some groups it generates outcomes that are barely discernible (such as GISS). Yet when it is wielded by the BOM, the impact on the final anomaly figures is quite dramatic.
- BOM implicitly claim to compensate for UHI by virtue of removing stations designated as urban. But the methodology they use has no appreciable effect on resulting trends and as such it seems likely that it is an inappropriate methodology.
- BOM disregard a number of stations with long-term data. Instead, to boost the number of stations in their data set, they merge data from multiple nearby stations and attempt to correct for discontinuities in the resulting record via the homogenization process: a highly fraught and subjective approach that exposes them to further criticism.
Finally, it is important to note that although the HQ series is adjusted upwards by as much as 20% as a consequence of BOM homogenization, the raw temperature series still demonstrates an upward trend of 0.72°C per century. This figure, without attempts to compensate for UHI, is consistent with what is reported in global land surface temperature series such as GISS. Precisely what component of this trend represents a signal of CO2 emissions-driven AGW is outside the scope of this analysis.
Download a 12 page PDF for printing or emailing.
Other posts tagged: Australian Temperatures
A link to the computer source codes will be available soon here for anyone who is interested in replicating this work.
From Speedy in comments:
It’s probably time the BOM stopped refering to their data as “High Quality”.
Or perhaps they could refer to it as High Quality*
* Homogenised to conform to IPCC standards. Data sources incomplete or unavailable. Influence of urban heat island (UHI) unknown or not adequately accounted for. Data in recent records will be warmer than they actually are. It’s worse than we thought.