What is striking about Andy Pitman and Lisa Alexander’s response on The Conversation to the articles in The Australian is how intellectually weak it is, and how little content remains once the logical fallacies are removed. It’s argument from authority, circular reasoning, and strawmen. Hail the Gods (and don’t look over there)! They don’t question Jennifer Marohasy’s remarkable figures; they don’t mention them at all, nor use the names “Rutherglen”, “Amberley” or “Bourke” – how revealing.
And these are the points at issue. Long cooling trends at supposedly excellent sites have been homogenized and transformed into warming trends. Rutherglen is the kind of station other stations dream of being: according to the official documents it has stayed in the same place, it isn’t affected by heat from urban growth, and it agrees with its neighbors. Other stations might be adjusted to be more like it. Instead the BOM has a method that detected “unrecorded” site moves at Rutherglen by studying unnamed stations somewhere in the region. Awkwardly, someone who used to work there says the thermometer didn’t move. Hmm. Would a thinking person ask for more details and an explanation? Not if you are director of an ARC Centre of Excellence for Climate System Science, or a Chief Investigator of the same institution. This is apparently how Excellent Climate Centres work. The thermometers and trends for Rutherglen were wrong all along, off by nearly 2 degrees for 70 years. Luckily the experts finally arrived to fix them, but using thermometers that might have been hundreds of kilometers away?
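For readers wondering what “detecting unrecorded site moves from neighbouring stations” even means: the general idea is to difference a candidate station against its neighbours and flag a year where that difference series shifts. The toy sketch below is the generic textbook version of that idea, not the BOM’s actual algorithm, and every number in it is invented.

```python
from statistics import mean, median

def detect_breakpoint(candidate, neighbours, threshold=0.5):
    """Flag the index where (candidate - neighbour median) shifts most,
    if the shift in the difference series exceeds `threshold` degrees."""
    # Difference series: candidate minus the median of its neighbours each year.
    diffs = [c - median(vals) for c, vals in zip(candidate, zip(*neighbours))]
    best_year, best_shift = None, 0.0
    for k in range(2, len(diffs) - 1):  # need a few points on each side
        shift = mean(diffs[k:]) - mean(diffs[:k])
        if abs(shift) > max(abs(best_shift), threshold):
            best_year, best_shift = k, shift
    return best_year, best_shift

# Toy data: neighbours are flat, the candidate drops 1 degree half-way through.
neighbours = [[20.0] * 10, [20.2] * 10, [19.8] * 10]
candidate = [20.0] * 5 + [19.0] * 5
print(detect_breakpoint(candidate, neighbours))  # → (5, -1.0)
```

Note the obvious catch, which is the whole point of the dispute: a method like this can only say the candidate diverged from its neighbours, not *why*, which is where station history documents and people who worked at the site are supposed to come in.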
It seems a bit magical. But Pitman and Alexander offer no scientific or physical reason why long cooling trends in raw data should be dramatically changed at stations that are supposed to be our best. Nor do they even try to explain the BOM’s lack of curiosity about decades of older data at Bourke. Perhaps there are reasons years of historic data can’t be analyzed and combined to look at long-term trends, but if there are, Pitman and Alexander don’t know them and apparently don’t care too much either.
Instead we hear that homogenization is used all over the world, but in lots of different forms, and sometimes not at all.
Data homogenisation techniques are used to varying degrees by many national weather agencies and climate researchers around the world. Although the World Meteorological Organization has guidelines for data homogenisation, the methods used vary from country to country, and in some cases no data homogenisation is applied.
Is this supposed to make us feel confident that the Australian BOM uses the “right” version, and no one should even ask questions about the details? Skeptical scientists (as opposed to Directors of Excellence) have been asking for these details of individual site adjustments for years. The BOM could have provided the answers and silenced the critics long ago. Instead skeptics were delighted when Graham Lloyd at The Australian finally managed to elicit three paragraphs of details on three sites. Pop the Champagne, eh?
To answer the critics Pitman draws on the elite gods of ClimateScience™ to help him. Unlike other fields of science, where one expert can explain things to another scientist, in BOM-science they use, wait for it, “complex methods” that apparently can’t be discussed. Besides, Blair Trewin has written a “comprehensive” article. It must be good. Likewise, anointed Climate Elves called Itsi’s are allowed to talk about which homogenisation method might be better than others, but scientists outside the fairy circle are not.
Perhaps it’s understandable that Pitman and Alexander didn’t bother explaining any of the details — they are writing for The Conversation, after all. It’s not like it’s an educated high level audience…
Who is cherry picking?
The handy thing about climate parameters is that there are plenty of trend-cherries to pick: there are maxima, means, minima, and extremes, and they are grouped by region, state, or national means. Hot days can be defined lots of ways: over 40C, 37C, 35C, or the hottest 10% of days relative to the monthly mean.
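The cherry-picking point is easy to see with a few lines of code: the same record gives you whatever “hot day” count you like, depending on the threshold you choose. The daily maxima below are made up purely for illustration.

```python
# Hypothetical daily maxima for part of a summer month (degrees C) — invented data.
daily_max = [33.1, 36.4, 41.0, 38.2, 34.9, 40.3, 37.1, 35.6, 39.8, 36.0]

def hot_days(temps, threshold):
    """Count days whose maximum reaches or exceeds `threshold` degrees C."""
    return sum(t >= threshold for t in temps)

for threshold in (40, 37, 35):
    print(f"days over {threshold}C: {hot_days(daily_max, threshold)}")
# → days over 40C: 2
# → days over 37C: 5
# → days over 35C: 8
```

Same station, same days, three different headline numbers; multiply that by regions, states, maxima, minima and percentile definitions, and the supply of pickable cherries is large.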
Luckily for Pitman and Alexander, there was at least one example among these permutations and combinations that showed cooler trends after homogenization. That little category is “Trends in the frequency of hot days over Australia – unadjusted data using all temperature stations that have at least 40 years of record available for Australia from the GHCN-Daily data set.”
Skeptical scientists like Ken Stewart instead looked at the big basic parameters, like the average minima across 100-plus stations country-wide, and found the adjustments warmed the trends by 50%.
This climate has a circular trend?
Circular Reasoning Prize for the day goes to this line, bolded:
If the Bureau didn’t do it (homogenisation), then we and our fellow climatologists wouldn’t use its data because it would be misleading. What we need are data from which spurious warming or cooling trends have been removed, so that we can see the actual trends.
What “actual” trends? Since the Models of Excellence are not so excellent at predicting the climate, this translates to saying that broken models don’t work with unhomogenized data. (Could be a clue there, you think?)
Apparently the editors of The Conversation find argument from authority and circular reasoning appealing. (Forgive me, what well-trained academic wouldn’t?)*
The mismatch in the PR
There is another layer to this. Even if the adjustments can be physically justified with documents, the grand uncertainty in Australian datasets is never conveyed in BOM press releases, which announce records that may rely on adjustments of up to 2 degrees to temperatures recorded 70 years ago. Even if the adjustments are justified, isn’t it important for the public to know how complicated it all is, and how fickle most of these records are, when they may disappear with the next incarnation of “high quality” data?
Strawmen to mislead?
Andy Pitman is keen to suggest skeptical scientists make false accusations:
But skeptics accuse the BOM of not documenting individual site-specific explanations. In return, Andy Pitman keeps that trend going by responding to questions about Amberley, Rutherglen and Bourke without mentioning, er… Amberley, Rutherglen or Bourke.
Peer review makes scientific arguments “Valid”?
It doesn’t matter whether skeptics are logical and have empirical evidence; what matters to Andy Pitman and Lisa Alexander is whether those ideas have been passed by two anonymous, unpaid reviewers.
Valid critiques of data homogenisation techniques are most welcome. But as in all areas of science, from medicine to astronomy, there is only one place that criticisms can legitimately be made. Anyone who thinks they have found fault with the Bureau’s methods should document them thoroughly and reproducibly in the peer-reviewed scientific literature. This allows others to test, evaluate, find errors or produce new methods. This process has been the basis of all scientific advances in the past couple of centuries and has led to profoundly important advances in knowledge.
All scientific advances in the last two centuries? Tell that to Newton, Einstein, Watson and Crick: they were not peer reviewed. On the other hand, these 120 scientific advances were peer reviewed, that is, until someone realized they were computer-generated gibberish and un-published them.
Is that more the type of Excellence our ARC strives for?
*Apologies to the exceptions who survive in academia.