
Lawyer busts climate science

This paper has become the zeitgeist. I’ve had countless emails, and I know it’s been mentioned on Pielke, then by Solomon, then Watts Up. Every self-respecting skeptic will have looked at it by this weekend, if not already. (Thanks to all the people who’ve emailed in the last three days.)

My thoughts? For a long scientific review, it’s surprisingly well written and cuts to the core, and it’s written in a very unusual style: no one is pushing anything, it’s not polarized or written to entertain, yet at the same time it has compelling clarity. Johnston also exposes the rhetorical flaws in the reasoning and argument styles, which gives it a comprehensive punch.

I’m not used to reading official documents about the climate that are written to actually explain something. It’s 79 pages long and distinctly lacks any cartoons, or even graphs, but surprisingly, astonishingly, it has sentences that are readable. There are no double-barreled vagarisms designed to obscure the meaning while they recite a litany of key phrases, as if the answer is really hidden in there somewhere. This document doesn’t finish off every other point with speculation that it might be worse than we thought. Even though, actually, as far as science goes, official climate science is worse than we thought. Damning in understated tones.

“Global Warming Advocacy Science: A Cross Examination”

Jason Scott Johnston, Professor and Director of the Program on Law, Environment and Economy at the University of Pennsylvania Law School

First up, I’ll note that I was impressed that the problems with feedbacks get the full explicit treatment (page 72). Johnston has done an admirable job. The message is getting out. Otherwise, I’ll jump straight to the conclusions on policy, which I found interesting. Johnston has put some thought into climate science per se. (I’ve added my summary headers in between the quotes.)

III. Conclusion: Questioning the Established Science, and Developing a Suitably Skeptical Rather than Faith-based Climate Policy

As a large number of climate scientists have stressed, such an understanding will come about only if theoretical and model-driven predictions are tested against actual observational evidence.

The most valuable lesson: It’s standard practice for scientists to ignore the results they don’t like.

Among the most surprising and yet standard practices is a tendency in establishment climate science to simply ignore published studies that develop and/or present evidence tending to disconfirm various predictions or assumptions of the establishment view that increases in CO2 explain virtually all recent climate change.

The main problem: with such a variety of datasets, climate science advocates can rebut any study by finding something in another dataset.

Perhaps even more troubling, when establishment climate scientists do respond to studies supporting alternative hypotheses to the CO2 primacy view, they more often than not rely upon completely different observational datasets which they say confirm (or at least don’t disconfirm) climate model predictions. The point is important and worth further elucidation: while there are quite a large number of published papers reporting evidence that seems to disconfirm one or another climate model prediction, there is virtually no instance in which establishment climate scientists have taken such disconfirming evidence as an indication that the climate models may simply be wrong. Rather, in every important case, the establishment response is to question the reliability of the disconfirming evidence and then to find other evidence that is consistent with model predictions. Of course, the same point may be made of climate scientists who present the disconfirming studies: they tend to rely upon different datasets than do establishment climate scientists. From either point of view, there seems to be a real problem for climate science: With many crucial, testable predictions – as for example the model prediction of differential tropical tropospheric versus surface warming – there is no indication that climate scientists are converging toward the use of standard observational datasets that they agree to be valid and reliable.

The debate will never end unless we sort out the datasets

Without such convergence, the predictions of climate models (and climate change theories more generally) cannot be subject to empirical testing, for it will always be possible for one side in any dispute to use one observational dataset and the other side to use some other observational dataset.

So let’s fund the datasets, not the big disconfirmed models

Hence perhaps the central policy implication of the cross-examination conducted above is a very concrete and yet perhaps surprising one: public funding for climate science should be concentrated on the development of better, standardized observational datasets that achieve close to universal acceptance as valid and reliable. We should not be using public money to pay for faster and faster computers so that increasingly fine-grained climate models can be subjected to ever larger numbers of simulations until we have got the data to test whether the predictions of existing models are confirmed (or not disconfirmed) by the evidence.

Trillions of dollars should not be wasted because of rhetorical tricks

As things now stand, the advocates representing the establishment climate science story broadcast (usually with color diagrams) the predictions of climate models as if they were the results of experiments – actual evidence. Alongside these multi-colored multi-century model-simulated time series come stories, anecdotes, and photos – such as the iconic stranded polar bear — dramatically illustrating climate change today. On this rhetorical strategy, the models are to be taken on faith, and the stories and photos as evidence of the models’ truth. Policy carrying potential costs in the trillions of dollars ought not to be based on stories and photos confirming faith in models, but rather on precise and replicable testing of the models’ predictions against solid observational data.

Read the full paper
