
When top scientists take 2 years to publish, it’s time to give up on old “Peer” review

Ladies and Gentlemen, this is the front-line trench of modern science. If climate science is so important, and there is no time to waste, why does the system try so hard to discourage dissent? (Because they don’t want to find the truth, only the “correct” answer.)

This paper by Lindzen and Choi was submitted to and rejected by GRL in 2009, then rejected twice more by PNAS, in part because it was required to meet impossible standards: in the end it was supposed to include “the kitchen sink” yet fit into a sandwich bag (see below). The paper could have been out for discussion in 2009, and while it has improved upon revision, was it worth the two-year wait? Those gains could have been made in two months (or two weeks) online.

Even the reviewers understand how significant these results would be if they are right. One admits the new paper shows the models don’t match the observations.

Science needs free and open criticism, and competing theories. If Lindzen’s analysis is revolutionary, but potentially wrong, is it so bad to publish those results? He is one of the most eminent researchers in the field — and surely the crowd of “experts” would quickly find the flaws and point out the omissions, and both sides could move forward.

It’s time for scientists to step outside the system and stop paying homage to the dogma of the old rules. It slows down research, because the all-too-human gatekeepers can keep a topic away from public view for month after month, while people pay money for schemes that are not necessary and government reviewers ignore results that are inconvenient.

In this day of electronic publication, where space is no constraint and results can be discussed widely, transparently and easily, why bow to a system that imposes strict limits on words?

As long as we pay respect to anachronistic rituals and establishment procedures, the prevailing system can keep a stranglehold on the ideas the community is allowed to discuss. Formal peer review has proven to be as corruptible as any human process, as the Climategate emails show. There is a point where we must ask: why bother?

It’s time real scientists had an impartial, rigorous publication to send their material to. Where is the 21st-century version of “Science” or “Nature”? There is no rescuing the old publications.

This post is long, but it covers both the problems with peer review and the latest news on the point in climate science that is more critical than any other — the modeled feedbacks.

The Paper: Lindzen, R., Choi, Y.S. (2011) On the Observational Determination of Climate Sensitivity and Its Implications. Asian Pacific Journal of Atmospheric Sciences, in press. [link] — Joanne Nova

From Master Resource via the Science and Public Policy Institute blog
[Editor’s note: The following material was supplied to Master Resource by Dr. Richard Lindzen as an example of how research that counters climate-change alarm receives special treatment in the scientific publication process as compared with results that reinforce the consensus view. In this case, Lindzen’s submission to the Proceedings of the National Academy of Sciences was subjected to unusual procedures and eventually rejected (in a rare move), only to be accepted for publication in the Asian Pacific Journal of Atmospheric Sciences.

I, too, have firsthand knowledge about receiving special treatment. Ross McKitrick has documented similar experiences, as have John Christy and David Douglass and Roy Spencer, and I am sure others. The unfortunate side-effect of this differential treatment is that a self-generating consensus slows the forward progress of scientific knowledge—a situation well described by Thomas Kuhn in his book The Structure of Scientific Revolutions. –Chip Knappenberger]

***

“If one reads [our new] paper, one sees that it is hardly likely to represent the last word on the matter. One is working with data that is far from what one might wish for. Moreover, the complexity of the situation tends to defeat simple analyses. Nonetheless, certain things are clear: models are at great variance with observations, the simple regressions between outgoing radiation and surface temperature will severely misrepresent climate sensitivity, and the observations suggest negative rather than positive feedbacks.”

— Richard Lindzen

***

These are emails between Lindzen and the editor of the Proceedings of the National Academy of Sciences (PNAS). This is the same journal that published the ad hominem blacklist of “climate scientists”, which I mocked as being from the National Academy of Sorcery (see image below). Dr Lindzen points out that it is normal procedure for members of the National Academy to put in up to four papers a year, that they arrange their own pair of reviewers, and that 98% of those submissions are accepted. (Attachment1.pdf is simply a statement of PNAS procedure.)

[Image: PNAS satire cover]

Richard Lindzen wrote:

The rejection of the present paper required some extraordinary violations of accepted practice. We feel that making such procedures public will help clarify the peculiar road blocks that have been created in order to prevent adequate discussion of fundamental issues. It is hoped, moreover, that the material presented here can offer the interested public some insight into what is involved in the somewhat mysterious though widely (if inappropriately) respected process of peer review.

This situation is compounded, in the present example, by the absurdly lax standards applied to papers supportive of climate alarm. In the present example, there existed an earlier paper (Lindzen and Choi, 2009) [we covered that paper here –CK] that had been subjected to extensive criticism. The fact that no opportunity was provided to us to respond to such criticism was, itself, unusual and disturbing. The paper we had submitted to the PNAS was essentially our response which included the use of additional data and the improvement and correction of our methodology.

Several weeks after Lindzen submitted this paper, the editor of PNAS responded with a two-line email carrying two attachments:

Attach1.pdf (the standard policy of PNAS)
Attach2.pdf (the letter about Lindzen’s submission).

Lindzen describes the letter:

This attachment begins with what we regard as a libelous description of our choice of reviewers. Will Happer, though a physicist, was in charge of research at DOE including pioneering climate research. Moreover, he has, in fact, published professionally on atmospheric turbulence. He is also a member of the NAS. M.-D. Chou and I have not collaborated in over 5 years, and Chou had absolutely nothing to do with the present manuscript. There then followed a list of other reviewers that we felt were all inappropriate.

Lindzen suggested appropriate reviewers. Randy Schekman decided to ask one of the experts Lindzen suggested. But the next email mentioned two other names, apparently suggested by one of the reviewers on Lindzen’s list. Lindzen wrote: “As best as I could determine, none of my suggested reviewers would have made such a recommendation. I can only speculate that Schekman considered Ramanathan.”

Lindzen wrote that he did object to those two reviewers:

“Both are outspoken public advocates of alarm, and Wielicki has gone so far as to retract results once they were shown to contradict alarm.” Lindzen suggested Dr. Patrick Minnis, a colleague of Wielicki.

The reviews finally went ahead, and the response was a polite rejection (Attach3.pdf).

The old-media double bind

Lindzen could respond to the reviewers and adjust his paper to clarify things, but then it would be too long to fit the space constraints of PNAS, “especially since the reviewers made clear that important material should not be relegated to ‘supplementary material’.”

Further, Lindzen notes that most papers that are re-submitted are re-rejected, leading to further delays.

Our final letter to Schekman (Letter_to_Schekman.pdf) is attached. As already noted, we chose to respond in detail to each review, and these responses are attached (Response.pdf). The revised paper (as well as the original version submitted to the PNAS: Lindzen-Choi-PNASSubmission.pdf) is also attached (Lindzen-Choi-APJAS.pdf).

The final version was accepted (following review) by the Asian Pacific Journal of Atmospheric Sciences.

Furthermore, Lindzen writes that they felt it necessary to reply to the reviewers even though they did not re-submit the paper, “simply because we found comments on the rejection of our paper on the internet even before receiving your official decision”.

Commenters on Master Resource point out that the rejection included a review comment: “If the analysis done by the authors prove to be correct, major scientific and even political implications can be foreseen.” That is, in and of itself, perhaps more of a reason to publish something from an eminent team rather than not.

Lindzen points out the scientific flaws in the reviewers’ arguments:

As to your quote from one of the board members, the answer is straightforward. We clarify in Section 4 (Methodology) of the revised paper the use of a simple model to generate time series with specified feedbacks to test various analysis methods. The use of simple regression over the entire record (as is the procedure in Trenberth et al, 2010 and Dessler, 2010) is shown to severely understate negative feedbacks and exaggerate positive feedbacks – and even to produce significant positive feedback for the case where no feedbacks were actually present (viz Figure 7 and Table 1 of the revised paper). Our method, while hardly ideal, fairly accurately replicates negative feedbacks and only modestly understates positive feedbacks. Equally important, the simple regression approach leads to extraordinarily small values of the correlation (r²) on the order of 0.02. Such values would, in normal scientific fields, lead to the immediate rejection of the results of Trenberth et al and Dessler as insignificant. We show that the appropriate use of objectively determined segments that adhere to the normal requirement that segments be short compared to equilibration times while long compared to the time scales associated with feedback processes, greatly increases the signal to noise ratio, eliminates biases due to equilibration, and greatly increases r² – despite reducing the degrees of freedom (viz Figure 9 of the revised manuscript). I hope that you will agree that holding up such unjustified and insignificant analyses as those by Trenberth et al and Dessler as standards for comparison is disturbing, to say the least.
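
To make the point about regressions concrete, here is a minimal toy sketch in Python (my illustration, not Lindzen and Choi’s actual method, data or code; every parameter value is invented). It integrates a one-line energy-balance model in which temperature is nudged both by non-radiative forcing and by non-feedback radiative “noise” (random cloud-like fluctuations), then regresses the measured flux anomaly against temperature over the whole record. With the noise switched off, the regression recovers the true feedback parameter; with it switched on, the same whole-record regression understates it — which is the kind of bias Lindzen describes, and which the segment-based analysis is designed to avoid.

```python
# Toy demonstration (invented parameters) of why whole-record regression of
# outgoing flux on temperature can understate a negative feedback when
# non-feedback radiative noise also drives the temperature.
import numpy as np

rng = np.random.default_rng(0)

n, dt, C = 3000, 1.0, 20.0        # record length, time step, heat capacity
lam_true = 6.0                    # true feedback parameter: flux response = lam * T

def red_noise(n, phi=0.9):
    """Simple AR(1) red noise."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

S = red_noise(n)                  # non-radiative forcing (e.g. ocean dynamics)
N = red_noise(n)                  # non-feedback radiative noise (e.g. clouds)

def run(radiative_noise):
    """Integrate C dT/dt = S + noise - lam*T; return temperature and measured flux."""
    T = np.zeros(n)
    for i in range(1, n):
        T[i] = T[i - 1] + dt / C * (S[i - 1] + radiative_noise[i - 1] - lam_true * T[i - 1])
    R = lam_true * T - radiative_noise    # measured outgoing flux anomaly
    return T, R

for label, noise in [("no radiative noise  ", np.zeros(n)),
                     ("with radiative noise", N)]:
    T, R = run(noise)
    slope = np.polyfit(T, R, 1)[0]
    r2 = np.corrcoef(T, R)[0, 1] ** 2
    print(f"{label}: whole-record slope = {slope:5.2f} (true {lam_true}), r^2 = {r2:.2f}")
```

The bias arises because the radiative noise both lowers the measured outgoing flux and warms the surface, so part of the temperature variability is correlated with the noise rather than with the feedback; any fix, such as the segment selection in the paper, has to separate the two.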

Here’s the proof that if someone does find a result that “busts” the alarmist science, it cannot be published, because it’s too different from the accepted paradigm.

Reviewer #2 claimed there were many unknowns (mostly in climate science and climate models, but also in Lindzen’s paper). Since reviewer #2 also felt the paper would be “revolutionary” if it were right, precisely because it was so different from the models, he concluded that it ought not be published for wider discussion (figure that):

The poor state of cloud modeling in GCMs has been amply demonstrated elsewhere and the effect of this on climate sensitivity is well documented and acknowledged. The more significant result here is a claim to have demonstrated an extremely strongly negative, fast process climate feedback in the Tropics. This would be revolutionary, if it bears the test.

While the stated result is dramatic, and a remarkable departure from what analysis of data and theory has so far shown, I am very concerned that further analysis will show that the result is an artifact of the data or analysis procedure. The result comes out of a multi-step statistical process. We don’t really know what kind of phenomena are driving the SST and radiation budget changes, and what fraction of the total variance these changes express, since the data are heavily conditioned prior to analysis. We don’t know the direction of causality – whether dynamically or stochastically driven cloud changes are forcing SST, or whether the clouds are responding to SST change. Analysis of the procedure suggests the former is true, which would make the use of the correlations to infer sensitivity demonstrably wrong, and could also explain why such a large sensitivity of OLR to SST is obtained when these methods are applied.

The inferred sensitivity of longwave emission to SST is enormous, significantly greater than that of a black body at the emission temperature of the tropics. Given that no plausible model or data analysis has ever produced anything close to this, one is inclined to think that the result comes from the methodology and not from physics.

[Lindzen’s reply] The number of assertions by the reviewer would require another paper to respond to. His or her use of undefined terms like ‘enormous’ and ‘revolutionary’ are relatively meaningless. Since when is a negative feedback that reduces the response by 40% considered enormous, but a positive feedback that is purported to increase the response by 300% is considered plausible? However, it should be clear from the revised paper that there is no ambiguity in our choice of segments. Moreover, our methodology is tested rigorously by a simple model (see Figures 7 and 8). We are confident that all our reported results are reproducible by anyone who wishes to do so.
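
As a back-of-envelope aside (my arithmetic, using the standard textbook gain expression, not anything from the paper itself), the asymmetry Lindzen is pointing at can be put in numbers. With a feedback factor $f$, the response is the no-feedback response scaled by $1/(1-f)$:

\[
\Delta T = \frac{\Delta T_0}{1 - f}
\]

A response reduced by 40% means $1/(1-f) = 0.6$, i.e. $f \approx -0.67$; a response increased by 300% (taking that as a quadrupling of the no-feedback response) means $1/(1-f) = 4$, i.e. $f = 0.75$. The two feedback factors are of broadly comparable magnitude; they differ mainly in sign.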

Reviewer #2 doesn’t want those results out there, even if they might be correct, unless there is also a physical explanation… If someone discovered a pharmaceutical was having an adverse effect, would we prevent publication until they had an explanation of exactly how it was hurting people? –Jo

Without a physical explanation for where these strong negative feedbacks are coming from, and without an acknowledgment that the results are highly uncertain and possibly not applicable at all, I would not publish this paper.

Reviewer #4 says:

1) If the paper were properly revised, it would meet the top 10% category.
2) The climate feedback parameter is of general interest.
3) I answered no, because the exact same data have been used by others to get an opposing answer and I do not see any discussion or evidence as to why one is correct and the other is not.

[Lindzen’s reply] The reasons for the opposite answer with the same data, but with different methods, are clearly stated in the revised manuscript.

So the team of researchers on the establishment side can get published using this data, but not the smaller team who point out its flaws. And that the dissenters came to different conclusions is not enough: Lindzen needs to explain why he got the opposite result to Trenberth, or else the public ought to be shielded from knowing that there are dissenting views. Reviewer #4 admits (!) that the paper shows the models are not matching observations, but struggles with Lindzen reaching a different conclusion from Trenberth.

Trying to understand the feedback of the Earth-atmosphere system to radiative forcings from observations has been going on for a long time and remains difficult. This paper continues in that vein and, as far as I am concerned, shows that observations and model calculations are different.

[Lindzen’s reply] This was one of our major aims: namely to show that when data and models are analyzed in the same way, they lead to profoundly different results, and that these differences relate directly to the question of climate sensitivity.

Trenberth et al. (2010) performed a very similar analysis and got the opposite result. Why are the two analyses of the same data so different? That is the big question here.
While the specific comments bring up some issues related to that question, it is clear that this paper provides no insight. Why can the two papers arrive at such divergent answers? I would love to see that question resolved satisfactorily. Both cannot be right. Perhaps, both are wrong.

But to go beyond Trenberth et al. and LC09, this paper has to address that question and argue why Trenberth is wrong and the current analysis is correct. Otherwise, we are left with two completely opposing analyses of a common dataset and no discussion as to why one is correct and the other is not.

[Reply] Perhaps, our new Figure 7 (the test results with the same generated data in the simple system) summarizes why one is usably correct and the other is generally not.

Lindzen’s entire reply to the reviewers is here.

***

Roy Spencer commented on feedbacks at Master Resource (comment #13, 06.09.11 at 7:59 pm):

Positive feedback for climate is not the same as for engineering…in the usual sense of the word, the climate system is stable, with net negative feedback.

But the MAIN climate stabilizing effect is NOT included in climate “feedback”: the increase in IR cooling to space as temperature rises (the Stefan-Boltzmann effect). It’s just semantics, and leads to much confusion.

For example, positive cloud feedback would reduce the rate of radiative loss to space with temperature below the Stefan-Boltzmann value…but it would still be a loss of energy with warming, and so negative feedback in the traditional sense.
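
For context on the size of that Stefan-Boltzmann effect, here is the standard back-of-envelope number (my arithmetic, not part of Spencer’s comment): differentiating the blackbody law $F = \sigma T^4$ at the Earth’s effective emission temperature of roughly 255 K gives the basic no-feedback restoring response,

\[
\frac{dF}{dT} = 4\sigma T^3 \approx 4 \times \left(5.67\times10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}\right) \times (255\,\mathrm{K})^3 \approx 3.8\ \mathrm{W\,m^{-2}}\ \text{per K}.
\]

Climate “feedbacks” are defined as departures from this baseline, which is why a system can show “positive feedback” in the climate sense and still, as Spencer says, radiate more to space as it warms.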

***

Judith Curry comments on peer review on her blog:

“In the end, it is far more important that controversial papers be published than buried in the publication process. Far better for a flawed paper to be published than for a potentially game changing paper to be buried. LC’s work on this topic needs to be pursued, challenged, and understood.”

Curry says:

“First, I have been harshly critical of “pal review,” and the PNAS papers contributed by NAS members is the worst form of pal review.”

And this:

“Looks like potentially important papers by skeptics get “special treatment”, whereas unimportant and often dubious papers by consensus scientists slide right through. This treatment feeds into the narratives of McKitrick, Spencer, Christy, Douglass and Michaels about unfair treatment of skeptics by the journal editors.  The establishment would often respond to such criticisms by saying that these are marginal papers by marginal scientists, and that more reputable and recognized scientists such as Lindzen have no trouble getting their papers published.  Well, this PNAS episode certainly refutes that argument.”

h/t Bob Fergusson (SPPI) and to Steve.

In a previous post I asked: Can Peer Review be fixed?

* The PNAS blacklist of scientists was one of cohenite’s ten worst papers.
