- JoNova - http://joannenova.com.au -

IPCC plays hot-spot hidey games in AR5 — denies 28 million weather balloons work properly

Posted By Joanne Nova On April 2, 2013 @ 12:55 pm In Global Warming | Comments Disabled


The classic hot spot prediction (A) compared to 28 million weather balloons (B). Click to enlarge. You won’t see this in the new report.

It was a major PR failure in 2007. The IPCC won’t make the same mistake again. They’ve dumped the hot-spot graphs.

In AR4 they included two graphs that showed how badly their models really do. In the next report they plan to bury the spectacular missing-hot-spot images through “graph-trickery” and selective blindness. Each round of IPCC reports takes the spin-factor up another notch. It’s carefully crafted.

See the draft of AR5: Chapter 9: Evaluation of Climate Models

It’s hot-spot hidey games and PR tricks

In the new extra-tricky AR5 version, the IPCC “quote the critics” and ignore them at the same time. That way they can say they cited McIntyre, McKitrick, Douglass, and Christy: the words are on the page, but that doesn’t mean the information is used in the conclusions. The models have failed, and they bury that undeniable result under the clutter. (You’ll need to read the fine print.) There is no acknowledgement that this “hot spot” issue drives more of the amplification of predicted warming in their models than any other factor (though that is obvious and implicit in Fig 9.44, which you can see below). Which policymaker, exactly, is going to notice that?

The IPCC are an object lesson in how to hide a message in plain sight

In the new report they have dumped their former fingerprint predictions, which looked so definitive and technical but proved to be so wrong. However, they will not join the dots. They won’t admit this is a major point on which their models have failed; instead they flatly deny that the results from 28 million weather balloons are conclusive.

In a sense, in AR5, the IPCC just throws up its hands and says “yes, OK, the models don’t align with the data, but the data might be wrong, and rather than fix those models, we’ll quietly dump that test and the awkward results and pick a different set of inconclusive tests instead.” It’s known as shifting the goalposts. It’s what any rational weasel-grade bureaucrat would do if their job and their junkets depended on it. You can hardly blame them… :-|

The art of tricky-graphs: The All New Hot Spot is turned sideways, extended up, and “smallified”

The graphs up the top have been split into four bands, turned sideways, and extended far higher into the atmosphere. The net visual effect is to minimize the disparity at the point that matters. Only by reading the caption, the text, and reams of information would you figure out that the action occurs in the bulge of the red line in the second graph (that’s the models’ best shot at the tropics). Compare that to the black line, which is what the weather balloons found. I’ve blown it up further below and removed the clutter. The green line is irrelevant (that’s model predictions without CO2 — an argument from ignorance made with unverified models). The results in the stratosphere are not that important. The water vapor changes at the upper edge of the troposphere are what matter (about 200 hPa, or 10 km up).

Official caption: Figure 10.7: Observed and simulated zonal mean temperature trends from 1961 to 2010 for CMIP5 simulations containing both anthropogenic and natural forcings (red), natural forcings only (green) and greenhouse gas forcing only (blue), where the 5 to 95 percentile ranges of the ensembles are shown. Three radiosonde observations are shown (thick black line: HadAT2; thin black line: RAOBCORE 1.5; dark grey band: RICH-obs 1.5 ensemble; light grey: RICH-τ 1.5 ensemble). After Lott et al. (2012).

See the second graph above left, expanded in close-up below.

Close up of the second graph of Fig 10.7 (see caption above).

How do you say “we have no evidence” without saying it — like this:

“In many cases, the lack of long term observations, observations suitable for the evaluation of important processes, or observations in particular regions (e.g., polar areas, the upper troposphere / lower stratosphere (UTLS), and the deep ocean) remains an impediment.”

Blame the equipment. They have fifty years of data and millions of results.

This is the money statement:

In summary, there is high confidence (robust evidence although only medium agreement) that most, though not all, CMIP3 and CMIP5 models overestimate the warming trend in the tropical troposphere during the satellite period 1979–2011. The cause of this bias remains elusive.

What they don’t say is that this point on its own is responsible for about half the warming projected in the models, and hence that after twenty years of trying to reconcile the models and observations it’s past time they turfed the models and trashed the assumption that humidity will cause monster positive feedback. Forget the projections of 6 degrees of hell; the best estimate would be half the current one (or less), and we can all go home.

Is water vapor feedback critical?

Is your skeptical brain wondering if I’ve got that point right about the positive feedback being so large? Remember, it’s the IPCC that says that without feedbacks CO2 will only cause 1.2°C of warming.1,2 It’s the feedbacks that drive all the scary projections above that. Then gaze upon the graph below, Fig 9.44. Spot the largest single feedback, one so big it’s almost as large as “total feedbacks”. That would be “WV”, or water vapor. This is almost the same graph as it was in AR4 (see Fig 8.14, p. 631).2
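The arithmetic behind that point is simple enough to sketch with the textbook feedback-gain formula. The numbers below are my own round, illustrative values for the Planck response and the feedback parameters (they are assumptions for the sketch, not figures quoted from the report):

```python
# Sketch of the standard feedback-gain arithmetic.
# All parameter values here are illustrative round numbers, not IPCC quotes.

DT_NO_FEEDBACK = 1.2   # deg C: warming from doubled CO2 with no feedbacks
PLANCK = 3.2           # W m^-2 K^-1: magnitude of the Planck response

def amplified_warming(feedback):
    """Equilibrium warming given a net feedback parameter in W m^-2 K^-1."""
    gain = feedback / PLANCK
    return DT_NO_FEEDBACK / (1.0 - gain)

water_vapour = 1.8     # W m^-2 K^-1, illustrative water vapor feedback
all_feedbacks = 2.0    # W m^-2 K^-1, illustrative sum of all feedbacks

print(amplified_warming(all_feedbacks))                  # with all feedbacks
print(amplified_warming(all_feedbacks - water_vapour))   # water vapor removed
```

With these made-up but representative magnitudes, dropping the water vapor term cuts the projected warming from roughly 3°C down toward the no-feedback 1.2°C, which is the sense in which that single feedback carries most of the amplification.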

This is central to maintaining the scare.

Figure 9.44: a) Feedback parameters for CMIP3 and CMIP5 models (left and right columns of symbols) for water vapour (WV), clouds (C), albedo (A), lapse rate (LR), combination of water vapour and lapse rate (WV+LR), and sum of all feedbacks (ALL), updated from Soden and Held (2006). CMIP5 feedbacks are derived from CMIP5 simulations for abrupt four-fold increases in CO2 concentrations (4 × CO2).

For the die-hard IPCC interpreters, here is the full “Fifth Assessment Report” section where they discuss the pesky discrepancy that the whole crisis hinges upon.

Upper tropospheric temperature trends
Most climate model simulations show a larger warming in the tropical troposphere than is found in observational datasets (e.g., McKitrick et al., 2010; Santer et al., 2012). There has been an extensive and sometimes controversial debate in the published literature as to whether the difference between models and observations is statistically significant, once observational uncertainties and natural variability are taken into account (e.g., Douglass et al., 2008; Santer et al., 2008; Christy et al., 2010; McKitrick et al., 2010; Bengtsson and Hodges, 2011; Fu et al., 2011; Santer et al., 2012; Thorne et al., 2011). For the thirty-year period 1979 to 2009 (sometimes updated through 2010 or 2011), the various observational datasets find, in the tropical lower troposphere (LT, see Chapter 2 for definition), an average warming trend ranging from 0.07°C to 0.15°C per decade. In the tropical middle troposphere (MT, see Chapter 2 for definition) the average warming trend ranges from 0.02°C to 0.15°C per decade (e.g., Chapter 2, Figure 2.15; McKitrick et al., 2010). Uncertainty in these trend values arises from different methodological choices made by the groups deriving satellite products (Mears et al., 2011) and radiosonde compilations (Thorne et al., 2011), and from fitting a linear trend to a time series containing substantial interannual and decadal variability (Santer et al., 2008; McKitrick et al., 2010). Although there have been substantial methodological debates about the calculation of trends and their uncertainty, a 95% confidence interval of around ±0.1°C per decade has been obtained consistently for both LT and MT (e.g., Chapter 2; McKitrick et al., 2010). Hence, a trend of zero is, with 95% confidence, consistent with some observational trend estimates but not with others.

For the thirty-year period 1979 to 2009 (sometimes updated through 2010 or 2011), the CMIP3 models simulate a tropical warming trend ranging from 0.1°C to somewhat above 0.4°C per decade for both LT and MT (McKitrick et al., 2010), while the CMIP5 models simulate a tropical warming trend ranging from slightly below 0.15°C to somewhat above 0.4°C per decade for both LT and MT (Santer et al., 2012; see also Po-Chedley and Fu, 2012) who, however, considered the period 1979–2005). Both model ensembles show trends that are higher on average than the observational estimates, although both model ensembles overlap the observational ensemble. Because the differences between the various observational estimates are largely systematic and structural (Chapter 2; Mears et al., 2011), the uncertainty in the observed trends cannot be reduced by averaging the observations as if the differences between the datasets were purely random. Likewise, to properly represent internal variability, the full model ensemble spread must be used in a comparison against the observations, as is well known from ensemble weather forecasting (e.g., Raftery et al., 2005). The very high significance levels of model-observation discrepancies in LT and MT trends that were obtained in some studies (e.g., Douglass et al., 2008; McKitrick et al., 2010) thus arose to a substantial degree from using the standard error of the model ensemble mean as a measure of uncertainty, instead of the standard deviation or some other appropriate measure of ensemble spread. Nevertheless, almost all model ensemble members show a warming trend in both LT and MT larger than observational estimates (McKitrick et al., 2010; Po-Chedley and Fu, 2012; Santer et al., 2012).

It is unclear whether the tropospheric model-trend bias is primarily related to internal atmospheric processes or to coupled ocean-atmosphere processes. The CMIP3 models show a 1979–2010 tropical SST trend of 0.19°C per decade in the multi-model mean, much larger than the various observational trend estimates ranging from 0.10°C to 0.14°C per decade (including the 95% confidence interval; Fu et al., 2011). This SST trend bias would cause a trend bias also in TL and TM even if the models’ atmospheric components were perfectly realistic. The influence of SST trend errors on the analysis can be reduced by considering changes in tropospheric static stability, measured either by the difference between MT and LT changes or by the amplification of MT changes against LT changes; another approach is to consider the amplification of tropospheric changes against SST changes. For month-to-month variations there is consistency between observations and CMIP3 models concerning amplification aloft against SST variations (Santer et al., 2005), and between observations and CMIP5 models concerning amplification of TM against TL variations (Po-Chedley and Fu, 2012). The 30-year trend in tropical static stability, however, is larger than in the observations for almost all ensemble members in both CMIP3 (Fu et al., 2011) and CMIP5 (Po-Chedley and Fu, 2012). For two CMIP3 models, ECHAM5/MPI-OM and GFDL-CM2.1, this trend bias in static stability lies outside each model’s internal variability and is hence highly statistically significant. The bias persists even when the models are forced with the observed SST, as was found in the CMIP3 model ECHAM5 (Bengtsson and Hodges, 2011) and the CMIP5 ensemble (Po-Chedley and Fu, 2012).

In summary, there is high confidence (robust evidence although only medium agreement) that most, though not all, CMIP3 and CMIP5 models overestimate the warming trend in the tropical troposphere during the satellite period 1979–2011. The cause of this bias remains elusive.
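For readers wondering where a “±0.1°C per decade” style confidence interval comes from, here is a minimal sketch: an ordinary least-squares trend fitted to a synthetic monthly anomaly series. Both the data and the naive OLS interval are my own illustration; a naive interval like this understates the real uncertainty, which is inflated by autocorrelation and by structural differences between datasets:

```python
import math
import random

# Illustrative only: fit a linear trend (deg C per decade) with a naive
# OLS 95% confidence interval to a synthetic monthly anomaly series.
random.seed(0)
n = 30 * 12                                  # thirty years of monthly data
true_trend = 0.10 / 120.0                    # 0.10 C/decade, in C per month
x = list(range(n))
y = [true_trend * t + random.gauss(0.0, 0.25) for t in x]

# Ordinary least-squares slope and its standard error
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((t - xbar) ** 2 for t in x)
slope = sum((t - xbar) * (v - ybar) for t, v in zip(x, y)) / sxx
resid = [v - ybar - slope * (t - xbar) for t, v in zip(x, y)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

trend_per_decade = slope * 120.0             # convert C/month to C/decade
ci95 = 1.96 * se * 120.0                     # naive 95% interval, C/decade
print(f"trend = {trend_per_decade:.2f} +/- {ci95:.2f} C/decade")
```

Run on noisy monthly data like this, the fitted trend comes back near the true 0.1°C per decade with a naive interval of a few hundredths of a degree; accounting properly for interannual and decadal variability widens it toward the ±0.1°C the draft cites.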

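The statistical point in the quoted passage about the standard error of the ensemble mean versus the ensemble spread is worth unpacking. A minimal sketch with made-up trend numbers (my own, purely for illustration) shows why the choice matters: the standard error of the mean shrinks with ensemble size, so it can make a modest model-observation gap look wildly significant:

```python
import math

# Made-up numbers for illustration: an ensemble of model trend estimates
# (deg C per decade) compared against a single observed trend.
model_trends = [0.12, 0.18, 0.22, 0.25, 0.28, 0.31, 0.33, 0.36, 0.40, 0.45]
observed = 0.10

n = len(model_trends)
mean = sum(model_trends) / n
sd = math.sqrt(sum((t - mean) ** 2 for t in model_trends) / (n - 1))
se_mean = sd / math.sqrt(n)          # shrinks as the ensemble grows

# z-score using the standard error of the ensemble mean:
z_se = (mean - observed) / se_mean
# z-score using the ensemble spread itself (the draft's preferred measure):
z_sd = (mean - observed) / sd

print(z_se, z_sd)   # z_se is sqrt(n) times z_sd, so it looks far more significant
```

With these invented numbers the first measure screams "highly significant" while the second is borderline, which is exactly the disagreement between the Douglass-style tests and the draft's rejoinder.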

What’s the excuse?

The answer didn’t pan out the way they expected, and so post hoc, they now say that the radiosondes don’t really work as well as they thought.

The hot spot is apparently too difficult because the observations are too uncertain:

Observational uncertainties for climate variables, uncertainties in forcings such as aerosols, and limits in process understanding continue to hamper attribution of changes in many aspects of the climate system, making it more difficult to discriminate between natural internal variability and externally forced changes. Increased understanding of uncertainties in radiosonde and satellite records makes assessment of causes of observed trends in the upper troposphere less confident than an assessment of overall atmospheric temperature changes.

They have a choice here

The heat is missing from the oceans; the trends are not accelerating in sea levels, ocean heat, or global temperatures; and their 1990 predictions have failed abysmally. The radiosondes show that neither humidity nor temperature is rising in the upper troposphere as predicted. The models are “right” except for rain, drought, storms, humidity and everything else. The cloud feedback mistakes are 19 times larger than the effect of CO2. (See Man Made Global Warming Disproved.)

Some of these data points make sense if the IPCC models wildly exaggerate the way humidity warms the world. The modelers could change one factor in their models and quite a few of their predictions would fit much closer to the observations.

But instead they deny the importance of 28 million weather balloons, call the missing heat a “travesty”, and pretend that if you slap enough caveats on the 1990 report and ignore the actual direct quotes they made at the time, then possibly, just maybe, their models are doing OK, and through sheer bad luck 3000 ocean buoys, millions of weather balloons, and 30 years of satellite records are all biased in ways that hide the true genius of the climate models.




1. Hansen J., A. Lacis, D. Rind, G. Russell, P. Stone, I. Fung, R. Ruedy and J. Lerner, (1984) Climate sensitivity: Analysis of feedback mechanisms. In Climate Processes and Climate Sensitivity, AGU Geophysical Monograph 29, Maurice Ewing Vol. 5. J.E. Hansen and T. Takahashi, Eds. American Geophysical Union, pp. 130-163 [Abstract]

2. IPCC, Assessment Report 4, 2007, Working Group 1, The Physical Science Basis, Chapter 8, p. 631 [PDF]; see also Fig 8.14.

3. Thomas H. Vonder Haar, Janice L. Bytheway and John M. Forsythe (2012). Weather and climate analyses using improved global water vapor observations. Geophysical Research Letters, Vol. 39, L15802, 6 pp. doi:10.1029/2012GL052094.

Hot Spot Graph Sources:

(A) Predicted changes 1958–1999: Synthesis and Assessment Report 1.1, 2006, CCSP, Chapter 1, p. 25, based on Santer et al. 2000;
(B) Hadley radiosonde record: Synthesis and Assessment Report 1.1, 2006, CCSP, Chapter 5, p. 116, Fig. 5.7E, recorded change per decade, Hadley Centre weather balloons 1979–1999, from Thorne et al., 2005.



URL to article: http://joannenova.com.au/2013/04/ipcc-plays-hot-spot-hidey-games-in-ar5-denies-28-million-weather-balloons-work-properly/
