This is part of the big PR game of publishing “papers.”
In the climate models, the critical hot spot is supposed to occur because (specific) humidity rises in the upper troposphere, about 10 km above the tropics. The weather balloons clearly show that temperatures are not rising as predicted, so it was not altogether surprising when Garth Paltridge analyzed weather balloon results for humidity and found that humidity was not rising as predicted either.
Indeed, he found specific humidity was falling, the opposite of what all the major climate models predicted, which posed yet another problem for the theory that a carbon-caused disaster is coming. He had a great deal of trouble getting published in the first place, but once he finally did get published and skeptics started quoting “Paltridge 2009”, Team AGW clearly needed an answer. “Dessler 2010” is transparently supposed to be that answer.
To start by putting things into perspective, let’s consider just how “spuriously” small, patchy and insubstantial the radiosonde measurements have been. According to NOAA, the Integrated Global Radiosonde Archive contains more than 28 million soundings from roughly 1,250 stations.
…
Or how about the data from just one month?
[Figure: ARSA – location and number of radiosonde reports selected for the month of November 2009]
The ARSA site estimates there are around 40,000 radiosondes released each month, while the NOAA site above suggests it has been closer to 50,000 per month. And it has been something roughly like that every month since 1958.
The radiosonde results are uncertain, but they suggest specific humidity is falling (and the error bars all fall in the negative range). So how do you dismiss that wall of data going back for half a century?
Satellites record humidity too, and they suggest specific humidity hasn’t changed much, or has been slightly positive. So if you collect studies of satellite data and stick those results on a graph, lo and behold, the only study that exclusively uses radiosonde data suddenly looks like an “outlier”.
There are problems with radiosonde data — especially humidity measurements. But satellite channels have equally large (if not larger) uncertainties. They can’t pick out humidity specifically, and they can’t resolve, say, 10 km from 11 km. They cover thick slabs of the atmosphere (Channel 12 gets results from the region 8–12 km up). In the end, the only thing we know for sure is that it’s hard to measure humidity 12 km off the ground…
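As a rough back-of-envelope illustration of that vertical smearing (my sketch, not anything from the papers): a channel that senses a thick slab through a broad weighting function can barely tell extra moisture at 10 km from extra moisture at 11 km. The Gaussian shape and the 2 km width below are illustrative guesses, not the real Channel 12 weighting function.

```python
# Toy model: a satellite channel "sees" a weighted average over a thick slab.
# The Gaussian weighting (centred at 10 km, ~2 km wide) is an illustrative
# stand-in, not the actual Channel 12 weighting function.
import math

def weight(z_km, centre=10.0, width=2.0):
    return math.exp(-((z_km - centre) / width) ** 2)

def channel_signal(profile):
    zs = [z / 10 for z in range(201)]        # 0-20 km in 0.1 km steps
    ws = [weight(z) for z in zs]
    return sum(w * profile(z) for w, z in zip(ws, zs)) / sum(ws)

bump_at_10 = lambda z: 1.0 + (0.5 if 9.5 < z < 10.5 else 0.0)
bump_at_11 = lambda z: 1.0 + (0.5 if 10.5 < z < 11.5 else 0.0)

print(channel_signal(bump_at_10))   # ~1.14
print(channel_signal(bump_at_11))   # ~1.11 -- nearly the same reading
```

The two perturbed profiles return nearly the same channel signal, so an instrument like this cannot say whether the extra moisture sat at 10 km or at 11 km.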
The Dessler 2010 paper in Journal of Geophysical Research (JGR) describes Paltridge’s results as “spurious”. JGR did ask Paltridge to comment, which he did, but Dessler did not resolve Paltridge’s criticisms (see some of them below), and JGR published anyway.
JGR let some decidedly unscientific things slip into that Dessler paper. One of the reasons provided is nothing more than a form of argument from ignorance: “there’s no theory that explains why the short term might be different to the long term”. Why would any serious scientist admit that they lack the creativity or knowledge to come up with some reasons, and worse, why would they think we’d find that ignorance convincing? (Though it does appear to have impressed John Cook of the not-so-skeptical SkepticalScience.)
It’s not that difficult to think of reasons why humidity might rise in the short run, while circulation patterns or other slower compensatory effects shift so that the long-run pattern is different. Indeed, they didn’t even have to look further than the Paltridge paper they were supposedly trying to rebut (see Garth’s writing below). In any case, even if someone couldn’t think of a mechanism in a complex, poorly understood system like our climate, that’s not “a reason” worth mentioning in a scientific paper.
Andy Dessler is so convinced the models are right that he flatly declared it back in January on Pielke’s blog, even going so far as to announce that water vapor feedback was strong, positive and “unequivocal”. Ponder what “unequivocal” means when millions of weather balloons travelled up through the air measuring humidity and repeatedly found the exact opposite of what he says. Only a religious fanatic would call that “unequivocal”.
Instead, in the real world, most of the data and observations suggest that the feedbacks are not strongly positive. In the ice cores there’s a long lag (CO2 rises and falls after temperatures), and there is no definitive evidence of “amplification” coming out of the best ice core data. The climate models (which Dessler claims are independent) are programmed to have positive feedback, so they could hardly show anything else. That’s circular reasoning.
Furthermore, both satellites and weather balloons observe that the tropical upper troposphere is not warming faster than the surface (the climate models predict amplified warming there — the hotspot). If specific humidity were increasing significantly in the upper troposphere, wouldn’t that produce a long-term warming trend?
The peak effects of the water vapor amplification are supposed to occur between 200–300 hPa, and even the satellite data doesn’t show much support for that.
Guest Post by Garth Paltridge
Re The Dessler and Davis Paper
Its primary aim is to make the point that, in the authors’ opinion, the negative NCEP trends (reported in Paltridge et al – i.e. in PO9) in water vapour concentration in the middle and upper troposphere are spurious – and therefore that overall water vapour feedback in the climate system is indeed positive and is an amplifier of global warming. It supports this opinion with:
(a) A simple comparison of the NCEP trends with the trends from four other re-analysis data sets. The comparison indicates that the three most recently developed re-analyses have positive trends in the middle and upper troposphere.
(b) Correlations (from each of the re-analyses) between middle and upper level water vapour concentration qmu and surface temperature T. These show that, at least for overall short-term change on time scales less than 10 years, all the correlations are positive and thereby indicate positive feedback.
(c) Comment to the effect that the concept of long-term positive feedback is in accord with virtually all of the independent lines of evidence (models, observations, theory, and newer re-analyses).
RE (a): A central issue is that the NCEP re-analysis relies on balloon observations of water vapour, whereas all the other re-analyses use satellite data of one form or another. The other re-analyses may or may not use balloon data as well as satellite data. The ERA40 re-analysis certainly includes balloon data, and that particular re-analysis also has a tendency toward negative trends of middle and upper level water vapour.
Raw balloon data (tropical data as examined in PO9 for instance) seem to show a negative long-term trend of water vapour at the upper levels. Raw satellite data (as mentioned in PO9 and the Dessler paper for instance) seem to show a positive trend.
So the most likely and straightforward explanation of the difference between the outputs of the re-analysis schemes is that they (the schemes) are actually behaving as they were intended – namely, they are simply reflecting the behaviour of their different sources of input data. Which, if so, leads not to the question of whether some re-analyses are ‘newer’ than others, but to the question of which source of input data (balloon or satellite) has the greater potential for error, and whether, in either case, those errors would or could lead to the trends that are observed. Any significant attempt to resolve such a question would have to consider not only the potential errors of both the balloon and the satellite information, but also the possibility that different sorts of satellite data have been introduced into the re-analysis schemes at different times over the 30-year period since 1979.
The bottom line here is that it is a bit infra-dig to give the impression that the NCEP re-analysis is a single “outlier” pitched against a number of other independent re-analyses and can therefore probably be discarded. If one ignores the question of which is the more reliable as input data (satellite or balloon data), the balance of ‘likelihood of verisimilitude’ between NCEP and the others has to be more like 50:50.
And therefore it is also a bit infra-dig to talk fairly extensively about the well-known problems associated with balloon measurements and make no reference to the many and various problems associated with satellite data. (I have heard it said for instance that it is difficult enough to believe past satellite data on trends of total water vapour content – let alone of the water vapour concentration at any particular level).
Re (b): Much of the Dessler and Davis discussion is devoted to the correlation between changes in upper-level water vapour qmu with changes in surface temperature T. In particular it is devoted to the possibility flagged in PO9 that the long-term correlation may be negative even though the short-term correlation is positive.
The authors say that there is no theory to explain such a difference in the sign of the correlations. Suffice it to say that PO9 contains a diagram and discussion concerning two (admittedly only qualitative) theoretical suggestions as to how such an eventuality might occur. The suggestions concern possible causes of long-term increase in the stability of the lower atmosphere – an event which, according to the NCEP data, indeed seems to have occurred over the last few decades, and which, if real and continued, could confine a long-term increase of water vapour concentration to the convective boundary layer. (One of the possible causes is the relatively large increase in radiative heating in the middle troposphere associated with increasing CO2). This is not to say that such theories are correct, but the absence of any reference to them suggests a reluctance even to contemplate arguments on the other side of the fence.
The authors make the point that there is poorer agreement among the re-analyses about the qmu vs T correlation on time scales longer than 10 years than there is about the correlation on time scales less than 10 years. They attribute this to “handling data inhomogeneities” having more impact on long-term trends than short-term trends. Fair enough. But they could also have pointed out for the record that, at least at face value, the slopes of the long-term correlations displayed in their diagram are generally a lot less than those of the short-term correlations, and that some of them are indeed negative at certain levels. The bottom line here is that even a reduction of slope (if it were verified) would be very significant in the overall water-vapour feedback story.
Re (c): Superficially the comment is impressive. One wonders however what is this theory which is supposed to be an independent line of evidence – apart, that is, from the individual bits of theory so far built into the models. And in view of the discussion above about whether reference to the “newer re-analyses” is really germane to the issue, one wonders also about the significance of those analyses in the present context. And in view of the fact that the veracity of models relies (among many other things) upon the observations on which they are based, it is pushing things a bit far to say that models are truly “independent” evidence. (Perhaps more to the point in the context of models, their long-term trends of qmu depend among other things on simulation of vertical sub-grid-scale diffusion – the contribution of which is one of the most difficult characteristics of models to ‘nail down’ and verify independently).
The bottom line here is that “virtually all the independent lines of evidence” probably boil down only to the observations. And at the moment, bearing in mind the discussion with regard to (a) above, there is still a lot of work to be done to establish just what the observations are telling us.
———————————–
Of course Dessler and Davis are entitled to their opinion, and may even be proved right one day. But at the moment it won’t do the discipline much good if people assume simply on the basis of the existence of their paper that the issue is now resolved.
…
REFERENCES
Dessler, A. E., and S. M. Davis (2010), Trends in tropospheric humidity from reanalysis systems, J. Geophys. Res., 115, D19127, doi:10.1029/2010JD014192. [PDF]
Paltridge, G., Arking, A., and Pook, M. (2009), Trends in middle- and upper-level tropospheric humidity from NCEP reanalysis data, Theoretical and Applied Climatology, 98(3–4), 351–359. [PDF]
Thanks to Garth for being so helpful with advice and information.
Thanks to the ineffable wisdom of Rod Smith for giving me some appreciation of the formidable database amassed from weather balloons.
UPDATE: The numbers of weather balloons each month differ between groups and datasets; I’ve updated those figures to reflect the two sources I list.
Dessler is just another cog in the wheel of contemporary scientific cleansers. I guess he needed new tricks in Dessler 2010 since the printed version of Dessler et al 2008 is only good for kindling and toilet paper.
Dessler declared that water vapor feedback was strong, positive and “unequivocal”. No scientist would use the word “unequivocal”. Does Dessler have proper scientific training?
Another reason for the divergence of satellite data is that satellite data requires a radiative transfer model to convert measured radiances into water content, and in climate science, radiative transfer models tend to be subjective and are moving targets. The method used for geostationary weather satellites is to measure emitted radiation in a band around 7 µm which is particularly sensitive to water vapor absorption. The method used by the NOAA polar orbiters is to split the normal surface radiation channel into 2 bands, one somewhat sensitive to H2O absorption and another that is not; the relative ratio between the 2 channels is related to the total water content. These emissions are what you see on the weather reports as ‘water vapor imagery’.
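A toy sketch of that two-channel ratio idea (the absorption coefficient and the Beer–Lambert-style inversion are hypothetical placeholders of mine, not the actual NOAA retrieval):

```python
# Toy two-channel retrieval: radiance in the H2O-sensitive band is attenuated
# as column water rises, the reference band is not, so the ratio of the two
# encodes total water content. K is a made-up absorption coefficient.
import math

K = 0.02  # hypothetical effective absorption, per mm of column water

def channel_ratio(column_water_mm):
    # sensitive band / insensitive band, Beer-Lambert style
    return math.exp(-K * column_water_mm)

def retrieve_column_water(ratio):
    # invert the toy forward model
    return -math.log(ratio) / K

obs = channel_ratio(25.0)              # ~25 mm is a typical column value
print(retrieve_column_water(obs))      # recovers 25.0 in this ideal case
```

The point stands regardless of the toy numbers: the real inversion runs through a full radiative transfer model, so any change in that model shifts the retrieved water content and its trend.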
Here’s a plot of atmospheric water content, per the ISCCP data set.
http://www.palisad.com/co2/sat/wc.png
Around 1999, they changed the methodology for determining water vapor content, which seems to have biased the data down and smoothed out the variability. While the 25-year trend is negative, it’s hard to tell for certain owing to the change in the methodology. However, if you examine the data since 2000, a small negative trend, corresponding to the small cooling we have seen since 2000, is evident. The general relationship between water vapor and surface temperature is shown in this plot.
http://www.palisad.com/co2/sat/st_wc.png
George
Thinking back to my days when I worked in the high Arctic doing radiosonde flights (yes, a long time ago, ’79–’80), I seem to recall that the humidity basically tailed off after the tropopause, because it was darn cold up there and there was not much humidity…
Jo! It again proves sceptics lack imagination; it doesn’t matter how often we are told, we still have trouble believing 1+1=3.
Off topic, but James Delingpole has just won a prestigious peer reviewed award for his blogging on climate change. Follow the link on Jo’s blogroll.
Bob Malloy:
There are hundreds, nay thousands, of peer reviewed papers showing that 2+2=5. This latest research you have found shows we may have to go back and reexamine our projections. If 1+1=3, then 2+2 may well equal 6, so it’s worse than we thought!
Ken
A bit O/T,
An ABC moderator has tried to justify to Bulldust letting the insults “denier” and “denialist” through their moderation at this article:
Looks to me like there are TWO outliers – NCEP and JRA – leaving the mainstream studies bunched around 0.00. Does this mean that the lack of any trend in specific humidity is unequivocal?
Where does this leave Dessler’s claim that “all of the independent lines of evidence” support positive water vapor feedback? Observations (see graph), newer reanalyses (see graph), models (not evidence) and theory (yes! found it! Modeling theory is the sole line of evidence).
I’ve seen two papers that have claimed water vapor was increasing and therefore positive feedback was proven. The only problem was that both papers had cherry picked base periods that actually corresponded to global cooling, thus demonstrating a negative correlation and feedback!
Now for an alternative explanation as to what caused the decrease in H2O at 300 hPa.
Firstly, the decline occurred in a single step around the end of the century rather than as a gradual decline; otherwise absolute humidity was quite stable long term.
Secondly, the drop in humidity is clearly associated with a climatic shift as seen in the western Pacific warm pool SSTs.
Thirdly, the climate system seems to have responded with a large increase in tropical convection.
My theory is that an increase in convective efficiency causing higher rainfall helps dry out the upper atmosphere relative to lower-rainfall-producing systems. The precipitation aspect of the water cycle is largely ignored by climate researchers; mind you, so is the 5% decrease in clouds that clearly drives temperature, so it’s not much of a surprise!
-Garth Paltridge on Climate Audit, Mar 4, 2009
* * *
Common sense seems to suggest that it would be, well, aesthetically more pleasing as a theory, if water vapour feedback in response to warming was ultimately negative. If one views the Earth’s biosphere as a massively complex system of feedback loops tending towards maintaining stability, then water vapour feedback should be negative, therefore homeostatic, rather than positive, which would amplify any warming event. Positive water vapour feedback would mean every time the planet warmed (for whatever reason) then water would evaporate, raising humidity, raising the temperature, evaporating more water.
Obviously, this cycle would be limited by the amount of energy available so could not go over the fabled “tipping point” into a runaway greenhouse effect, but you might expect a greater amplitude in climate variability than the t-record shows. Also, any initial cause of warming should be followed by an extended period of warming after the initial forcing was long gone.
Perhaps the feedback direction of water vapour could be deduced from an analysis of the t-record?
For example, the warming peak during the El Nino of 1998 rapidly collapsed towards the trend-line. If water vapour was as massively positive as the CAGW hypothesis implies, shouldn’t the 1998 peak have produced a prolonged arc shape rather than a sharp pinnacle?
RobJM,
Yes, rainfall is a big piece. If you look at the relationship between atmospheric water content and cloud cover, there’s an interesting effect.
http://www.palisad.com/co2/sat/wc_ca.png
As the atmospheric water content increases, cloud cover increases up to about 90%, at which point it reverses even as water content keeps increasing, only to start increasing again later. To understand why, you need to see the relationship between temperature and cloud coverage.
http://www.palisad.com/co2/sat/st_ca.png
Keep in mind from my previous post that the relationship between surface temperature and water content is monotonic. The illustrative point of this is to show that the peak in cloud coverage occurs at an average temperature of about 0C. The reason is that the Earth’s energy balance equation changes. Above 0C, clouds reflect more energy than the surface, while below 0C, surface snow reflects the same, if not more, than the clouds. This changes the net effect of incremental clouds from a warming influence to a cooling influence.
Where rainfall comes in is that between 273K and 300K, clouds decrease even as water vapor increases, going counter to the intuitive effect of more temp -> more evaporation -> more clouds. The reason is that rainfall is increasing faster than evaporation, and this too acts as a negative-feedback-like effect.
If you are familiar with the transfer characteristics of a tunnel diode, you may see the similarity between this effect and negative resistance and you might notice where on the curve the climate system is operating.
George
What I find weird is that people expect a big blob of warm moist air to sit in place where it would normally rise. How is the CO2 supposedly preventing this “hotspot” from dissipating upwards and outwards? I don’t get how it can work even in theory…
A great post and reply from Garth Paltridge. Dessler has form:
http://jennifermarohasy.com/blog/2009/04/more-worst-agw-papers/
And after watching the debate between Dessler and Lindzen I have to say that Dessler’s form is not very pleasant.
I notice our PM Gillard has made a deal with Russia to export some of our uranium, ostensibly to power more nuclear stations. Yet we are still one of the few nations not using nuclear power nor intending to. I cannot get my head around the attitude of Labor and the Greens. Australia is not some pocket-size country crammed with umpteen people per square kilometre. It has more than enough space where reactors and waste could easily be placed with not a single atom touching anyone. Also, one would swear that it was Melbourne and Sydney that were bombed in August 1945 -(Pity they weren’t)- instead of Japanese cities. I mean, Australians then and now had much to cheer about, because the quick ending of WW2 saved countless Australian servicemen’s and prisoners’ lives. We were never prime targets during the Cold War. So I can never understand the utter phobia Australians (or some) have over uranium and nuclear power.
We live in a big gas-filled arc lamp. As you will know, a similar negative resistance effect also happens with arcs as the source impedance becomes a limiting factor. The polar aurora lights display this. This electric circuit would be modulated by the solar cycles and perhaps also by the Metonic cycles.
4 Metonic cycles or 7 solar cycles is about a Gleissberg cycle.
http://wekuw.met.fu-berlin.de/~SemjonSchimanke/EGU_2007_Poster_A0.pdf
These two frequencies would be heterodyned in the nonlinear device.
If you thought that was an odd theory, wait for this:
It is a very sad thing that the enforced flat-earth scientific/religious consensus prevented this modelling computer (an advance on “The Working Celtic Cross” of Crichton E M Miller) from making the 76-year modulation cycle it was able to predict more well known.
http://www.youtube.com/watch?v=DiQSHiAYt98
http://www.youtube.com/watch?v=4eUibFQKJqI
Siliggy, you may be interested in Louis Hissink’s electric universe 3-parter:
http://jennifermarohasy.com/blog/2008/05/we-live-in-an-electric-universe-part-1-by-louis-hissink/?cp=1
Tel,
I think the problem is that the usual methodology considers that all of the power absorbed by the atmosphere via GHGs is either returned to the surface (and then some) or retained by the atmosphere. This ignores the fact that about half of all power absorbed by the atmosphere is re-emitted back into space, albeit somewhat delayed. The predicted ‘hot spot’ is likely just a consequence of a failure to emit enough power into space, thus heat accumulates.
George
Climate Modelling Nonsense – CARBON DIOXIDE VAPOUR TRICK…………
http://www.quadrant.org.au/magazine/issue/2009/10/climate-modelling-nonsense
‘it is pushing things a bit far to say that models are truly “independent” evidence.’
The only time models can be regarded as any kind of evidence is when they consistently make predictions which are subsequently confirmed by observations: for instance, predicting the properties of yet-to-be-discovered elements from their position in the periodic table.
As far as I’m aware no climate science models have done anything like this. Any example, anyone?
Tel #14
It could be that it is rising, with the top cooling and the bottom warming.
Lots of scientific stuff going on here. To my simple mind the following from NOAA says heaps to me at least: http://thetruthpeddler.wordpress.com/2010/11/11/official-government-data-shows-u-s-is-still-cooling-at-a-rate-of-8-8-deg-fcentury/comment-page-1/#comment-402
co2isnotevil gives a perfectly believable explanation. Trenberth’s missing heat is somewhere between here and Pluto. At least it won’t be bothering us now or in the future.
Ken @ 9
Quite easily done, or, as they say, QED.
You agree 1.251 + 1.251 = 2.502 ?
All we do is round the numbers – in which case 1.251 ~ 1, and 2.502 ~ 3
THEREFORE 1+ 1 = 3. QED.
We can also make 2 + 2 = 6 by rearranging the data (I’ve even built a computer model to do this, so it must be right! 🙂 )
e.g. 2 * 1.251 + 2 * 1.251 = 2 * 2.502
Approximating:
2 + 2 ~ 2 * 3 = 6 QED again.
The “trick” is to ignore the error bars…
Cheers,
Speedy.
Two plus two can equal 6, if the values of two are large enough…
Cohenite @18…
The Electric Universe, ah the good old days. Nothing like the smell of paradigm shift in the morning 😉 I’m eagerly awaiting the discovery of the Higgs Boson, due any day now!
Do you remember that Louis was so pissed at me after those threads that he accused me of being a geophysicist from a rival mining company out to jump his company’s diamond claim by discrediting him somehow? I love Louis, I really do. But I wouldn’t know a mining claim from a fishing permit. I’m really just a plumber on the payroll of The Single Flush Toilet Cartel, here to help all you Big Tobacco operatives stop those greenies from saving the planet. After all, we all know that the collapse of civilization and possible human extinction is great for business!
Beam me up, Scottie!
http://www.youtube.com/watch?v=eyOGjuxWhrM&feature=related
http://thetruthpeddler.wordpress.com/2010/11/07/recent-trends-in-global-temperature-and-carbon-dioxide-concentration-show-no-correlation/
wes, Louis is one of the characters of the blogosphere and quite knowledgeable; but I do so hope he is wrong about the electric universe, or at least those parts of it which bar FTL travel; we will need such high speeds to escape the current crop of navel gazers and self-validating mental cream-puffs who are dictating social policy, especially AGW; agnotology is their great gift, and our only hope is that their influence is made up of tardyons.
Speedy:
There you go – I was right all along! And that’s 3 studies confirming the projections – with 95% probability.
Ken
Ken Stewart:
And I’ll need to go back to school and learn new math.
on another note:
Because of my work hours (11pm–7am), to fight boredom and keep me awake, I listen to the 2SM network talk overnight. I find the 2SM network receptive to sceptics; earlier this week a caller from the Gold Coast mentioned your work & website. The host (Gary Stewart), a genuinely nice bloke, stated he had come across a link to your website but had not bothered to explore it. He felt that it may be just the ravings of a crank.
I sent him an email with a short summary of your work. If you have the time, a call to Gary might be very informative for him and his listeners; I’m sure he would give you a good hearing.
His show is between 12am & 5am eastern daylight hours, 11pm to 4am Queensland time and the phone number is 131269, the cost of a local call.
Bob
Jo,
Climate science has not looked at pressure build-up. They ASSUME that since they see no ceiling, it could not happen. But all the evidence is pointing to that being exactly what is happening.
I’ll show you a missed point in science:
How much energy does a grain of sand hold?
If you stopped the planet dead, the energy held would fly off at 1669.8 km/hr at the equator. Add to this the 18.5 mi/sec shooting around the sun and the 300 km/sec that our solar system is moving.
That is a great deal of energy. Now apply that to every molecule on this planet’s surface.
That is quite an astounding amount of stored energy.
Pressure build-up in climate science does all kinds of funky stuff.
Yes, thank you. Very interesting. While reading that and the comments I was surprised that Fleming’s hand rules were not mentioned by anyone. Think about the flow of air around in cyclones, or hurricanes. There are a lot of electrons flowing around. This would generate a magnetic field at right angles. An interaction with the Earth’s magnetic field would occur. Just like the coriolis effect, there must also be a related spiral magnetic flow. High voltages would be created at 90 degrees to that rotation.
You may find this interesting.
In part 3 Louis Hissink mentioned Richard J. Blakeslee, Ph.D., of the NASA Global Hydrology and Climate Center. Seven years ago I emailed Richard Blakeslee twice with questions about ionospheric voltage. He was kind enough to give me two very detailed replies. In one of them he said:
Lance Pidgeon
Cohenite @ 28:
The case for the electric universe includes some very hard-to-deny observations. Please spend some more time on the subject, because I’d like to hear your interpretations.
Mark; it is not an area which I have had much to do with, other than reading Louis’s observations from time to time; however, it is not a left-field line of research, as this shows:
http://www.utdallas.edu/physics/faculty/tinsley.html
Just found this. Have not read it yet.
“Cloud Formation and the Possible Significance of Charge for Atmospheric Condensation and Ice Nuclei”
http://www.google.com/url?sa=t&source=web&cd=11&ved=0CBEQFjAAOAo&url=http%3A%2F%2Fwww.springerlink.com%2Findex%2FX0K833756788224T.pdf&ei=jBDfTJiUAYW6caa_3JcM&usg=AFQjCNHJbOrAP-XFaxx3Z_pUhyTPTmmoAA
I was reading above where you guys are proving 2+2=5 etc.; now consider this, and google “Why the Pentium can’t do maths”.
The presence of the bug can be checked manually by performing the following calculation in any application that uses native floating point numbers, including the Windows Calculator or Microsoft Excel.
The correct value is (4195835 / 3145727) = 1.333820449136241002
However, the value returned by the flawed Pentium would be incorrect beyond four significant digits:
(4195835 / 3145727) = 1.333739068902037589
Another test can show the error more intuitively. A number multiplied and then divided by the same number should result in the original number, as follows:
(3145727 x 4195835) / 3145727 = 4195835
But a flawed Pentium will return:
(3145727 x 4195835) / 3145727 = 4195579
Also consider this: the bug above is known and was discovered by accident. How many bugs in the floating point processor are still undiscovered?
And again, consider this: supercomputers are made up of “off the shelf” Intel processors. Older computers used P4s. The Tianhe-1A system at the National University of Defense Technology has more than 21,000 processors (google “supercomputer”).
So after millions and millions of calculations done in a climate model, with the error getting greater, done by faulty math processors with an unknown “bug”, who is going to guarantee the end result?
We have become too reliant on the ability of these machines to be accurate. Blind faith?
A computer did it, so it must be OK!!
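For what it’s worth, the two checks quoted above are easy to re-run (a sketch; any correctly functioning FPU prints the correct values, while an original flawed Pentium diverged after the fourth significant digit):

```python
# Re-run of the classic FDIV-bug test values quoted above.
a, b = 4195835, 3145727

print(a / b)         # correct FPU: 1.3338204491362410...
print((b * a) / b)   # correct FPU: exactly 4195835.0

# A passing check only shows that this particular failing input divides
# correctly; it says nothing about other, undiscovered bugs.
assert (b * a) / b == a
```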
Chris,
I used to design floating point processors and Sparc chips for Weitek, so I definitely remember this bug. It boosted the sale of our high performance math coprocessors.
George
Burning twice the coal since warmists first said we should stop……..
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/burning_twice_the_coal_since_warmists_first_said_we_should_stop/
The models are wrong by William Kininmonth: Meteorologist and former head of Australia’s National Climate Centre. He was also Australian delegate to the World Meteorological Organization’s Commission for Climatology (1982-98) | Climate Realists
http://climaterealists.com/index.php?id=3855
It’s really even worse than 1 + 1 = 3 or 2 + 2 = 5 or 6 because 2 = 1! This fact throws a monkey wrench into every calculation.
Here’s the proof:
This will force everyone back to school to relearn everything from basic arithmetic on up. Sorry about that.
Friends:
I write to make two points.
My first point:
(sarc on)
The Muir-Russel Inquiry determined that any practice is acceptable in ‘climate science’ so long as it is commonly used in ‘climate science’. For example, non-disclosure of data is such an acceptable common practice. Dessler’s work seems to be an example of the practice known as “hide the decline” which is also commonly used in ‘climate science’. Therefore, according to the methods of ‘climate science’, there can be no doubt that Dessler’s work is entirely correct. Furthermore, Dessler’s work was peer reviewed and published in a journal which is approved by the ‘Hockey Team’ so its findings are right and cannot be doubted.
(sarc off)
My second point:
I am completely serious when I point out that the discussion (above) of addition is important in the context of Dessler’s paper and any other work that considers compilation of measurements.
In a measurement system the value of (1+1) can be any value from 1 to 3, and this is because of rounding errors.
If the measurement device (e.g. a thermometer) measures to unit values of a datum (e.g. deg.C) then the device indicates a true value which is anywhere between 0.5 and 1.5 as being 1.
So, consider two cases.
In case A, (0.5 + 0.5) = 1.0 which rounds to 1
providing a mean (i.e. an average) of the two results of 0.5 which rounds to 1
In case B, (1.4 + 1.4) = 2.8 which rounds to 3
providing a mean (i.e. an average) of the two results of 1.5 which rounds to 2
However, in both cases the instrument records
(1 + 1) = 2
providing a mean (i.e. an average) of the two results of 1
And, in both cases, the error of the results is provided as +/-0.5
(for simplicity of explanation, I am ignoring proper RMS error analysis here).
So, for both cases the measured result is 1 +/-0.5
i.e. the measured values are seen as being the same, and they are the same within the range of stated error (although 1.4 is much larger than 0.5).
This interaction of error range and rounding errors has no effect if throughout a series of measurements
(a) the measured parameter is varying randomly
and
(b) the instrument does not drift.
But this is rarely the case.
Consider the case where the instrument drifts by 0.2 during a set of measurements. This would not alter its accuracy (of +/-0.5) but would alter its precision, with a resulting effect on the measurement result.
This is clearly understood by considering a set of data where all the true values are 0.45.
If there were no measurement drift then every value would round to zero and the mean value determined would be 0 +/-0.5 (which is clearly correct).
But with an instrument drift of 0.2 (occurring linearly over the data series) most values would be recorded as being more than 0.5, so the mean value determined would be 1 +/-0.5 (which is clearly not correct).
(Similar effect is provided by a data set varying in a non-random manner.)
These matters are not significant when measuring large differences between data sets. But they can be very significant if small changes are to be detected.
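A minimal simulation of the drift effect described above (my sketch; rounding half-up to whole units, as an instrument reading to unit resolution would):

```python
# True values are all 0.45; the instrument drifts linearly by 0.2 over the
# series and records readings rounded to whole units (half rounds up).
import math

def round_half_up(x):
    return math.floor(x + 0.5)

N = 1000
no_drift   = [round_half_up(0.45) for _ in range(N)]
with_drift = [round_half_up(0.45 + 0.2 * i / (N - 1)) for i in range(N)]

print(sum(no_drift) / N)     # 0.0  -> determined mean rounds to 0 (correct)
print(sum(with_drift) / N)   # ~0.75 -> determined mean rounds to 1 (wrong),
                             # even though the drift is well inside +/-0.5
```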
Consideration of Dessler’s data
The graph from Dessler’s paper reproduced in the above article shows the
“Trend in specific humidity (%/year)”
And that graph provides error bars for the satellite data.
The error bars of the satellite data are about +/-0.2 %/year at altitudes where the satellite and radiosonde data diverge.
But the values obtained from the satellite data vary with altitude and, importantly, they differ by ~0.4 %/year at altitudes where the satellite and radiosonde data diverge. This difference between the satellite data sets is of similar magnitude to the maximum difference between the satellite data and the radiosonde data.
So, the graph indicates that it is very likely that the difference between the satellite and radiosonde data is a result of instrument drift of the satellite measurement systems which is within the measurement accuracy and, therefore, too small to be detected.
Of course, error bars for the radiosonde data are not provided on the graph, so it is quite possible that the radiosonde data may be an outlier. But the presented data strongly suggest this is not the case.
Richard
Climate change no longer scares Europe………..
http://www.washingtontimes.com/news/2010/nov/5/climate-change-no-longer-scares-europe/?page=2
Siliggy, re 35: I am pretty sure that if we really want to control climate or mitigate the effects of ice ages, we need to build very large electrostatic filters on every continent. I doubt we can control volcanic activity, so we have to be able to control aerosols.
Not to mention much easier to deal with than limiting puny carbon dioxide!
Supercooled ice nucleation. Fun to watch.
http://www.youtube.com/watch?v=wzHXiGdMvkU
http://www.youtube.com/watch?v=BZDhFkF-hPc
http://www.youtube.com/watch?v=n_H5ZIoZSBo
Cohenite @ 34
Only a denialist would call present research on Electric Universe “left field” (although there is room for a pun on the word field) 🙂
If you have been watching the other threads you’ll find this recently from:
Oliver K. Manuel
Former NASA Principal Investigator for Apollo
http://joannenova.com.au/2010/10/is-the-western-climate-establishment-corrupt-part-9-the-heart-of-the-matter-and-the-coloring-in-trick/comment-page-5/#comment-125028
Not electric universe per se but good to know…
Roy Hogue #40
Roy
Don’t come back.
Never. Ever. Never
Mark D.:
November 14th, 2010 at 12:02 pm
Siliggy, Re 35, 43
How many ways could geoengineering go wrong, considering they would act to stop a warming that has already stopped and gone into reverse?
http://www.slate.com/id/2268123/
L.P., you know, as do many observant blokes, that our biggest problem is not warming but instead might be the commencement of the next Ice Age.
If we’re lucky we’ll figure it out before the end is near….
G/Machine, you had better hope he is right; if not, the universe will implode.
Let me explain. The formula for gravitational attraction is:
F = G (m1 m2 / s^2)
m meaning mass
G meaning the gravitational constant
s meaning the distance between the centres
Now imagine two balls. One ball has a diameter of 4 and the other has a diameter of 2.
The small ball is inside the large ball, exactly in the middle. Now to find the centre of each ball you would measure to the middle, and you get Roy’s numbers 2 and 1.
Now Roy is correct, so the ‘s’ number above is the long-term average of 1 and 2. If Roy was wrong then s would be 0, because the two balls are on the same centre. Then you would have F = G (m1 m2 / 0^2); the result would be infinite gravity that would destroy the universe in the opposite of the big bang.
Siliggy: #49
Your supposition is correct, but only for very small values of zero.
G/Machine @ 40
Agree with you, but with a 🙂 !
Roy’s logic would get past the dumbest 88% of the public and 99% of the journalists!
Cheers,
Speedy
It may just be me, but there seems to be a reporting blackout on the upcoming Cancun climate conference. The only site I can find addressing this void seems to be
http://ourmaninsichuan.wordpress.com/
which is running a Cancun week of blogs.
Pointman
Roy Hogue @ 40:
What I learned is that dividing by zero destroys the coherence of any analysis 🙂
I am aware of another way to make ‘2 = 1’: when I started programming in Fortran II (don’t ask when), the language treated constants like variables. If you entered the character ‘2’, for example, into a program, the character was converted by the compiler into a pointer to a table with the value 2 at that location. This is exactly what the compiler did with variables, also: ‘x’ would become a pointer to a table with the current value of x. The only difference was that the compiler pre-loaded the table for numbers.
However, Fortran allowed functions to change the value of their arguments. If you put ‘2’ in a function argument, and inside the function that argument was referred to as ‘x’, and contained the statement ‘x = x-1’, then for the rest of the program, whenever the numeral ‘2’ appeared, the value it had would be ‘1’.
Lots of fun.
Siliggy, Rereke, your sense of humor is even better than mine. I got a good laugh this morning. Thanks!
But all kidding aside, 2 might really = 1 for values of 2 that are very close to 1.
Bob @53,
You weren’t supposed to notice!
But yes, lots of fun. If you do a Google advanced search on the phrase “1=2” and demand that the word “proof” also appear you can get a considerable list of sites with such stuff. The one I used is here. It looks like it was a challenge to students to find the fallacy.
FORTRAN eh? That dates us both. I was only involved in one project done in FORTRAN (thankfully). But when I started teaching part time in 1993 I taught FORTRAN for several years. I was glad when it was dropped from the curriculum and I could move on to C and then C++.
If there are two languages that should have been blown up and forgotten about they are FORTRAN and COBOL. But both are still very much alive.
Roy,
Aren’t many of the climate models written in Fortran? It seems that a lot of the SW produced by GISS is in Fortran. Fortran was one of the first languages I developed with as well, but I’ve moved on and all of my climate modeling has been in C. Fortran used to be considered the scientific programming language, but with more modern compilers and run-time libraries, C is now far better and even more portable.
George
How to turn Australia into a basketcase………..
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/how_to_turn_australia_into_a_basketcase/
Interesting Reading about this evil UN climate conference in Cancun…………
From the “New American”:
http://www.thenewamerican.com/index.php/tech-mainmenu-30/environment/4788-un-pushes-economic-scheme-for-climate-conference
Is it me, or does this read like no one is getting copies of the agenda except with the approval of the Federal Republic of Germany?
http://unfccc.int/files/parties_and_observers/notifications/application/pdf/20101105_notif_parties.pdf
Apparently, the Tianjin, China conference on climate change was where everyone sorted out in camera what is to be achieved sub rosa at Canned Corn:
http://unfccc.int/meetings/intersessional/tianjin_10/items/5695.php
So far, this is all that is “out there” in terms of the actual agenda items to be discussed. WTF
http://unfccc.int/files/press/news_room/statements/application/pdf/101009_cf_speaking_notes.pdf
Here is the Website for the UN Climate Change conference at Cancun, their VIRTUAL GATEWAY, which becomes operational 29 November 2010. Again, no specific agenda items:
http://unfccc.int/meetings/cop_16/virtual_participation/items/5780.php
This Gateway tells me only one thing. The attitude and official stance of the Un-Nations World Thermostat Control Committee is this:
“We know what’s best for you without consulting with you. Just don’t try to find out what it is or tell us what to do.”
Ever since FORTRAN 90, it’s had objects and explicit pointers (pointers were always implicit, and you could hack the code to use them). Recent versions have array operations and parallel execution control. It’s true that with enough run-time libraries, C can be brought almost up to the numerical level of a bare FORTRAN compiler.
Not to mention all the numerically friendly features for scientific programming, such as array indices that can start and stop at arbitrary integers.
One area where C has never even gotten close, however, is in informative diagnostic messages from the compiler 🙂
Speaking from my experience at NIST — your average physical scientist is much more likely to be able to write a reasonably error-free program in FORTRAN than in C. (Still not very likely, however.)
George,
The answer is certainly yes. But the language itself doesn’t tell us anything about the goodness of the program. I was referring to the fact that there is a very large base of FORTRAN code and it’s completely impractical to sit down and rewrite it all in a newer language. So now we have structured FORTRAN and its successors. The fact that newer code is done in FORTRAN may be from nothing more sinister than that FORTRAN is what was taught in the engineering, chemistry or physics curriculum.
You are right that C would be a better choice. But once you have something started in FORTRAN it’s a little difficult to modify it using C.
Just a comment on Bob’s assertion,
When you have to be your own quality control the chance of errors increases significantly. The language may not count for much because logic errors are easy to make and sometimes very hard to detect. The syndrome is like proofreading your own work. You tend to read what you “know” is there rather than what’s actually there (even in testing the program). C does have some beautiful little traps for the unwary however.
From
http://physicsworld.com/cws/m/1902/279338/article/multimedia/44247
physicsworld.com newswire (Week 46)
GCMs – how accurate are they?
http://www.informaworld.com/smpp/section?content=a928051726&fulltext=713240928
A comparison of local and aggregated climate model outputs with observed data
Authors: G. G. Anagnostopoulos, D. Koutsoyiannis, A. Christofides, A. Efstratiadis and N. Mamassis
Abstract
We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.
Dessler doesn’t believe facts? What’s new? Just in from Scotland: the Scottish Energy Minister refuses to listen to a local expert: http://www.telegraph.co.uk/news/newstopics/politics/scotland/8129883/Bonkers-green-energy-risks-power-shortages.html
No doubt about the AGW scam: it has made, is making, and will make fools out of many. Mainly politicians and their scientific advisors.
A piece with some relevance to Australia, since your tax dollars are paying for it. The Personal Carbon Credit Card scheme …
http://ourmaninsichuan.wordpress.com/
Pointman
My question is: on what basis do people assume the radiosonde data to be incorrect, other than the fact that the data does not conform to their theory?
The last time I used a radiosonde (not long ago) the temperature, humidity and pressure (PTU) sensors were all calibrated moments before release. The humidity sensor is fitted with a heater to stop ice accumulation distorting the measurement. The data is telemetered back down to the ground and processed in real time.
On what grounds can someone simply ignore such data? Does this latest East Anglia CRU hack explain this? No, of course not; this is a prime example of the level of corruption that has spread amongst climate science.
James Delingpole wins online journalism prize
Sorry if this is off topic, however I thought some regular bloggers here would be pleased to see this.
http://blogs.telegraph.co.uk/news/damianthompson/100063465/telegraph-blogger-james-delingpole-wins-bastiat-prize-for-online-journalism/
Bet the Moonbat is green with envy!
@crakar24:
At the risk of having to give a tediously long explanation, the East Anglia thing wasn’t a “hack”, it was a leak.
Pointman
Pointman,
Sorry, I meant that Dessler is a hack, and he most likely is on the payroll of the CRU, as this is where all the dodgy scientists hide. I meant no reference to “hacked or leaked emails”.
All this talk of 2 + 2 = 5… I am going to have to listen to some Radiohead when I get home now 🙂
BTW the lyrics are very apt:
http://www.greenplastic.com/lyrics/225.php
Sadly the first link on their official web site is to 350.org:
http://www.radiohead.com/deadairspace/
Ironic…
Dick Smith global warming HYPOCRITE!!!
Dick Smith gases on……….
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/dick_smith_gases_on/P0/
Cancun – more dangerous than Copenhagen………
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/cancun_more_dangerous_than_copenhagen/
$10K Climate Challenge…….
Peter Laux, Locomotive Engineman from Australia, “will pay $10,000 (AUS) for a conclusive argument based on empirical facts that increasing atmospheric CO2 from fossil fuel burning drives global climate warming.”
http://climateguy.blogspot.com/2010/11/10k-climate-challenge.html
Gillard looks to China and bankrupt California for inspiration……
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/gillard_looks_to_china_and_bankrupt_california_for_inspiration/
What a stupid, useless bogan……
Re the Fortran discussion above and its evolution-
This came up in a discussion on programming around the end of the 1980s.
The question was “What language will we be using in 2000?”
The answer was “I don’t know what language but we’ll still be calling it Fortran”
And it’s the coolest, wettest spring in a long time here in Qld. Halfway through November and we haven’t come near the average maximum yet – and we’re way over the rainfall average.
Ken
wendy, re 73
Should I take his money? I can show with empirical evidence that doubling CO2 will cause about 0.6C +/- 0.3C of incremental warming at the surface. As to whether or not this is a ‘driving’ or ‘forcing’ effect, I wouldn’t characterize it as such, but the IPCC would, even at a much lower sensitivity; so it depends on your definition of ‘driving’, which he left fuzzy. I would maintain that the evidence showing only a small effect is sufficient to win this challenge.
George
George
Your problem is that although you can show (say) in a laboratory that a doubling of CO2 might increase the efficiency of atmospheric infra-red absorption (in the target wavelength, of course) by a poofteenth of a percent, what does that mean?
How would you translate this minor change in absorption into a proven temperature rise, and demonstrate the absence of negative feedbacks?
The alarmists have this same issue but compound their problems by making ridiculous claims (The Maldives are sinking etc) and getting caught trying to dodgey up the evidence (e.g. hide the decline). So it’s a lot easier to prove they’re wrong than it is to prove they’re right. Unfortunately for them, they, as the champion of the theory, are the ones who have to substantiate it. Something they have failed to do with monotonous regularity but with lots of noise and hand-waving.
Your number is probably about right, by the way. A lot more right than the IPCC’s, based on the paleoclimate evidence and the weak temperature trending of the last 150 years.
Cheers,
Speedy
PS: Mr. Bulldust: Happy Climategate Day!
@crakar24:
Apologies, I should have read more carefully.
Pointman
Speedy,
You seem to be caught in the same trap as the warmists, where you are confusing open loop gain with feedback. Changing CO2 concentrations affects the open loop gain. Clouds are the only effect properly called feedback. The way to think about it is that the net atmospheric opacity sets the net climate system gain (closed loop gain), where the net climate system gain is the ratio of emitted surface power to incident solar power.
We can compute the net opacity relatively easily. Per HITRAN-based atmospheric simulations, the clear sky atmosphere captures about 62% of the surface radiation (this increases to almost 63% when CO2 is doubled). Per ISCCP data, the average cloudy sky absorbs about 82% of surface emitted power, which increases to almost 83% when CO2 is doubled, and the percentage of the surface covered by clouds is 66%.
Half of the power absorbed by the atmosphere is radiated into space and half is returned to the surface, so the net atmospheric opacity is the cloud weighted sum of half the clear sky and cloudy sky absorption. This is calculated as,
NetOpacity = (1-0.66)*0.62/2 + 0.66*0.82/2 = 0.376
This says that for each watt/m^2 emitted by the surface, 624 mW escape, while 376 mW are returned to the surface. We can calculate the net climate system gain as 1/(1 - NetOpacity) = 1.6.
We can cross check this against the data by considering the solar constant is 1366 W/m^2 and the average albedo is about 0.3. This means that on average, 239 W/m^2 of power arrives from the Sun to heat the planet. If we multiply 239 times 1.6, we get, 382 W/m^2. Converting this back to a surface temperature with SB produces 286.5K, which is pretty close to the current 25-year average global temperature. And of course, 239 W/m^2, when converted to a temperature, is 255K.
Keeping clouds constant, doubling CO2 increases the net opacity to about .381, which increases the gain to 1.615. If we multiply 239 W/m^2 by .015, which is the increase in gain caused by doubling CO2, the net surface power increases by about 3.6 W/m^2, which converts to a temperature increase of about 0.6C (which is where my estimate comes from).
At this point, the only thing that can change this result is the percentage of clouds covering the planet. If we bound the percentage change in cloud coverage caused by doubling CO2 at +/- 5%, the range of cloud coverage becomes 0.63 to 0.69. Plugging this into the opacity calculations, the expected range in the net opacity (after doubling CO2) would be between .375 @ 63% and .384 @ 69%, producing a range in the closed loop gain of between 1.6 and 1.623, which sets the expected range in incremental surface power from doubling CO2 to between 0C and 1C.
The actual range is a little smaller, since as clouds increase the gain, more solar energy is reflected and the surface cools.
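The arithmetic above is easy to reproduce. The sketch below simply re-runs the numbers quoted in the comment (the 0.62/0.82 absorption fractions, 66% cloud cover, and the roughly 0.63/0.83 doubled-CO2 values); it is not George’s actual code.

```python
# Re-run of the opacity/gain arithmetic described in the comment above.
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W/m^2/K^4

def net_opacity(cloud_frac, clear_abs, cloudy_abs):
    # cloud-weighted fraction of surface power returned to the surface;
    # half of what the atmosphere absorbs is assumed re-emitted to space
    return (1 - cloud_frac) * clear_abs / 2 + cloud_frac * cloudy_abs / 2

def surface_temp(power_in, opacity):
    gain = 1 / (1 - opacity)       # closed-loop gain
    return (power_in * gain / SIGMA) ** 0.25

power_in = 1366 / 4 * (1 - 0.3)    # ~239 W/m^2 of post-albedo solar input

t_now = surface_temp(power_in, net_opacity(0.66, 0.62, 0.82))  # ~286.7 K
t_2x  = surface_temp(power_in, net_opacity(0.66, 0.63, 0.83))  # ~287.3 K
print(t_now, t_2x, t_2x - t_now)   # doubling CO2 adds roughly 0.6 C
```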
George
I plotted up the NCEP Reanalysis Total Column Water Vapour data (the estimated total water vapour in a given atmospheric column) which I hope others might find easier to understand.
[While some, like Dessler, don’t like this dataset, there is quite a bit of new research using GPS receivers that confirms the NCEP data: GPS receivers are apparently very good at detecting water vapour.] There is something like just 1 inch, or 2.5 cm, or 25 kg per m^2, of water vapour in the atmosphere at any one time.
First, water vapour is strongly affected by the ENSO cycle. This means one can cherry-pick a certain period and come up with all kinds of incorrect correlations and conclusions. Water vapour lags behind the ENSO cycle by about 3 to 4 months.
http://img574.imageshack.us/img574/9051/ensowatervapour.png
It is actually more closely correlated with the temperature cycle – higher temps, more water vapour (there is something like a 1 month lag in water vapour versus temperature most of the time).
http://img261.imageshack.us/img261/1766/watervapourtempstime.png
But the actual amount of water vapour in the NCEP dataset is not increasing as predicted by global warming theory. The theory is based on the Clausius–Clapeyron equation, which predicts about a 7% increase in water vapour per 1C increase in temperature (for this part of the equations). All the climate models build in between a 6% and 8% increase in water vapour feedback per 1C increase in temperature. The models also include a 2% increase in cloudiness and precipitation for each 1C.
Well, water vapour is changing at a much slower rate than predicted and included in the models (something like 50% of the rate predicted, which makes it about the 10th individual climate index that is changing at 30% to 50% of the predicted rate).
Here is a scatter of the NCEP Reanalysis water vapour versus temperature, and the theoretical increase (note the theoretical increase does vary a little with pressure level, with the 300 mb level having the largest increase, but the average is 1.68 kg/m2/K, or 1.68 mm/K).
http://img190.imageshack.us/img190/7007/watervapourscattertemps.png
Clouds and precipitation do not appear to be changing at all, so that contradicts the theory as well.
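The ~7% per degree figure is easy to check against the Clausius–Clapeyron relation, d(ln e_s)/dT = L/(R_v T^2). A quick sketch (standard values for L and R_v; the sample temperatures are my choice):

```python
# Fractional change in saturation vapour pressure per kelvin, from
# Clausius-Clapeyron: d(ln e_s)/dT = L / (R_v * T^2).
L_VAP = 2.5e6   # latent heat of vaporization, J/kg
R_V   = 461.5   # specific gas constant for water vapour, J/(kg K)

for T in (273.0, 288.0, 300.0):
    print(f"T = {T:.0f} K: {100 * L_VAP / (R_V * T**2):.1f} %/K")
# ~7.3 %/K at 273 K, ~6.5 %/K at 288 K, ~6.0 %/K at 300 K --
# bracketing the 6-8% per degree said to be built into the models.
```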
More from “the science might not be settled” department?
“In 1996, the constructal law was formulated and proposed to expand thermodynamics in a fundamental way.
First was the proposal to recognize that there is a universal phenomenon not covered by the first law and the second law. That phenomenon is the generation of configuration, or the generation of ‘design’ in nature.”
Check out
http://wattsupwiththat.com/2010/11/15/the-constructal-law-of-flow-systems/#more-27870
and particularly the part re GCMs
Another Ian, good link and perfectly reasonable concept! thanks
co2isnotevil 79:
The amount of cloud doesn’t have to change in order to change the climate. Just where and when that cloud occurs. That in itself makes the construction of climate models a folly; or at best a pointless exercise in mathematics and programming.
As a Gedankenexperiment, consider e.g. the situation where all cloud forms 6 hours after noon in a particular location and then dissipates 6 hours before noon the following day; on a rotating globe. Compare that to the opposite situation. That gives you the magnitude of the error bars; how much you can reasonably be wrong; if everything else remains constant.
We need to know more about the climate system, but even knowing the mechanisms isn’t going to get us anywhere because the state of the climate system is largely indeterminate; and probably will be for a very long time.
Hence my suggestion that it’s folly. Our energies are better employed in resolving real problems in the real world.
Berfel,
Clouds, as they relate to the way the climate system operates, are an aggregate property integrated over time and space. Local effects like you suggest will cancel and/or aggregate. For example, 12 hours of clouds and 12 hours of clear per day will average to 50% cloud coverage.
GCM’s attempt to simulate the coming and going of clouds on short time scales, and this method is subject to a high degree of uncertainty; your concern would apply to that kind of methodology. Behavioral simulations and analyses like mine are concerned with the fraction of cloud cover required, on a per-grid basis, averaged across 5 or 30 days per ‘tick’. This is the difference between trying to model and analyze weather (GCM’s) and trying to model and analyze the climate.
George
Dr D is one of the most pernicious of the AGW promoters. He is as cynical as Mann but works more behind the scenes.
So what do we have here then? Yet another example of alarmists fixing the peer review system in their favour? Or friends of alarmists being engaged to purposely write a paper that says the opposite of a paper they find inconvenient, which is then reviewed and published in an alarmist-friendly publication?
Yet while a scientist tries to do his job he gets roadblocks left, right and centre.
Mailman
Just a note that I have published an article on this topic. After Part One, someone asked me to comment on your article, so I have made some comment:
http://scienceofdoom.com/2011/06/05/water-vapor-trends-part-two/