Joint Post: Geoff Sherrington and JoNova
The IPCC Synthesis Report first order draft has been leaked (h/t Tallbloke). It is part of the big Fifth Assessment Report; see the parts already released here. The Synthesis Report supposedly summarizes the science. In the real world the topic du jour is the plateau, pause, or hiatus in warming, which the IPCC can no longer ignore. Instead the masters of keyword phrases test new bounds in saying things that are technically correct, while not stating the bleeding obvious. Luckily we are here to help them. :-)
Translating IPCC-spin:
“The rate of warming of the observed global-mean surface temperature has been smaller over the past 15 years (1998-2012) than over the past 30 to 60 years (Figure SYR.1a; Box SYR.1) and is estimated to be around one-third to one-half of the trend over the period 1951–2012. Nevertheless, the decade of the 2000s has been the warmest in the instrumental record (Figure SYR.1a).”
Translated: Yes, temperatures are not rising as fast as we predicted, even though more CO2 was pumped out faster than ever. Let’s ignore that this shows the models were wrong, the important thing is to […]
Nicola Scafetta has a new paper (in a long line of papers) on a semi-empirical model which has a better fit than the Global Circulation Models (GCMs) favored by the IPCC. We ought to be careful not to read too much into it, but nor to ignore the message in it about the grand failure of the GCMs. Scafetta used Fourier analysis to find six cycles, then used those six cycles to produce a climate model he runs for as long as 2000 years, which seems to match the best multiproxies. In terms of discovering the absolute truth about the climate, this is not an end-point way to use Fourier analysis, as it is just “curve fitting”. With six flexible cycle frequencies (plus amplitude and phase) there are 18* tuneable parameters, more than enough to model any wiggly line on a graph, and there are scores of astronomical cycles to pick from. *[Nicola Scafetta replies to this below, pointing out he uses the “6 major detected astronomical oscillations”, and their phases are fixed. I am happy to be corrected. His model is more useful than I thought. Apologies for the misunderstanding. – Jo]
But Scafetta’s work suggests it’s madness not to pay […]
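To see why curve fitting with a handful of sinusoids is so flexible, here is a minimal sketch (ours, not Scafetta’s code) that fits six sine waves, each with a free amplitude and phase, to an arbitrary wiggly series; the cycle periods are illustrative, not his detected ones.

```python
# Minimal illustration of flexible curve fitting (not Scafetta's model):
# six sinusoids, each with a free amplitude and phase, fitted to a random walk.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1850, 2014)                          # years
wiggle = np.cumsum(rng.normal(0, 0.02, t.size))    # any wiggly line will do

periods = [9.1, 10.4, 20, 60, 115, 983]            # example cycle lengths (years)

# A*sin(wt) + B*cos(wt) is one sinusoid with free amplitude and phase,
# so the fit is linear in the 2*6 + 1 unknowns and solvable by least squares.
cols = [np.ones(t.size)]
for p in periods:
    w = 2 * np.pi / p
    cols += [np.sin(w * t), np.cos(w * t)]
X = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(X, wiggle, rcond=None)
fitted = X @ coeffs
print("Residual spread as a fraction of the series spread:",
      round(float(np.std(wiggle - fitted) / np.std(wiggle)), 2))
```

The fit typically soaks up much of the low-frequency wiggle even though the series is pure noise, which is the caution raised above about flexible curve fitting; it says nothing either way about whether fixed-phase astronomical cycles are physically meaningful.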
And the public conversation finally starts to move on to discussing not whether the IPCC is wrong, but why it was wrong, and what we need to do about it. Credit to Judith Curry and the Financial Post. I’ve posted a few paragraphs here. The whole story is in the link at the top. – Jo
Judith A. Curry, Special to Financial Post
Kill the IPCC: After decades and billions spent, the climate body still fails to prove humans behind warming
The IPCC is in a state of permanent paradigm paralysis. It is the problem, not the solution
The IPCC has given us a diagnosis of a planetary fever and a prescription for planet Earth. In this article, I provide a diagnosis and prescription for the IPCC: paradigm paralysis, caused by motivated reasoning, oversimplification, and consensus seeking; worsened and made permanent by a vicious positive feedback effect at the climate science-policy interface.
In its latest report released Friday, after several decades and expenditures in the bazillions, the IPCC still has not provided a convincing argument for how much warming in the 20th century has been caused by humans.
We tried a simple solution […]
Finally climate scientists are starting to ask how the models need to change in order to fit the data. Hans von Storch, Eduardo Zorita and co-authors in Germany pointedly acknowledge that even at the 2% confidence level the model predictions don’t match reality. The fact is, the model simulations predicted it would get warmer than it has from 1998-2012. Now some climate scientists admit that there is less than a 2% chance that the models are compatible with the 15-year warming pause, according to the assumptions in the models.
In a brief paper they go on to suggest three ways the models could be failing, but draw no conclusions. For the first time I can recall, the possibility that the data might be wrong is not even mentioned. It has been the excuse du jour for years.
Note in the chart that while the 10 year “pause” passed the basic 5% test of statistical significance, by 13 years the pause was so long that only 2% of CMIP5 or CMIP3 model simulations could be said to agree with reality. By 16 years that will be 1% of simulations. If the pause continues for 20 years, there would be “zero” […]
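As a rough sketch of the kind of consistency test involved (not the authors’ code, and with made-up numbers purely for illustration): compare an observed 15-year trend with the spread of 15-year trends across an ensemble of model runs, and ask what fraction of runs warms as slowly as the observations did.

```python
# Sketch of an ensemble consistency check (illustrative numbers, not CMIP data):
# what fraction of simulated 15-year trends is at or below the observed trend?
import numpy as np

rng = np.random.default_rng(1)

observed_trend = 0.04      # deg C per decade over the pause (illustrative value)
# Pretend ensemble of 15-year trends from model runs (deg C per decade):
simulated_trends = rng.normal(loc=0.21, scale=0.10, size=1000)

fraction_below = np.mean(simulated_trends <= observed_trend)
print(f"Runs trending at or below observations: {fraction_below:.1%}")
```

If that fraction drops to a few percent or less, the observed pause sits in the far tail of what the ensemble allows, which is the sense in which the 2% figure above is used.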
How bad are these global forecast models?
When the same model code with the same data is run in a different computing environment (hardware, operating system, compiler, libraries, optimizer), the results can differ significantly. So even if reviewers or critics obtained a climate model, they could not replicate the results without knowing exactly what computing environment the model was originally run in.
This raises a telling question: What kind of planet do we live on? Do we have an Intel Earth or an IBM one? It matters. They get different weather, apparently.
There is a chaotic element (or two) involved, and the famous random butterfly effect on the planet’s surface is also mirrored in the way the code is handled. There is a binary butterfly effect. But don’t for a moment think that this “mirroring” is useful: these are different butterflies, and two random events don’t produce order, they produce chaos squared.
How important are these numerical discrepancies? Obviously it undermines our confidence in climate models even further. We can never be sure how much the rising temperature in a model forecast might change if we moved to a different computer. (Though, since we already know the models […]
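A toy demonstration of why the computing environment can matter in a chaotic system (a generic illustration written for this post, not climate-model code): perturb a starting value by roughly the last decimal place a double can hold, the size of difference a different compiler or math library can introduce, and watch two otherwise identical runs of a chaotic map drift apart.

```python
# Toy demonstration of sensitive dependence: two runs of the chaotic logistic
# map that differ by about one part in 10^15 diverge within a few dozen steps.
x_a = 0.400000000000000
x_b = 0.400000000000001   # a last-digit perturbation, the scale of rounding
                          # differences between compilers, libraries or hardware

for step in range(1, 81):
    x_a = 3.9 * x_a * (1.0 - x_a)
    x_b = 3.9 * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x_a - x_b):.3e}")
```

The same mechanism means individual model trajectories are not expected to be bit-for-bit reproducible across machines, even when the long-run statistics are supposed to be.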
This beautiful graph was posted at Roy Spencer’s and WattsUp, and no skeptic should miss it. I’m not sure if everyone appreciates just how piquant, complete and utter the failure is here. There are no excuses left. This is as good as it gets for climate modelers in 2013.
John Christy used the best and latest models, he used all the models available, and he graphed the period of the fastest warming, during the time humans emitted the most CO2. This is also the best data we have. If ever any model was to show the smallest skill, this would be it. None do.
Scores of models, millions of data-points, more CO2 emitted than ever before, and the models crash and burn. | Graph: John Christy. Data: KNMI.
Don’t underestimate the importance of the blue-green circles and squares that mark the “observations”. These are millions of radiosondes, and two independent satellite records. They agree. There is no wiggle room, no overlap.
Any sane modeler can only ask: “But how can the climate modelers pretend their models are working?” After all, predicting the known past with a model is not too hard; the modeler tweaks the assumptions, fiddles with the fudge […]
Clouds over Amazon forest (Rio Negro). Image NASA Earth Observatory.
What if winds were mainly driven by changes in water vapor, and those changes occurred commonly in air over forests? Forests would be the pumps that draw in moist air from over the oceans. Rather than assuming that forests grow where the rain falls, it would be more a case of rain falling where forests grow. When water vapor condenses it reduces the air pressure, which pulls in more dense air from over the ocean.
A new paper is causing a major stir. The paper is so controversial that many reviewers and editors said it should not be published. After two years of deliberations, Atmospheric Chemistry and Physics decided it was too important not to discuss.
The physics is apparently quite convincing, the question is not whether it happens, but how strong the effect is. Climate models assume it is a small or non-existent factor. Graham Lloyd has done a good job describing both the paper and the reaction to it in The Australian.
Sheil says the key finding is that atmospheric pressure changes from moisture condensation are orders of magnitude greater than previously recognised. The paper concludes “condensation […]
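For a sense of scale (a back-of-envelope sketch of the mechanism, not the calculation in the paper): the partial pressure that water vapor contributes to warm moist air is tens of hectopascals, so the pressure that can “disappear” when a good fraction of it condenses is comparable to, or larger than, the horizontal pressure differences that drive surface winds.

```python
# Back-of-envelope (our illustration, not the paper's calculation): how much
# partial pressure is removed when vapor condenses out of cooling saturated air?
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

t_surface = 27.0   # warm moist air near the surface, deg C
t_cloud = 15.0     # the same air after lifting and cooling, deg C

e_surface = saturation_vapor_pressure_hpa(t_surface)
e_cloud = saturation_vapor_pressure_hpa(t_cloud)

print(f"Vapor partial pressure at {t_surface:.0f} C: {e_surface:5.1f} hPa")
print(f"Vapor partial pressure at {t_cloud:.0f} C: {e_cloud:5.1f} hPa")
print(f"Partial pressure removed by condensation: {e_surface - e_cloud:5.1f} hPa")
```

How much of that potential pressure drop actually turns into sustained wind is exactly the point in dispute.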
Professor David Frame and Dr Daithi Stone have produced a paper claiming the IPCC predictions in 1990 were successful and seem accurate.
Those who read the actual FAR report and check the predictions against the data know that this is not so.
They ignore the main IPCC predictions (the prominent ones, with graphs, in the Summary for Policymakers). They don’t measure the IPCC’s success against an IPCC graph or within IPCC-defined “uncertainties”. They measure success against a “zero trend”: something they defined as any rise at all beyond what they say are the limits of natural variability (which they got from the very models that aren’t working too well). Circular reasoning, anyone? Frame and Stone themselves say the IPCC models didn’t include important forcings, and may have been “right” by accident.
Why did Nature publish this strawman letter? It’s an award-winning effort in selective focus, logical fallacies, and circular reasoning to be sure, but does it advance our understanding of the natural world? Not so.
Frame and Stone have produced a Letter to Nature saying that 3 is a lot like 6 (they are both larger than zero). If you ignore the Summary for Policymakers, pick a line […]
by Joanne Nova and Anthony Cox
UPDATED: See also Has the EPA done due diligence on the IPCC Report.
The theory that failed
It takes only one experiment to disprove a theory. The climate models are predicting a global disaster, but the empirical evidence disagrees. The theory of catastrophic man-made global warming has been tested from many independent angles.
The heat is missing from oceans; it’s missing from the upper troposphere. The clouds are not behaving as predicted. The models can’t predict the short term, the regional, or the long term. They don’t predict the past. How could they predict the future?
The models didn’t correctly predict changes in outgoing radiation, or the humidity and temperature trends of the upper troposphere. The single most important fact, dominating everything else, is that the ocean heat content has barely increased since 2003 (and quite possibly decreased) counter to the simulations. In a best case scenario, any increase reported is not enough. Models can’t predict local and regional patterns or seasonal effects, yet modelers add up all the erroneous micro-estimates and claim to produce an accurate macro global forecast. Most of the warming happened in a step change in 1977, […]
Yet more observations from the planet show that modelers misunderstand the water-based part of the climate, on our water-based planet.
Modelers thought that dry ground would decrease afternoon storms and rainfall over those frazzled parched lands (though I don’t remember many headlines predicting “More Drought means Fewer Storms”). But observations show that storms are more likely to rain over dry soil. Why? Probably because dry soil heats up faster than moist areas, which are cooled by evaporation, and that in turn creates stronger thermals over dry land. Modelers assumed that wetter soil means more evaporation and thus more rain, but the moisture-laden air is evidently coming from further away.
It’s another example of a point where climate modelers assume a positive feedback, yet the evidence suggests the feedback is negative. Once again water appears to be the dominant force in the feedbacks (it does cover 70% of the surface). In a natural stable system the net feedbacks are likely to be negative. Positive feedbacks make the system less stable (and scarier and harder to predict).
Climate change models misjudge drought: “A four-nation team led by Chris Taylor from Britain’s Centre for Ecology and […]
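The stability point above can be put in one line, using the textbook feedback relation (not something from the drought paper): if the no-feedback response is ΔT0 and the feedbacks return a fraction f of the output to the input, the final response is ΔT = ΔT0 / (1 - f). Negative f damps the response; positive f amplifies it and pushes the system toward instability as f approaches 1.

```python
# Textbook feedback gain: delta_T = delta_T0 / (1 - f).
# Negative f damps the response; positive f amplifies it, and the system gets
# touchier (and harder to predict) as f approaches 1.
delta_T0 = 1.0   # no-feedback response to some forcing (arbitrary units)

for f in (-0.5, 0.0, 0.3, 0.5, 0.65):
    gain = 1.0 / (1.0 - f)
    print(f"feedback f = {f:+.2f} -> gain {gain:4.2f}x -> response {delta_T0 * gain:4.2f}")
```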
Yet another paper shows that the climate models have flaws, described as “gross”, “severe” and “disturbing”. The direct effect of doubling CO2 is theoretically 3.7 W per square meter. The feedbacks supposedly are 2-3 times as strong (according to the IPCC). But some scientists are trying to figure out those feedbacks with models which have flaws on the order of 70 W per square meter. (How do we find that signal in noise that’s up to 19 times larger?)
Remember climate science is settled: like gravity and a round earth. (Really?)
Miller et al 2012 [abstract] [PDF] find that some models predict clouds to have a net shortwave radiative effect near zero, but observations show it is 70 W per square meter. Presumably, cloud shortwave radiative effect means the sunlight bounced upwards off the surface of the clouds and out into space.
What’s especially interesting about this paper is the level of detail. They test shortwave and longwave radiation, precipitation flux, integrated water vapor, liquid water path, cloud fraction, and they have observations from the top of the atmosphere and the surface. With so much information they can test models against short wave and long wave radiation, to see […]
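The signal-to-noise point above is simple arithmetic (our summary of the numbers quoted, not a calculation from the paper):

```python
# Signal vs noise, using the figures quoted above.
direct_co2_forcing = 3.7   # W/m^2 for doubled CO2
cloud_sw_error = 70.0      # W/m^2, the scale of the shortwave cloud errors reported

print(f"Error-to-signal ratio: {cloud_sw_error / direct_co2_forcing:.0f}x")
# Roughly 19x: the cloud errors dwarf the forcing whose feedbacks the models
# are supposed to pin down.
```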
Today in the Sydney Morning Herald and The Age, for the first time, David Evans has been published in the Op-Ed section. Something is going on in those newsrooms…? This article, below, simply makes the point that the models amplify the direct effect of CO2 by a factor of three, and that is where the most important uncertainties lie. This key point in the debate, which we cover repeatedly on this blog, has virtually never been made before in these newspapers, which are the major dailies for Australia’s two largest cities. Any debate about the effects of CO2 needs to start with the fact that most of the warming in the models comes from amplification by humidity and clouds. If the models were right about water vapor, we would have found that missing hot spot. – Jo PS: The SMH and The Age have both closed comments already! Have they run out of electrons? Oh my. Or were they afraid the comments looked like a debate?
UPDATE: I’ve just posted that these major dailies have “disappeared” the Muller conversion article too!
—————————————
Dr David M.W. Evans
31 Jul 2012
Climate scientists’ theories, flawed as they are, ignore […]
This is part of a series that Tony Cox and I are doing that drills down to the most important points and papers, with proper references, as a definitive resource. The models are wrong: not just “unverified”, not just “uncertain”, but proven to have failed. – Jo
Joint Post: Tony Cox and Jo Nova
Across different regions, and different time-spans over the last century, the models fail.
Koutsoyiannis and Anagnostopoulos et al. show those models can’t model the recent century, and because the models fail to predict regional and smaller-scale effects, it’s impossible that they could predict longer-term and global values.[i]
On 30 year time frames, the original observations are nothing like the model projections on a local scale.
The models should retrospectively match the actual temperatures over the past 100 years. This retrospective test is called hindcasting. If a model has valid assumptions about the climatic effect of variables such as greenhouse gases, particularly CO2, then it should be able to match the past known data.
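One common way to score a hindcast (a generic sketch, not the exact metric or data in the papers cited above) is a coefficient of efficiency: one minus the ratio of the model’s squared errors to the variance of the observations. A value near 1 means the model tracks the record; a value at or below 0 means it does no better than simply predicting the long-term mean.

```python
# Sketch of a hindcast skill score (coefficient of efficiency); illustrative only.
# CE = 1 - sum((obs - model)^2) / sum((obs - mean(obs))^2)
# CE near 1: model tracks observations.  CE <= 0: no better than the observed mean.
import numpy as np

def coefficient_of_efficiency(observed, modeled):
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return 1.0 - np.sum((observed - modeled) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Made-up series purely to show the calculation:
obs = np.array([14.1, 14.3, 14.0, 14.4, 14.6, 14.2, 14.5])
model = np.array([14.0, 14.1, 14.3, 14.2, 14.3, 14.4, 14.4])
print(f"CE = {coefficient_of_efficiency(obs, model):.2f}")
```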
“…all the models were “irrelevant with reality” at the 30 year climate scale…”
When tested, the global climate models failed to […]
Antarctic Glacier Image: Paomic
It comes as not even a tiny surprise that when someone asks “Where does all the CO2 go in an ice age?” the answer is “The ocean”.
We already know temperatures rise 800 years before CO2 levels (Caillon 2003), and we know the oceans contain 50 times as much CO2 as the sky. Moreover, basic chemistry tells us that CO2 (like all gases) will dissolve better in cold water, and be released as the water warms. To cap it all off, the deep abyss of the oceans turns over once every millennium or so (which fits loosely with the “lag” between temperature and CO2 levels).
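The “cold water holds more CO2” point can be made quantitative with Henry’s law (a standard textbook relation; the constants below are typical literature values, not numbers from the study discussed here): the solubility constant rises as the water cools, roughly exponentially in 1/T.

```python
# Henry's law sketch for CO2 solubility vs water temperature (typical literature
# values, illustrative only): k_H(T) = k_H(298K) * exp(C * (1/T - 1/298.15)).
import math

K_H_298 = 0.034   # mol per litre per atm of CO2, at 25 C
C = 2400.0        # K, temperature-dependence parameter

def henry_constant(t_celsius):
    t_kelvin = t_celsius + 273.15
    return K_H_298 * math.exp(C * (1.0 / t_kelvin - 1.0 / 298.15))

for t in (2, 10, 18, 25):
    print(f"{t:2d} C: k_H = {henry_constant(t):.3f} mol/(L*atm)")
# Water at 2 C holds roughly twice as much dissolved CO2 per unit of partial
# pressure as water at 25 C, which is why a warming ocean tends to give CO2 back.
```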
But you would think this new research was solving a deep mystery, rather than confirming what most sane knowledgeable people would expect. Nonetheless, this may be the first detailed study of C13 levels going back 24,000 years.
CO2 was hidden in the ocean during the Ice Age
EurekaAlert
Why did the atmosphere contain so little carbon dioxide (CO2) during the last Ice Age 20,000 years ago? Why did it rise when the Earth’s climate became warmer? Processes in the ocean are responsible for this, says […]
You’ll find this hard to believe but I get excited about the 1990 First Assessment Report (FAR). It’s very different from wading through the later ones, because it’s remarkably honest, and things are not hidden in double-speak (well, not so much). Scientists behave like scientists and talk of null hypothesis, and even of validating models. Indeed they had a whole chapter back then called “validation”. How times have changed.
This is the short summary of Chapter 8 “Attribution”
Thanks to Alan for sending me this link today (Chapter 8, IPCC FAR).
The “Attribution” Chapter is the part where they try to figure out what “caused” the warming. Chapter 8 says, essentially, “we don’t know, we might never know, our models don’t work, and we can conclude it might all be natural, but then again, it might not.” Got it?
This is in the same era that Al Gore was saying “the science is settled” and “there is no debate”.
What’s clear from the 1990 FAR is that it was widely admitted that the models were bodgy, and that figuring out exactly what caused the recent warming was very difficult, indeed impossible at the time. There were too many variables, […]
Dr Andrew Glikson (an Earth and paleoclimate scientist at the Australian National University) contacted Quadrant offering to write about the evidence for man-made global warming. Quadrant approached me asking for my response. Dr Glikson replied to my reply, and I replied again to him (copied below). No money exchanged hands, but Dr Glikson is, I presume, writing in an employed capacity, while I write pro bono. Why is it that the unpaid, self-taught commentator needs to point out the evidence he doesn’t seem to be aware of? Why does a PhD need to be reminded of basic scientific principles (like, don’t argue from authority)? Such is the vacuum of funding for other theories that a debate that ought to happen inside the university obviously hasn’t occurred. Such is the decrepit, anaemic state of university science that even a doctorate doesn’t guarantee a scientist can reason. Where is the rigor in the training, and the discipline in the analysis?
Credibility lies on evidence
by Joanne Nova
April 29, 2010
Reply to Andrew Glikson
Dr Andrew Glikson still misses the point, and backs his arguments with weak evidence and logical errors. Instead of empirical evidence, often […]
One of the main arguments from the IPCC is that essentially, we can’t explain temperature changes any other way than with carbon forcings. This is matched with impressive pink and blue graphs that pose as evidence that carbon is responsible for all the recent warming.
This is argumentum ad ignorantiam — essentially they say: we don’t know what else could have caused that warming, so it must be carbon. It’s a flawed assumption.
It’s easy to create impressive graphs, especially if you actively ignore other possible causes, like for example, changes in cloud cover and solar magnetic effects.
Carbon dioxide only causes 1.1°C of warming if it doubles. That’s according to the IPCC. Did you know?
The real game is water.
Researchers made guesses about humidity and clouds in the early 1980s and they built these guesses into their models. We now know they were wrong, not about carbon, but about water in the form of humidity and clouds. Here’s how the models can be right about carbon and wrong about the climate.
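The 1.1°C figure follows from two commonly cited numbers (a back-of-envelope sketch, not a model calculation): the forcing from doubled CO2, about 5.35 × ln(2) ≈ 3.7 W/m², and a no-feedback response of roughly 0.3°C per W/m². Everything beyond that comes from the assumed amplification by water vapor and clouds.

```python
# Back-of-envelope: direct (no-feedback) warming from doubled CO2, using
# commonly cited values; illustrative arithmetic, not a model run.
import math

forcing_2xco2 = 5.35 * math.log(2)   # ~3.7 W/m^2, standard simplified expression
planck_response = 0.3                # ~deg C per W/m^2 with no feedbacks

direct_warming = forcing_2xco2 * planck_response
print(f"Forcing for doubled CO2: {forcing_2xco2:.2f} W/m^2")
print(f"No-feedback warming:     {direct_warming:.2f} C")
print(f"With ~3x amplification:  {3 * direct_warming:.2f} C  (roughly the models' answer)")
```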
Thanks to Eric Raymond, famous computer guru and leader of the open-source movement, writing at ESR, we can see what those sophisticated climate modelers were doing. The code has been found in the leaked files, and Eric’s comment is:
This isn’t just a smoking gun, it’s a siege cannon with the barrel still hot.
Here’s the code. The programmer has written in helpful notes that we non-programmers can understand, like this one: “Apply a very artificial correction for decline”. You get the feeling this climate programmer didn’t like pushing the data around so blatantly. Note the technical comment: “fudge factor”.
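For readers who don’t read code, here is a hypothetical sketch written by us in Python of what an additive “correction” of that kind looks like in general terms; it is not the leaked code, and the numbers are invented.

```python
# Hypothetical illustration only (NOT the leaked code; numbers are invented):
# what "applying an artificial correction" to a series looks like in general.
import numpy as np

years = np.arange(1940, 2000)
reconstruction = np.random.default_rng(2).normal(0.0, 0.2, years.size)

# A hand-chosen adjustment ("fudge factor") added on top of the data,
# ramping up over the later decades to offset an unwanted decline.
adjustment = np.interp(years, [1940, 1960, 1980, 2000], [0.0, 0.0, 0.4, 1.2])
corrected = reconstruction + adjustment

print(f"Late-period mean before: {reconstruction[-20:].mean():+.2f}")
print(f"Late-period mean after:  {corrected[-20:].mean():+.2f}")
```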
Undoubtedly the best summary of the current state of affairs is the SPPI monthly CO2 report. The April report contains news that, if there were a free and high-quality media, would have generated headlines like these (well, sort of; you get the idea).
Any investigative journalist who was doing their job only had to Google for the other side of the story. I’m not saying those journalists have to agree with us, just that, at the moment, most environmental writers think “balanced” means saying, “The world will cook: the question is, lightly toasted or totally pan-fried”.
Here’s the counter summary of the headlines we didn’t see, accompanied by an analysis you probably won’t see anywhere else.
Planet Unmoved by IPCC Forecast
Despite the power of the authority vested in the Intergovernmental Panel on Climate Change (IPCC), The Planet appears to be unswayed by the large, well-funded international bureaucracy, and is similarly immune to following the collected wisdom of the software engineers who compress its 1100 billion cubic kilometers of complexity into a PC.