Two papers on ocean heat released together today. The first says the missing heat is not in the deep ocean abyss below 2000m. The second finds the missing heat in missing data in the Southern Hemisphere instead. Toss out one excuse, move to another.
The first paper, by Llovel, Willis et al., looked at the total sea-level rise as measured by adjusted satellites*, then removed the part of that rise due to thermal expansion of the warming oceans above 2,000 m and the part due to ice melting off glaciers and ice-sheets.** The upshot is that the bottom half of the ocean is apparently not warming — there was nothing much left for the deep oceans to do. This result comes from Argo buoy data, which went into full operation in 2005. (Before Argo, the uncertainties in ocean temperature measurements massively outweighed the expected temperature changes, so the “data” is pretty useless.)
Figure 2 | Global mean steric sea-level change contributions from different layers of the ocean. 0–2,000m (red), 0–700m (green), 700–2,000m (blue). The dashed black curve shows an estimate for the remainder of the ocean below 2,000m computed by removing the 0–2,000m estimate from the GRACE-corrected observed mean sea-level time series. [...]
Finally, for only the 87th time, climate modellers have uncovered the definitive proof they’ve been finding in different forms every year since 1988.
ARC extreme unscience – corrected at no cost to the Australian taxpayer.
They seek, and find, the most excellent propaganda they can pretend is science. Look, this is the specific handprint of non-specific climate-change! Everything bar climate-sameness is proof the climate changes. How inane? The unscientific vagueness gives this poster away as being more about propaganda than about communication of science.
… in a special edition of the Bulletin of the American Meteorological Society, examining extreme events around the world during 2013, a series of papers home in on the Australian heat waves, and identify a human influence.
Using short, noisy records, with flawed and adjusted data, it is possible to run broken climate models and show “definitively” that current heat-waves and hottest years are due to man-made emissions. And if you believe that, you could be gullible enough to be a Guardian journalist.
That is, climate models that do not include solar factors like magnetic fields, solar winds, cosmic rays, solar spectral changes, or lunar effects are able to [...]
You won’t believe… Research shows surprise global warming ‘hiatus’ could have been forecast
[The Guardian] Australian and US climate experts say with new ocean-based modelling tools, the early 2000s warming slowdown was foreseeable. Australian and US researchers have shown that the slowdown in the rate of global warming in the early 2000s, known as a so-called “global warming hiatus”, could have been predicted if today’s tools for decade-by-decade climate forecasting had been available in the 1990s.
And I’ve got a model that would have predicted the 1987 stock market crash, the GFC, and the winner of the Melbourne Cup. What I would not have predicted is that lame excuses this transparent would be made by people calling themselves scientists (Gerald Meehl) and repeated by people calling themselves journalists. (That’s you, Melissa Davey.) Though I’m not surprised that research this weak had to be published by Nature. (Where else?)
Although global temperatures remain close to record highs, they have shown little warming trend over the past 15 years, a slowdown that earlier climate models had been largely unable to predict.
This has been used by climate change sceptics as evidence that climate change prediction models are flawed.
Imagine that, the stupid [...]
Remember how CO2 is supposed to cause warmer winters, and warmer nights? Well now CO2 also produces cold snaps. No matter what weather you get, there is a citation to blame CO2. Nature (the formerly great science journal) and Northeastern University have produced another permutation of outputs from models we know are broken.
The first line in the press release is false and smugly so: “most scientists — 97 percent of them, to be exact — agree that the temperature of the planet is rising and that the increase is due to human activities….” Ten seconds on Google would have shown that 60% of geoscientists and engineers don’t agree.
If Kodra and co were trying to be accurate, they could have said “97% of anointed climate scientists agree…”. If they were trying to be scientific, of course, they wouldn’t mention a consensus at all. If they had good evidence, they’d talk about that instead.
They dug deep in The-Book-of-Cliches for the press release. Strip away the advertising spin and I think this is the nub of the work:
“While global temperature is indeed increasing, so too is the variability in temperature extremes. For instance, while each [...]
We could spend hours analyzing the new IPCC report about the impacts of climate change. Or we could just point out:
Everything in the Working Group II report depends entirely on Working Group I.
( see footnote 1 SPM, page 3).
Working Group I depends entirely on climate models and 98% of them didn’t predict the pause.
The models are broken. They are based on flawed assumptions about water vapor.
Working Group I, remember, was supposed to tell us the scientific case for man-made global warming. If our emissions aren’t driving the climate towards a catastrophe, then we don’t need to analyze what happens during the catastrophe we probably won’t get. This applies equally to War, Pestilence, Famine, Drought, Floods, Storms, and Shrinking Fish (which, keep in mind, could have led to the ultimate disaster: shrinking fish and chips).
To cut a long story short, the 95% certainty of Working Group I boils down to climate models, and 98% of them didn’t predict the pause in surface temperature trends (von Storch 2013). Even under the most generous interpretation, models are proven failures, 100% right except for rain, [...]
The backdown continues. Faced with the ongoing failure of their models, the search rolls on for any factor that helps “explain” why the official climate scientists are still right even though they got it so wrong. The new England et al paper endorses skeptics in so many ways.
The world might warm by only 2.1 degrees this century, not 4°C. (Skeptics were right — the models exaggerate.) There has been and is a pause in warming which the 95%-certain models didn’t predict. (The science wasn’t settled.) What the trade-winds giveth, they can also taketh away. If they “cause cooling” after 2000, then they probably “caused warming” before that. How much less important is CO2? Ultimately, newer models are less wrong if they include changes in wind speed, but they don’t know what drives the wind. It’s curve fitting with one more variable.
As usual, the models still can’t predict the climate, but they can be adjusted post hoc with new factors to trim their overestimates back to within the error bars of some observations.
As I said nearly 2 years ago, Matthew England owes Nick Minchin an apology:
Nick Minchin: ” there is a major problem with the warmist argument [...]
Joint Post: Geoff Sherrington and JoNova
The IPCC Synthesis Report first order draft has been leaked (h/t Tallbloke). It is part of the big Fifth Assessment Report; see the parts already released here. The Synthesis Report supposedly summarizes the science. In the real world the topic du jour is the plateau, pause, or hiatus in warming, which the IPCC can no longer ignore. Instead the masters of keyword phrases test new bounds in saying things that are technically correct, while not stating the bleeding obvious. Luckily we are here to help them. :-)
“The rate of warming of the observed global-mean surface temperature has been smaller over the past 15 years (1998-2012) than over the past 30 to 60 years (Figure SYR.1a; Box SYR.1) and is estimated to be around one-third to one-half of the trend over the period 1951–2012. Nevertheless, the decade of the 2000s has been the warmest in the instrumental record (Figure SYR.1a).”
Translated: Yes, temperatures are not rising as fast as we predicted, even though more CO2 was pumped out faster than ever. Let’s ignore that this shows the models were wrong; the important thing is to [...]
Nicola Scafetta has a new paper (in a long line of papers) on a semi-empirical model which has a better fit than the General Circulation Models (GCMs) favored by the IPCC. We ought to be careful not to read too much into it, but nor to ignore the message in it about the grand failure of the GCMs. Scafetta used Fourier analysis to find six cycles, then uses those six cycles to produce a climate model he runs for as long as 2,000 years, which seems to match the best multiproxies. In terms of discovering the absolute truth about the climate, this is not an end-point way to use Fourier analysis, as it is just “curve fitting”. With six flexible cycle frequencies (plus amplitudes and phases) there are 18* tuneable parameters, more than enough to model any wiggly line on a graph, and there are scores of astronomical cycles to pick from. *[Nicola Scafetta replies to this below, pointing out he uses the “6 major detected astronomical oscillations”, and their phases are fixed. I am happy to be corrected. His model is more useful than I thought. Apologies for the misunderstanding. – Jo]
But Scafetta’s work suggests it’s madness not to [...]
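To see just how easy this kind of curve fitting is, here is a toy sketch — my own synthetic data and made-up periods, not Scafetta’s actual astronomical cycles: with a handful of fixed-frequency sinusoids plus an offset and a trend, ordinary least squares will happily track almost any wiggly line.

```python
import numpy as np

def fit_cycles(t, y, periods):
    """Least-squares fit of a sum of sinusoids with fixed periods.

    Each period P contributes sin(2*pi*t/P) and cos(2*pi*t/P) columns
    (equivalent to one amplitude and one phase), plus a constant offset
    and a linear trend. Returns the coefficients and the fitted curve.
    """
    cols = [np.ones_like(t), t]              # offset and linear trend
    for P in periods:
        w = 2 * np.pi / P
        cols += [np.sin(w * t), np.cos(w * t)]
    X = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Synthetic "temperature" series: two known cycles plus noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 150.0, 1.0)               # 150 yearly samples
y = 0.3 * np.sin(2 * np.pi * t / 60) + 0.2 * np.cos(2 * np.pi * t / 20)
y += 0.05 * rng.standard_normal(t.size)

# Six assumed periods give 2 + 2*6 = 14 free parameters: plenty of
# flexibility to track almost any smooth wiggly line on a graph.
coef, fitted = fit_cycles(t, y, periods=[9.1, 10.4, 20, 21, 60, 115])
print(f"residual std: {(y - fitted).std():.3f}  (raw std: {y.std():.3f})")
```

The fit “works” whether or not the chosen periods mean anything physically, which is exactly why a good fit alone proves little.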
And the public conversation finally starts to move on to discussing not whether the IPCC is wrong, but why it was wrong, and what we need to do about it. Credit to Judith Curry and the Financial Post. I’ve posted a few paragraphs here. The whole story is in the link at the top. – Jo
Judith A. Curry, Special to Financial Post
Kill the IPCC: After decades and billions spent, the climate body still fails to prove humans behind warming
The IPCC is in a state of permanent paradigm paralysis. It is the problem, not the solution
The IPCC has given us a diagnosis of a planetary fever and a prescription for planet Earth. In this article, I provide a diagnosis and prescription for the IPCC: paradigm paralysis, caused by motivated reasoning, oversimplification, and consensus seeking; worsened and made permanent by a vicious positive feedback effect at the climate science-policy interface.
In its latest report released Friday, after several decades and expenditures in the bazillions, the IPCC still has not provided a convincing argument for how much warming in the 20th century has been caused by humans.
We tried a simple solution for a [...]
Finally climate scientists are starting to ask how the models need to change in order to fit the data. Hans von Storch, Eduardo Zorita and co-authors in Germany pointedly acknowledge that the model predictions don’t match reality. The model simulations predicted it would get warmer over 1998–2012 than it did. Some climate scientists now admit that, on the models’ own assumptions, there is less than a 2% chance that the models are compatible with the 15-year warming pause.
In a brief paper they go on to suggest three ways the models could be failing, but draw no conclusions. For the first time I can recall, the possibility that the data might be wrong is not even mentioned. It has been the excuse du jour for years.
Note in the chart that while the 10-year “pause” passed the basic 5% test of statistical significance, by 13 years the pause was so long that only 2% of CMIP5 or CMIP3 model simulations could be said to agree with reality. By 16 years that will be 1% of simulations. If the pause continues for 20 years, there would be “zero” [...]
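For a feel for how this kind of consistency test works, here is a toy sketch with made-up numbers — not the actual CMIP trend distributions: draw an ensemble of simulated 15-year trends and ask what fraction are as small as the trend actually observed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustrative numbers only (not the CMIP archive):
# an ensemble of simulated 15-year trends centred on 0.21 C/decade,
# versus an observed 15-year trend near 0.06 C/decade.
simulated_trends = rng.normal(loc=0.21, scale=0.08, size=1000)
observed_trend = 0.06

# Fraction of simulations with a trend as small as the one observed.
frac = np.mean(simulated_trends <= observed_trend)
print(f"{100 * frac:.1f}% of simulated trends are <= the observed trend")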
How bad are these global forecast models?
When the same model code with the same data is run in a different computing environment (hardware, operating system, compiler, libraries, optimizer), the results can differ significantly. So even if reviewers or critics obtained a climate model, they could not replicate the results without knowing exactly what computing environment the model was originally run in.
This raises that telling question: What kind of planet do we live on? Do we have an Intel Earth or an IBM one? It matters. They get different weather, apparently.
There is a chaotic element (or two) involved, and the famous random butterfly effect on the planet’s surface is also mirrored in the way the code is handled. There is a binary butterfly effect. But don’t for a moment think that this “mirroring” is useful: these are different butterflies, and two random events don’t produce order, they produce chaos squared.
How important are these numerical discrepancies? Obviously they undermine our confidence in climate models even further. We can never be sure how much the rising temperature in a model forecast might change if we moved to a different computer. (Though, since we already know the models [...]
This beautiful graph was posted at Roy Spencer’s and WattsUp, and no skeptic should miss it. I’m not sure if everyone appreciates just how piquant, complete and utter the failure is here. There are no excuses left. This is as good as it gets for climate modelers in 2013.
John Christy used the best and latest models, he used all the models available, he has graphed the period of the fastest warming and during the times humans have emitted the most CO2. This is also the best data we have. If ever any model was to show the smallest skill, this would be it. None do.
Scores of models, millions of data-points, more CO2 emitted than ever before, and the models crash and burn. | Graph: John Christy. Data: KNMI.
Don’t underestimate the importance of the blue-green circles and squares that mark the “observations”. These are millions of radiosondes, and two independent satellite records. They agree. There is no wiggle room, no overlap.
Any sane modeler can only ask: “But how can the climate modelers pretend their models are working?” After all, predicting the known past with a model is not too hard; the modeler tweaks the assumptions, fiddles with the fudge [...]
Clouds over Amazon forest (Rio Negro). Image NASA Earth Observatory.
What if winds were mainly driven by changes in water vapor, and those changes occurred commonly in air over forests? Forests would be the pumps that draw in moist air from over the oceans. Rather than assuming that forests grow where the rain falls, it would be more a case of rain falling where forests grow. When water vapor condenses it reduces the air pressure, which pulls in more dense air from over the ocean.
A new paper is causing a major stir. The paper is so controversial that many reviewers and editors said it should not be published. After two years of deliberations, Atmospheric Chemistry and Physics decided it was too important not to discuss.
The physics is apparently quite convincing; the question is not whether it happens, but how strong the effect is. Climate models assume it is a small or non-existent factor. Graham Lloyd has done a good job describing both the paper and the reaction to it in The Australian.
Sheil says the key finding is that atmospheric pressure changes from moisture condensation are orders of magnitude greater than previously recognised. The paper concludes “condensation [...]
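A back-of-envelope sketch of the mechanism, using my own illustrative numbers rather than anything from the paper: by the ideal gas law, condensing the water vapour out of an air parcel removes gas molecules, so at fixed volume and temperature the parcel’s pressure falls by the vapour’s partial pressure.

```python
# Back-of-envelope sketch (illustrative numbers, not the paper's):
# by the ideal gas law p = n*R*T/V, condensation removes gas moles from
# a parcel, so at fixed volume and temperature the total pressure drops
# in proportion to the moles of vapour removed.
p_total = 101_325        # Pa, total parcel pressure near the surface
vapour_fraction = 0.02   # assume ~2% of the parcel's moles are vapour

# Partial pressure of the vapour; if all of it condenses at fixed V, T,
# the total pressure falls by this amount.
p_vapour = vapour_fraction * p_total
print(f"pressure drop if the vapour condenses: {p_vapour:.0f} Pa "
      f"({100 * p_vapour / p_total:.1f}% of total)")
```

A drop of a couple of thousand pascals is plenty to pull in denser air from elsewhere; the argument in the paper is about how strongly this operates in the real atmosphere.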
Professor David Frame and Dr Daithi Stone have produced a paper claiming the IPCC predictions made in 1990 were successful and accurate.
Those who read the actual FAR report and check the predictions against the data know that this is not so.
They ignore the main IPCC predictions (the prominent ones, with graphs, in the Summary for Policymakers). They don’t measure the IPCC’s success against an IPCC graph or within IPCC-defined “uncertainties”. They measure success against a “zero trend” — something they define as any rise at all beyond what they say are the limits of natural variability (which they got from the very models that aren’t working too well). Circular reasoning, anyone? Frame and Stone themselves say the IPCC models didn’t include important forcings, and may have been “right” by accident.
Why did Nature publish this strawman letter? It’s an award-winning effort in selective focus, logical fallacies, and circular reasoning to be sure, but does it advance our understanding of the natural world? Not so.
Frame and Stone have produced a Letter to Nature saying that 3 is a lot like 6 (they are both larger than zero). If you ignore the Summary for Policymakers, pick a line [...]
Yet more observations from the planet show that modelers misunderstand the water based part of the climate – on our water based planet.
Modelers thought that dry ground would decrease afternoon storms and rainfall over those frazzled, parched lands (though I don’t remember many headlines predicting “More Drought Means Fewer Storms”). But observations show that storms are more likely to rain over dry soil. Why? Probably because dry soil heats up faster than moist areas (which are cooled by evaporation), and that in turn creates stronger thermals over dry land. Modelers assumed that wetter soil means more evaporation and thus more rain, but the moisture-laden air is evidently coming from further away.
It’s another example of a point where climate modelers assume a positive feedback, yet the evidence suggests the feedback is negative. Once again water appears to be the dominant force in the feedbacks (it does cover 70% of the surface). In a naturally stable system the net feedbacks are likely to be negative. Positive feedbacks make the system less stable (and scarier and harder to predict).
Climate change models misjudge drought: “A four-nation team led by Chris Taylor from Britain’s Centre for Ecology and [...]
Yet another paper shows that the climate models have flaws, described as “gross”, “severe” and “disturbing”. The direct effect of doubling CO2 is theoretically 3.7 W per square meter. The feedbacks are supposedly 2–3 times as strong (according to the IPCC). But some scientists are trying to figure out those feedbacks with models which have flaws on the order of 70 W per square meter. (How do we find that signal in noise that’s up to 19 times larger?)
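For the record, the 19 comes straight from the arithmetic:

```python
# Ratio of the reported model flaw to the direct CO2 doubling forcing.
direct_forcing = 3.7   # W/m^2, theoretical direct effect of doubling CO2
model_flaw = 70.0      # W/m^2, size of the cloud flaws reported
print(f"noise/signal ratio: {model_flaw / direct_forcing:.1f}")  # → 18.9
```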
Remember climate science is settled: like gravity and a round earth. (Really?)
Miller et al 2012 [abstract] [PDF] find that some models predict clouds to have a net shortwave radiative effect near zero, but observations show it is 70 W per square meter. Presumably, cloud shortwave radiative effect means the sunlight bounced upwards off the surface of the clouds and out into space.
What’s especially interesting about this paper is the level of detail. They test shortwave and longwave radiation, precipitation flux, integrated water vapor, liquid water path, cloud fraction, and they have observations from the top of the atmosphere and the surface. With so much information they can test models against short wave and long wave radiation, to see [...]
Today in the Sydney Morning Herald and The Age, for the first time, David Evans has been published in the Op-Ed section. Something is going on in those newsrooms…? This article, below, simply makes the point that the models amplify the direct effect of CO2 by a factor of three and that is where the most important uncertainties lie. This key point in the debate — which we cover repeatedly on this blog — has virtually never been made before in these newspapers, which are the major dailies for Australia’s two largest cities. Any debate about the effects of CO2 needs to start with the fact that most of the warming in the models comes from amplification of humidity and clouds. If the models were right about water vapor, we would have found that missing hot spot. – Jo PS: The SMH and The Age have both closed comments already! Have they run out of electrons? Oh my? Or were they afraid the comments looked like a debate?
UPDATE: I’ve just posted that these major dailies have “disappeared” the Muller conversion article too!
Dr David M.W. Evans
31 Jul 2012
Climate scientists’ theories, flawed as they are, ignore [...]
This is part of a series that Tony Cox and I are doing that drills down to the most important points and papers, with proper references, as a definitive resource. The models are wrong: not just “unverified”, not just “uncertain”, but proven to have failed. — Jo
Joint Post: Tony Cox and Jo Nova
Across different regions, and different time-spans over the last century, the models fail.
Koutsoyiannis and Anagnostopoulos et al show those models can’t model the last century, and because the models fail to predict regional and smaller-scale effects, it’s impossible that they could predict longer-term and global values.[i]
On 30-year time frames, the observations are nothing like the models’ projections on a local scale.
The models should retrospectively match the actual temperature over the past 100 years. This test of retrospectivity is called hindcasting. If a model has valid assumptions about the climatic effect of variables such as greenhouse gases, particularly CO2, then the model should be able to match past known data.
“…all the models were “irrelevant with reality” at the 30 year climate scale…”
When tested, the global climate models failed to [...]
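A sketch of what a hindcast skill test looks like, using synthetic data rather than Koutsoyiannis’s actual series: a Nash–Sutcliffe-style coefficient of efficiency scores 1 for a perfect hindcast, 0 for a model that does no better than simply quoting the observed mean, and negative for anything worse.

```python
import numpy as np

def efficiency(obs, mod):
    """Nash-Sutcliffe-style coefficient of efficiency: 1 is a perfect
    hindcast, 0 means no better than the observed mean, negative means
    worse than the mean."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    return 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Synthetic stand-ins for a century of annual observations and a model
# hindcast that tracks only the long-term mean, missing the variability.
rng = np.random.default_rng(1)
obs = 14.0 + 0.4 * np.sin(np.arange(100) / 8.0) + 0.1 * rng.standard_normal(100)
mod = np.full(100, obs.mean())          # a "climatology only" hindcast

print(f"efficiency of the flat hindcast: {efficiency(obs, mod):.2f}")
```

The flat hindcast scores exactly zero; a model that hindcasts worse than the mean of the data scores below zero, which is the damning result the papers report at local and 30-year scales.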
You’ll find this hard to believe but I get excited about the 1990 First Assessment Report (FAR). It’s very different from wading through the later ones, because it’s remarkably honest, and things are not hidden in double-speak (well, not so much). Scientists behave like scientists and talk of null hypothesis, and even of validating models. Indeed they had a whole chapter back then called “validation”. How times have changed.
This is the short summary of Chapter 8 “Attribution”
Thanks to Alan for sending me this link today (Chapter 8, IPCC FAR).
The “Attribution” Chapter is the part where they try to figure out what “caused” the warming. Chapter 8 says, essentially, “we don’t know, we might never know, our models don’t work, and we can conclude it might all be natural, but then again, it might not.” Got it?
This is in the same era that Al Gore was saying “the science is settled” and “there is no debate”.
What’s clear in 1990 from the FAR was that it was widely admitted that the models were bodgy, and that figuring out exactly what caused the recent warming was very difficult, indeed impossible at the time. There were too many variables, [...]
Dr Andrew Glikson (an Earth and paleoclimate scientist at the Australian National University) contacted Quadrant offering to write about the evidence for man-made global warming. Quadrant approached me asking for my response. Dr Glikson replied to my reply, and I replied again to him (copied below). No money exchanged hands, but Dr Glikson is, I presume, writing in an employed capacity, while I write pro bono. Why is it that the unpaid, self-taught commentator needs to point out the evidence he doesn’t seem to be aware of? Why does a PhD need to be reminded of basic scientific principles (like, don’t argue from authority)? Such is the vacuum of funding for other theories that a debate that ought to happen inside the university obviously hasn’t occurred. Such is the decrepit, anaemic state of university science that even a doctorate doesn’t guarantee a scientist can reason. Where is the rigor in the training, and the discipline in the analysis?
Credibility lies on evidence
by Joanne Nova
April 29, 2010
Reply to Andrew Glikson
Dr Andrew Glikson still misses the point, and backs his arguments with weak evidence and logical errors. Instead of empirical evidence, often [...]