New Science 21: The mysterious Notch in the Sun-Earth relationship — the dog that didn’t bark

The notch in the Sun-Earth relationship is the dog that didn’t bark — the clue that was there all along, telling us something about the way the Sun influences Earth’s climate. There is a flicker of extra energy coming in at the peak of every solar cycle — roughly every 11 years. It’s only a small peak, but there is no corresponding warming on Earth at all — it’s as if the energy vanished. A good skeptic would ask: the increase in energy is so small, how could we find its effect among the noise? The answer is that Fourier maths is so good at this that it is used every day to find GPS signals, which (as David details below) are so much smaller than the noise that they are much harder to find than this signal from the Sun.

Thousands of engineers know about and use Fourier maths and notch filters, but due to a strangely one-sided bureaucratic funding model, none of those thousands of experts have applied that knowledge, which is so well adapted to feedback systems, to the Sun-Earth energy flows. David has used an input-output “black box” method to find the empirical transfer function and discover the notch. Viva the independent scientist, supported only by independent donations — at a fraction of the cost of the billion-dollar models, David Evans has done something in three years which none of the bureaucrat-driven golden icons have managed in thirty years.

Is the notch real? It shows up independently in different eras and different datasets (see Fig. 2). What does it mean? Something is occurring which is “tuned” to the solar cycles, changing the way the Earth reacts to incoming solar radiation just as that radiation peaks. The mechanism must originate on the Sun, because the timing is too accurate, and that unknown mechanism evidently has an influence on Earth’s temperature (maybe through clouds). Obviously, to build a full climate model we need to understand it.

Notch filters are used in electronics to filter out “hum”. Notch filters usually do not involve a delay, but they can, which alerted us to the possibility of a delay. This eventually led to the discovery of an 11-year (or half a solar cycle) delay, originating on the Sun, between the solar peaks in sunlight and the factor that neutralizes their effect. David discusses a few possible mechanisms in a later post. He finds evidence suggesting that this indirect effect of the Sun is ~14 times more powerful at driving changes in our climate than the influence of variations in direct solar heating. Something about the Sun, some force, is changing conditions on Earth in a way that conventional climate models don’t understand. They have been “plugging the gap” with CO2 over the last 50 years, but they can’t possibly work until they understand this missing key. In this post we start the hunt.

Thank you to those who keep us going. Together we hope to advance our understanding of what controls the climate…

— Jo

______________________________________________________________________________

21. The Notch in the Empirical Transfer Function

Dr David Evans, 27 November 2015, Project home, Intro, Previous, Next.

This post begins the search for the cause of global warming. This is the most mathematical post of the solar part of this series.

We start by finding the empirical transfer function from total solar irradiance (TSI) to surface temperature — which tells us how much surface warming we get per increase in TSI, at each frequency. We find an unexpected notch therein, and discuss its implications.

Our Formal System

For simplicity while searching for a relationship between TSI S and the surface temperature TS, we assume that the TSI is the only influence on TS. As discussed in the previous post, it is plausible that TSI is a dominant cause of global warming — or more precisely, contains information about a dominant cause because the mechanism is indirect. If TSI mostly predicts TS, and there is a strong and obvious relationship between them, then this assumption is adequate for the exploratory analysis here.

Formally, consider the system whose input is the TSI anomaly at 1 AU (the change in TSI at the average distance of the Earth from the Sun, one astronomical unit), denoted by ΔS, and whose output is the mean global surface temperature anomaly, ΔTS. Both ΔS and ΔTS are functions of time.



Figure 1: The formal system under consideration: surface warming 100% controlled by changes in TSI.

 

Some Background on Transfer Functions

We need to explain the empirical transfer function of interest here, and what it means. This can get technical, but you don’t really need to know the details to understand the implications for climate.

– The Transfer Function of Interest

The empirical transfer function discussed below is the ratio of surface warming to change in TSI, or how much surface warming we get per unit of increase in TSI, by frequency.

Electromagnetic radiation behaves independently at each frequency in a linear environment such as air or space, which is why we talk of UV, visible light, microwaves, infrared and radio waves and know they do not interact or interfere with each other. Keep on decreasing the frequency, from millions of cycles per second for radio waves down to fractions of a cycle per year, and we get the fluctuations in radiation coming off the Sun on climate time-scales. The formal system above is linear, so the behavior at each frequency is independent of other frequencies, even for the fluctuations on climate time-scales.

Our system is assumed to be linear, because we are only dealing with small perturbations to a stable climate near steady state, and any classical physical system is approximately linear for sufficiently small perturbations. The climate is widely assumed to be linear for warming influences during an interglacial. The system is also assumed to be (time-) invariant, that is, its properties have not changed significantly during the current interglacial.

– Transfer Functions Generally

Linear invariant systems have a special property. If the input is a sinusoid (see the systems document) then the output is also a sinusoid at the same frequency and, for a given system at a given frequency, the ratio of the amplitude of the output sinusoid to the amplitude of the input sinusoid is constant, and the difference between the phases of the output and input sinusoids is also a constant. Linear invariant systems are amenable to Fourier analysis: the input can be expressed as a sum of sinusoids using the Fourier transform, and then, by the linearity of the system, each of the input sinusoids maps to the same output sinusoid that it would if it were the only input present — that is, what happens at each frequency is independent of what happens at other frequencies. This is the only significance of sinusoids in analyzing systems.

A linear invariant system is completely described by its transfer function, which is a function of frequency whose value at each frequency consists of the amplitude multiplier and the phase shift caused by the system. Complex numbers are used to represent sinusoids: both complex numbers and sinusoids have amplitudes and phases, and a sinusoid is simply represented by the complex number with the same amplitude and phase. At a given frequency, the value of the transfer function is the complex number whose:

  • Amplitude is the amplitude of the output sinusoid at that frequency divided by the amplitude of the input sinusoid at that frequency.
  • Phase is the phase of the output sinusoid at the frequency less the phase of the input sinusoid at that frequency.

The value of the transfer function at a given frequency is the ratio of the complex number representing the output sinusoid to the complex number representing the input sinusoid, using complex division. The Fourier transform of the system output is the complex product of the transfer function and the Fourier transform of the system input.
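To make this concrete, here is a minimal sketch in Python (an illustrative toy, not the analysis used below): drive a simple linear invariant system with a few sinusoids, then read its transfer function off the ratio of the output spectrum to the input spectrum. The filter constant, frequencies and record length are arbitrary choices for the example.

```python
import numpy as np

# A simple linear, time-invariant system: the first-order low-pass filter
# y[n] = a*y[n-1] + (1-a)*x[n], whose theoretical transfer function is
# H(f) = (1-a) / (1 - a*exp(-2j*pi*f)) for f in cycles per sample.
a = 0.9
N = 4096
n = np.arange(N)

# Drive it with sinusoids at a few known frequencies (whole numbers of cycles
# per record, so each sits exactly on a DFT bin).
bins = (41, 123, 410)
x = sum(np.sin(2 * np.pi * (k / N) * n) for k in bins)

y = np.zeros_like(x)
for i in range(1, N):
    y[i] = a * y[i - 1] + (1 - a) * x[i]

# Empirical transfer function at the driven frequencies: the complex ratio of
# the output spectrum to the input spectrum.
X, Y = np.fft.rfft(x), np.fft.rfft(y)
for k in bins:
    f = k / N
    H_emp = Y[k] / X[k]
    H_theory = (1 - a) / (1 - a * np.exp(-2j * np.pi * f))
    print(f"f = {f:.3f} cyc/sample   |H| empirical {abs(H_emp):.3f}   theoretical {abs(H_theory):.3f}")
```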

The Data

We used the most prominent public datasets, on all time spans available. The datasets are noisy and sometimes contradictory, so we did not pick a “best” dataset or combine them into a composite dataset, but instead found a single spectrum that best collectively fitted all their individual spectra.

The TSI datasets used are Lean’s reconstruction from sunspots from 1610 to 2008 with the Wang, Lean, and Sheeley background correction, the PMOD satellite observations from late 1978, the Steinhilber reconstructions from Be10 in ice cores going back 9,300 years, Delaygue and Bard’s reconstruction from C14 and Be10 from 695 AD, the f10.7 solar radio flux from 1947, and the SIDC/SILSO (V1) sunspot counts from 1749. ACRIM satellite data from 1978 was omitted because it disagrees with the other data before 1992, which then leaves only two sunspot cycles — too short to establish its spectrum.

The temperature datasets used are the satellite records from late 1978 (UAH and RSS), the surface thermometer records from 1850 or 1880 (HadCrut4, GISTEMP, and NCDC/NOAA), the two comprehensive proxy time series of Christiansen and Ljungqvist 2012 going back to 1500 with 91 proxies and 1 AD with 32 proxies, the Dome C ice cores going back 9,300 years (to match the period for which there is TSI data), and Moberg’s 18-proxy series from 1 AD.

Low-Noise Fourier Analysis

We are interested in two functions of time: the TSI anomaly ΔS, and the surface warming ΔTS. If those functions were known perfectly for all time then we could calculate their Fourier transforms and learn the amplitude and phase of all of their constituent sinusoids. Instead our data samples these functions, imperfectly, to form a number of overlapping time series of limited extents, from which we estimate the main constituent sinusoids in the functions.

The potential signals we are looking for are small, at about the noise level in the data. Consider the direct heating effect of the TSI peaks that occur at the maximum of each sunspot cycle. From the diagram of the sum-of-warmings model (Fig. 1 of post 13), the direct surface warming due to a change in TSI of ΔS is

ΔTS,direct = (1 − α) λSB M ΔS / [4 (1 − λSB M fα)],      (1)

where α is the albedo (~0.30), λSB is the Stefan-Boltzmann sensitivity (0.267 °C W−1 m2), M is the ARTS multiplier (2.0 [1.5, 2.7]), and fα is the albedo feedback to surface warming (0.4±0.5 W m−2 °C−1).

TSI typically varies from the trough to the peak of a sunspot cycle by ~0.8 W m−2 out of 1361 W m−2, thus causing ~0.1 °C of surface warming, about the same as the 0.1 °C typical error margin in modern temperature records. However processing techniques like Fourier analysis that correlate an expected signal against the data can easily see beneath the noise floor — for example, the global positioning system signal is typically one four hundredth of the noise floor of the Earth (−26 dB, power).
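To illustrate that point, here is a minimal sketch in Python with made-up numbers (not the actual datasets or processing): a 0.05 °C sinusoid with an 11-year period, buried in noise several times larger, is recovered simply by correlating the data against the expected sinusoid, which is essentially what a Fourier transform does at that frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(0.0, 160.0, 1.0 / 12)              # 160 years of monthly samples
period = 11.0                                    # years
signal = 0.05 * np.sin(2 * np.pi * t / period)   # ~0.1 degC trough to peak
noise = 0.2 * rng.standard_normal(t.size)        # noise several times the signal
data = signal + noise

# Correlate against the expected sinusoid: the signal adds up coherently while
# the noise averages away, so the amplitude is recovered from beneath the noise.
c = 2 * np.mean(data * np.cos(2 * np.pi * t / period))
s = 2 * np.mean(data * np.sin(2 * np.pi * t / period))
print(f"recovered amplitude: {np.hypot(c, s):.3f} degC (true value 0.050)")
```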

There are many ways of applying Fourier analysis to estimate the constituent sinusoids. We took care to use methods that minimized the introduction of noise. In particular, we did not arbitrarily change the data in the time series by adding data points whose values are zero in order to “pad” the data series to a convenient length, or use “windowing” — both often done automatically by software packages that apply a Fast Fourier transform (FFT). We originally discovered the notch (below) using the standard Discrete Fourier transform (DFT), by matching a TSI time series to a temperature time series covering the same period with the same number of regularly-spaced data points. But while it is possible to detect the notch using the DFT, we wanted to be surer; so we developed a superior low-noise version of the DFT, called the Optimal Fourier transform (OFT). The OFT has far greater sensitivity and frequency resolution than the DFT, mainly because it is not confined to the preset frequencies of the DFT — but it takes much longer to compute. All the results here use the OFT.
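The OFT itself is documented separately; the following toy sketch in Python only illustrates the underlying idea, namely fitting sinusoids at freely chosen frequencies by least squares with no padding or windowing of the data, and is not the actual OFT algorithm. The signal, noise level and frequency grid are invented for the example.

```python
import numpy as np

def sinusoid_amplitude(t, x, f):
    """Least-squares amplitude of a sinusoid of frequency f (cycles per unit of t)
    fitted to the samples x; works for any f, not just the preset DFT bin frequencies."""
    A = np.column_stack([np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)])
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.hypot(*coeffs)

rng = np.random.default_rng(1)
t = np.arange(0.0, 130.0)                    # 130 yearly samples
true_f = 1 / 10.7                            # a period of 10.7 years
x = 0.1 * np.sin(2 * np.pi * true_f * t + 0.3) + 0.1 * rng.standard_normal(t.size)

# A DFT of 130 yearly points only has bins at periods of 130/k years; near the
# signal those are 130/12 = 10.8 and 130/13 = 10.0 years, neither of which is 10.7.
# A fine frequency scan is not confined to those preset bins.
fine_f = np.arange(1 / 20, 1 / 5, 1e-4)
amps = np.array([sinusoid_amplitude(t, x, f) for f in fine_f])
print(f"estimated period: {1 / fine_f[np.argmax(amps)]:.2f} years")   # close to 10.7
```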

To be properly confident in a Fourier analysis, it must not vary significantly if minor changes are made to the processing methods or to the data — for instance, removing the initial or final 5 or 10% of a data series. The results here are all robust with respect to minor changes in processing technique or data.

The Empirical Transfer Function

The various datasets each contributed points to a combined amplitude spectrum of either TSI or temperature. The combined spectra were each smoothed, then their ratio found. The result is the amplitude of the empirical transfer function, the black line in Fig. 2. Repeating the procedure, but with the data restricted to pre-1910, to pre- or post-1945, to pre-1970, or to instrumental data (no proxies), gives the colored lines in Fig. 2. The TSI data is at 1 AU and so carries no seasonal signal, whereas the Earth’s eccentric orbit means the TSI actually incident on the Earth does, so we ignore frequencies greater than one cycle per year.
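Schematically, that final step looks like the following minimal sketch in Python, where synthetic spectra stand in for the real combined TSI and temperature spectra; the actual collective fit across datasets and the actual smoothing are more involved.

```python
import numpy as np

def smooth(a, width=5):
    """Simple running-mean smoother (a stand-in for the smoothing actually used)."""
    return np.convolve(a, np.ones(width) / width, mode="same")

# Frequency axis in cycles per year, and two synthetic amplitude spectra:
# a roughly flat temperature spectrum, and a TSI spectrum with a pronounced
# peak around a period of 11 years (f ~ 0.09 cycles per year).
f = np.linspace(0.005, 1.0, 400)
temp_amp = np.full_like(f, 0.1)
tsi_amp = 0.1 + 0.6 * np.exp(-(((f - 1 / 11) / 0.01) ** 2))

# Empirical transfer function amplitude: output spectrum over input spectrum.
transfer_amp = smooth(temp_amp) / smooth(tsi_amp)

k11 = np.argmin(np.abs(f - 1 / 11))   # nearest point to an 11-year period
k2 = np.argmin(np.abs(f - 1 / 2))     # nearest point to a 2-year period
print(f"response at an 11-year period: {transfer_amp[k11]:.2f}")   # low: the notch
print(f"response at a 2-year period:   {transfer_amp[k2]:.2f}")    # ~1: no notch
```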




Figure 2: The amplitude of the empirical transfer function, when the data is restricted as marked. The black line is for all data (unrestricted); the gray area is a zone around the black line. The second horizontal scale is the period of the sinusoids. Both scales are logarithmic.

 

The data was only sufficient to estimate the amplitudes of the constituent sinusoids in the TSI or temperature (or equivalently, their power spectra — the power of a sinusoid is the square of its amplitude). Estimates of the phases were not robust. Although the amplitude spectra of physical phenomena like radiation tend to be relatively smooth functions of frequency, the phase spectra are often highly discontinuous, so smoothing and averaging are not appropriate.

The Notch

The first robust feature of the empirical transfer function is the notch, a relatively narrow region of lower response centered on a period of ~11 years. Sinusoids in ΔS with periods around 11 years are severely attenuated when transferred by the system to ΔTS, relative to sinusoids at other frequencies. Peaks in TSI and sunspots, which occur ~11 years apart on average, do not result in corresponding peaks in the surface temperature.

In electronic audio equipment, a filter that removes the hum due to mains power is called a notch filter, because it removes the sinusoids in a narrow range of frequencies around the mains frequency. It appears that something is removing the 11-year “solar hum” from ΔTS.
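For readers unfamiliar with notch filters, here is a minimal sketch in Python of a textbook second-order notch filter centered on 50 Hz mains hum; it is generic electronics, not anything specific to the climate analysis.

```python
import numpy as np

def notch_amplitude(f_hz, f0=50.0, Q=10.0):
    """Amplitude response |H(f)| of a standard second-order notch filter,
    H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)s + w0^2), evaluated at s = j*2*pi*f."""
    w, w0 = 2 * np.pi * f_hz, 2 * np.pi * f0
    s = 1j * w
    return np.abs((s**2 + w0**2) / (s**2 + (w0 / Q) * s + w0**2))

# Sinusoids near 50 Hz are strongly attenuated; everything else passes almost unchanged.
for f_hz in (10, 30, 45, 50, 55, 70, 200):
    print(f"{f_hz:4d} Hz   |H| = {notch_amplitude(f_hz):.3f}")
```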

The notch is a curious fact. Solar radiation warms the Earth, providing all the heat as incoming radiation — visible light, UV, infrared, and so on. So we’d expect the peaks in TSI from the Sun every ~11 years to produce small but detectable corresponding peaks in surface temperature; yet they don’t. (This observation gave rise to the notch-delay theory.)

The notch occurs because the amplitude spectrum of the surface temperature is flat — also found by Eschenbach, independently via different means — while the TSI amplitude spectrum has, of course, a pronounced peak around 11 years.

If low-altitude cloud cover troughed at every sunspot maximum (as suggested by Fig. 2-12 of Lockwood et al.’s Earthshine Mission case from 2004), perhaps in response to troughs in galactic cosmic rays at sunspot maxima, then presumably albedo would also trough, and surface temperature peak, during sunspot maxima. This would make the empirical fact of the notch even more remarkable.

The notch is not intrinsic to the main part of the notch-delay hypothesis. The force ND hypothesis coming up allows for force D, the most consequential part of the hypothesis, which depends on neither the notch’s existence (datasets are subject to revision) nor its meaningfulness (under the notch-delay hypothesis below, the formal system is not quite invariant, because the delay is one sunspot cycle and the duration of a sunspot cycle varies with time).

Implications of the Notch for Climate Influences

The TSI peak every sunspot cycle fails to cause a corresponding peak in the Earth’s surface temperature record — therefore a countervailing cooling influence is present at precisely the times when TSI peaks.

The duration of the sunspot cycle varies considerably, from 9 to 14 years, yet notching implies that the countervailing influence is always synchronized to the TSI peaks — therefore the timing of the countervailing influence is controlled by the Sun.

The countervailing influence completely counters the warming influence of the TSI peaks — therefore the countervailing influence is at least as strong as the direct heating effect of the changes in TSI.

The Indirect Solar Sensitivity (ISS)

The second robust feature of the empirical transfer function is that it is remarkably flat for periods over 20 years (frequencies less than a twentieth of a cycle per year). For slower TSI fluctuations the sensitivity of surface warming to increases in TSI, herein called the indirect solar sensitivity (ISS), appears to be relatively constant. For periods over 200 years, the ISS is ~1.7±0.2 °C W−1 m2 (the value of the black line in Fig. 2 as the period increases). This is well above the direct solar sensitivity of 0.12 [0.07, 0.36] °C W−1 m2 in Eq. (1), suggesting that the long-term influence of TSI on ΔTS is ~14 [4.2, 27] times larger than the direct heating effect of TSI.
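As a numerical check of that comparison, here is a minimal sketch in Python using the central parameter values listed under Eq. (1); the bracketed ranges propagate through in the same way.

```python
# Central values from Eq. (1), per W/m2 of TSI at 1 AU.
albedo = 0.30       # alpha
lambda_SB = 0.267   # Stefan-Boltzmann sensitivity, degC per W/m2
M = 2.0             # ARTS multiplier
f_alpha = 0.4       # albedo feedback to surface warming, W/m2 per degC

# Direct solar sensitivity (Eq. (1)): ~0.12 degC per W/m2 of TSI at 1 AU.
direct = (1 - albedo) / 4 * lambda_SB * M / (1 - lambda_SB * M * f_alpha)
print(f"direct solar sensitivity: {direct:.2f} degC per W/m2 of TSI")

# Direct warming for a typical trough-to-peak TSI change of ~0.8 W/m2: ~0.1 degC.
print(f"direct warming per sunspot cycle: {0.8 * direct:.2f} degC")

# Indirect solar sensitivity from the empirical transfer function (periods > 200 years).
ISS = 1.7
print(f"ISS / direct ratio: {ISS / direct:.0f}")   # ~14
```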

Hence changes in TSI signal something that has an effect on the surface temperature that is much larger than the direct heating effect of changes in TSI — which is compatible with the finding in post 10 about the relatively large influence of EDA.


143 comments to New Science 21: The mysterious Notch in the Sun-Earth relationship — the dog that didn’t bark

  • #
    Alexander Carpenter

    Do the glacial-interglacial transitions come about because of a change in the synchrony of Force D and TSI?

    I.e., is there a third force that perturbs the sun, so it enters a different state (or, as a metaphor, “rings” like a bell), for the duration of a glacial (or interglacial) period, and then reverts to some prior state until perturbed again?

    Is there a fourth force that changed the glaciation-cycle frequency from ~41,000 years to ~100,000 years?

    Is there an analogous force that initiates ice ages (with their enclosed glaciation cycles)?

    How co-resonant are all these “forces”? Are they all periodic? What perturbs them?


    • #

      Alexander, I am confining my attention here to changes within the interglacial. The climate system might not be so linear across the transitions into and out of ice ages. No idea about ice ages.


      • #
        ianl8888

        Yes, as you’ve said previously

        In any case, when we are experiencing the next glacial epoch, ABC/Fairfax moues, hot whoppers, notches of any and all sorts, none will be of the slightest concern to us

        So your decision to stay within the confines of the interglacials is sound


      • #
        Alexander Carpenter

        The use of “strong inference” to guide us to look for a Force D invites the further exploration for other forces (or events or “black swans” or emergent properties, or cyclic influences, or whatever…) that must seem from our present perspective to be “pie in the sky” phenomena, but which represent the next level of challenge after we integrate this advance (for which thank you).

        My conjectures are rather premature, but I was so excited, and felt my vision of a now-much-larger and more-coherent overall process so enhanced, that I didn’t restrain myself.

        Certainly the present task is to make sense of what’s happening right here, right now, and ground that as best we can in the dynamics of the solar and planetary systems, and yet this approach opens the door to a new style of investigation, one that at the very least recognizes the potential for entirely different players and relationships. This is especially intriguing if some of those are ones that we know nothing about — “unknown unknowns” to discover.

        We stand on the shoulders of giants, and even if the giants are of necessity taking baby-steps for now, our vision can reach farther than ever before.

        I refer us to Carlo Rovelli, Science Is Not About Certainty: A Philosophy Of Physics, at http://edge.org/conversation/a-philosophy-of-physics. In any short term this may be off-topic, but ultimately this is the topic.


    • #
      el gordo

      ‘Is there a fourth force that changed the glaciation-cycle frequency from ~41,000 years to ~100,000 years?’

      Around 0.8 million years ago there was a transition to a different state and I assumed it had something to do with activities on earth rather than the sun.

      Doing a little reading, the closure of the Central American Seaway would have been a prime candidate, but they have now pushed that date back to 15 million years BP.

      Still, your question is worth puzzling over.


  • #
    Franktoo (Frank)

    David wrote: “For periods over 200 years, the ISS is ~1.7±0.2 °C W−1 m2 (the value of the black line in Fig. 2 as the period increases).”

    Could this be amplification of a 1 W/m2 forcing by feedbacks to produce the equivalent of a 1.7 W/m2 forcing?


    • #

      Frank: Different forcings can produce very different surface warmings, depending on the nature of the climate driver causing the forcing, so it doesn’t make sense to speak of a forcing as producing a particular warming. Different drivers have different driver-specific feedbacks.

      The old notion of a forcing being roughly equivalent to an amount of extra absorbed sunlight (ASR) and causing some feedbacks (which were all feedbacks to surface warming) belongs to the old failed architecture. Instead, different drivers may cause different warmings per unit of forcing.

      From the empirical transfer function, which covers the last 9,000 years via ice core data, it would appear there is a fairly constant low-frequency relationship between changes in TSI and surface warming: the ISS of ~1.7±0.2 °C W−1 m2.

      Note that the TSI is measured in W/m2, but this is at 1 AU where the background value is 1,361 W/m2. One could convert this to a forcing at the TOA by dividing by 4 and multiplying by 70%, but this would be pointless because the effect is NOT due to the direct heating effect of the change in TSI. Instead it is 14 times larger than that.

      This cannot be due to a quick-acting TSI-specific feedback that amplifies it by 14, or the TSI peaks would be similarly amplified — see the notch: the TSI peaks don’t even show up in the surface warming records.

      It could conceivably be due to a very slow acting TSI-specific feedback, that operates on times scales of at least 20 years and more like 200 years to fully kick in. Since the forcing concept is conventionally used to apply to forcings from any source, including CO2, I think the whole forcing idea seems completely inappropriate and anachronistic at this stage. Trying to fit this into conventional forcings is beyond feasible. No point flogging that horse further.

      Alternatively, the empirical fact of the ISS could be due to some influence coming out of the Sun. This appears more like a new driver, not some weird feedback that doesn’t fit into the conventional paradigm in any case.


      • #
        Konrad

        David,
        what is needed is a plausible, testable mechanism for the “notch” to complete the picture. In this you don’t have to look for a new driver, but an old one.

        The logical place to look is in major energy storage and release mechanisms on the planet, the oceans, land/sea ice and biology. But these are all “surface properties” and you are surface-phobic. In your modelling you always tried to eliminate the surface and its response to solar radiation. This is the same error as the CO2=doom crowd are making.

        The main energy storage and release mechanism on the planet is the oceans, so that is the first place to look. This requires understanding how energy enters, is transported within and leaves the oceans. This is where the critical error in the foundation of the old models lies. They try to claim that atmospheric LWIR is slowing the cooling rate of the oceans and raising their temperature by 33K. However empirical experiment shows incident LWIR cannot heat nor slow the cooling rate of water free to evaporatively cool. Something else is raising ocean temps by 33K over theoretical blackbody temperature. The simple answer is that the oceans are nowhere close to a theoretical blackbody, they are an extreme SW selective surface.

        For a solar illuminated SW translucent ocean that is LWIR opaque and has a slow speed of internal conduction/convection, depth of radiation absorption is critical to determining energy accumulation. This cannot be modelled with simple linear equations. But this is not hand waving or fancy maths, this is solid physics easily demonstrated by empirical experiment. The effect is major, and cannot be dismissed with assumptions or approximations. After all the selective surface effect is what is driving our oceans 33K above theoretical blackbody temperature.

        With regard to the “notch”, the oceans being an extreme selective surface is the most plausible mechanism. This is because for the oceans all frequencies of solar radiation do not have the same heating effect. 1 W/m2 from UV will stay longer in the ocean than 1 W/m2 of SW. This is why using TSI as a metric in modelling solar forcing of the oceans is so very wrong. You need to know about solar spectral variance. While TSI varies little, UV varies greatly over and between solar cycles. It is UV that penetrates deepest into the oceans, deep below the diurnal overturning layer. The variance in energy accumulation at this depth will take years to be reflected in surface temperatures. Hello “notch”!


        • #

          Konrad, the notching implies a phenomenon synchronized to the irregular sunspot cycle. How do the oceans do that?

          UV is a prime suspect, as are some ocean properties. Post soon.


          • #
            Greg Goodman

            Sinusoids in ΔS with periods around 11 years are severely attenuated when transferred by the system to ΔTS, relative to sinusoids at other frequencies. Peaks in TSI and sunspots, which occur ~11 years apart on average, do not result in corresponding peaks in the surface temperature.

            The fact that solar peaks do not appear in the ΔTS record is prima facie evidence that THERE IS NO DETECTABLE SOLAR EFFECT.

            Your “notch” response, defined as it is as the ratio of input to output, means that there is a strong 11y component to the input which is not present in the output.

            Your notch “response” is only based upon the ASSUMPTION that it is a response: that solar is the dominant “input”.

            The proper conclusion to draw from this is that the underlying assumption was wrong, not that there is some absurd “notch response”.


            • #
              Greg Goodman

              How about this as a test. Take the rate of change of the Dow-Jones index and do the same thing.

              I bet you find a “notch response” in the D-J too 😉


            • #

              Well obviously Greg. That assumption, and the reasons for it, are laid out in some detail in the previous post, post 20. Don’t suppose you’d mind reading first, so I don’t have to keep answering the bleeding obvious?


          • #
            Greg Goodman

            Look David, to show a notch filter you need to show that SOMETHING gets let through, ie no or less attenuation.

            All you have here is the SSN, which really only has one peak in this kind of crude resolution. So wherever dTS has a peak you have nothing in SSN to divide by it.

            This is NOT the system response since you are not testing with a broad band signal. It is just the upside down spectrum of SSN.

            I commend your efforts and perseverance but you need to take stock of your errors and get this back on track.

            Best regards, Greg.


            • #

              Greg: When I wish to be patronized, you’ll be the first person I ask, promise. Too bad you didn’t read the details. The input signal is sufficiently broadband, and the sides of the notch are clearly visible in the transfer function.


      • #
        Franktoo (Frank)

        David: I didn’t interpret your transfer function correctly in the above comment. The proper value for long-term warming in response to TSI forcing is 1.7 K/(W/m2). If I understand correctly, you are using W/m2 at 1 AU, and I need to divide by 4 to get 0.425 K/(W/m2) where incoming flux is a global solar forcing.

        You and I disagree about whether 1 W/m2 of solar and CO2 forcing cause the same amount of surface warming, and I don’t want to debate that point at the moment. To avoid this problem, I’m going to convert the ECS values for Otto (2013) and Lewis and Curry (2014) into K/(W/m2) by dividing by 3.7 W/m2/doubling. That gives 0.54 and 0.43 K/(W/m2) for their climate sensitivities (2.0 and 1.6 K/doubling). So your value of 1.7 K/(W/m2) is compatible with other estimates of climate sensitivity – if you wish to believe that all forcings produce roughly the same amount of surface warming. (:))

        You and researchers estimating climate sensitivity rely on the same instrumental temperature record, but explain it with different forcing: solar forcing in your case and the sum of all anthropogenic and natural forcing in the other. To get the same answer, your solar forcing for the last century must be equal to that used by LC14 for the same period, about 2.0 W/m2. Obviously there are a lot of things I don’t understand about the data that went into your transfer function. Have you described it elsewhere?


        • #

          Frank, you got your arithmetic backward and consequently have missed the point.

          The central ECS in AR5 is 2.5 K, which for a forcing of 3.7 W/m2 indicates a sensitivity to (TOA) forcing of 0.68 K/(W/m2).

          The conventional approach is to apply this sensitivity, which is a sensitivity to a change in absorbed sunlight employing the Planck sensitivity (very close to SB) and surface warming, to the forcing produced by CO2. Yes, we disagree here — I say AGW arose because the strong solar sensitivity of 0.68 K/(W/m2) was applied to the CO2 forcing, instead of the much weaker CO2 sensitivity, weaker because it includes CO2-specific feedbacks such as the rerouting feedback.

          Fortunately here we are concerned with solar sensitivity, so using the conventional sensitivity is fine. Note in Eq. (1) above we derived the solar sensitivity to changes in TSI at 1 AU as 0.12 K/(W/m2). Taking (1 – albedo) and the 1-AU-to-TOA conversion of division by 4 into account (explicit in Eq. (1)), that sensitivity is 0.12 * 4 / 70% or 0.69 K / (W/m2). This agrees with the conventional solar sensitivity (no surprise, our figures came right out of AR5, as per the earlier posts in the series).

          Turning to the ISS of 1.7 K/(W/m2) derived from the empirical transfer function, as you noted it is the warming per unit of TSI at 1 AU. A change of 1 W/m2 of TSI at 1 AU corresponds to a change of (1 – albedo)/4 or 0.17 W/m2 in absorbed sunlight (or forcing). It produces 1.7 K of warming, so a full 1 W/m2 of forcing produces 1.7 / 0.17 or 10 K of warming, the long-term solar sensitivity.

          Yes, you got that bit backward. (I did it too about two years ago, and Joanne and I went for an afternoon thinking we had found something interesting — then realized I had it backwards, back to the drawing board. Remember it well.)

          So the ISS, converted to traditional forcing, is 10 K/(W/m2).

          This is what the post is about. This long-term solar sensitivity is 14 times higher than the immediate sensitivity to a change in absorbed sunlight, namely 0.68 K/(W/m2) for forcing. Or, as I was saying in TSI at 1 AU in the post, 1.7 K/(W/m2) in the empirical transfer function versus 0.12 K/(W/m2) in Eq. (1).
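          A quick numerical check of that arithmetic (a minimal sketch in Python, using only the central values quoted above):

          ```python
          albedo = 0.30
          ECS, F_2xCO2 = 2.5, 3.7                 # AR5 central ECS (K) and CO2 doubling forcing (W/m2)
          print(ECS / F_2xCO2)                    # ~0.68 K per W/m2 of forcing (conventional sensitivity)

          direct_1AU = 0.12                       # K per W/m2 of TSI at 1 AU, from Eq. (1)
          print(direct_1AU * 4 / (1 - albedo))    # ~0.69 K per W/m2 of forcing, agrees with the above

          ISS_1AU = 1.7                           # K per W/m2 of TSI at 1 AU, from the transfer function
          forcing_per_TSI = (1 - albedo) / 4      # ~0.17 W/m2 of forcing per W/m2 of TSI at 1 AU
          print(ISS_1AU / forcing_per_TSI)        # ~10 K per W/m2 of forcing, long-term solar sensitivity
          print(ISS_1AU / direct_1AU)             # ~14, the ratio quoted in the post
          ```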

          Something else is going on. This strongly implies an indirect solar force that is an order of magnitude stronger than the direct heating effect of changes in TSI. This does not fit into the conventional feedback-forcing paradigm using direct solar heating.


      • #
        NeedleFactory

        David:

        In the main post you say “The TSI peak every sunspot cycle fails to cause a corresponding peak in the Earth’s surface temperature record” and in your response (#2.1) to Frank you say “the TSI peaks don’t even show up in the surface warming records”.

        It seems to me these statements contradict Chart 1 in “Solar Cycle Warming at the Earth’s Surface…” by Tung & Camp, wherein surface temperature seems to rise and fall with TSI (with a lag of about a year). (http://www.calpoly.edu/~camp/Publications/Tung_Camp_JGR_2008.pdf.) (I found the chart by looking at the Eschenbach link you provided in the main post, which referenced a 2010 paper by Roy Spencer, which referenced Tung & Camp.)

        I must be missing something elementary; I would appreciate it if you could clarify for me what seems to be a discrepancy between your statements and the Tung&Camp chart. I follow your posts eagerly.


        • #

          NeedleFactory: Some researchers claim to have found the TSI peaks in the temperature record. Everyone expects to find it, so this wasn’t a remarkable thing.

          When I examined the transfer function in the frequency domain I had no idea there might be a notch. Twice I threw out the results and ceased working on the problem because there were points showing up from the notch that seemed really wrong — third go around I realized it was a notch, right on 11 years. And that the TSI peaks weren’t in the surface temperature record.

          Eschenbach at WUWT found the same, independently, much to his surprise.

          I think the people who found TSI peaking in the temperature record were using odd datasets or they found an artifact of their processing method.

          Tung and Camp used the NCEP dataset — which is not one of the main five global temperature datasets. Their Fig. 1 is “globally and annually averaged and detrended, but otherwise unprocessed”. You can easily see the TSI peaks in their temperature record, just by eye — so then the rest of their paper follows, and the technique they used to process it from there does not really matter. The NCEP dataset is nothing like UAH, RSS, HadCrut, NOAA/NCDC or GISS — which all agree with each other but not with that NCEP dataset.

          People also report finding the TSI peaks in temperature datasets at height (not at the surface). These datasets would have far less data than the main five surface ones — maybe there is some artifact or difference there too. I don’t know.

          I just used the five big main datasets for global surface air temperature, and none of them show the TSI peaks.


          • #
            Greg Goodman

            Tung and Camp is a classic example of seeing what you want to find.

            The cycles line up quite well for the 1970 and 1980 peaks, though it is very crude data with little resolution, and the match is very approximate.

            By the time they get to 1990 it starts to slip, 2000 is getting close to 1/4 cycle or 1/2 cycle out of phase.

            What they are probably seeing is 9y lunar cycles that happen to be in phase with solar in 1970.


  • #
    CC Reader

    Jo, The mechanism for this [much] originate…. must.


  • #
    Bob Weber

    Dr. Evans, terrific persistence on your part that can only be admired, from one EE to another.

    Your notch-delay method gives the same basic result as my solar flux accumulation model, which is based on the simple concept of SSTs warming or cooling at a particular level of solar activity. Both methods indicate imminent cooling based on low solar activity to start within two years or less, to last for some time depending on the next solar cycles’ relative power.

    The solar flux accumulation model reveals the Earth’s supersensitivity to solar irradiance.

    The Sun’s magnetic field creates sunspot activity that generates F10.7cm solar flux, among other spectral fluxes, that comprise TSI. I found that OHC & SST respond to TSI in a layered, time-dependent fashion.

    There is variability in the delay in TSI changes that are driven by SSN-flux changes, and also a delay in OHC & SST following the TSI changes, via continuous energy accumulation or dissipation, back and forth, forever, bounded by the maximum and minimum of TSI.

    And do we really know for sure how high or low TSI can go, as we only have a ~400 year record?

    Naturally I’ve been curious about your work, and am going to think about how and why your notch-delay system produces similar results to my system of using solar flux accumulation.

    The sun caused global warming.

    Sunspot activity was 65% higher for the 70 years of the modern solar maximum from 1935.5-2004.5, when the annual average SSN was 108.5, than it was during the previous 70 years from 1865.5-1934.5, when it averaged 65.8, using http://www.sidc.be/silso/DATA/SN_y_tot_V2.0.txt.

    TSI, which tracks w/sunspot number, was higher during the modern maximum period:

    https://i1.wp.com/spot.colorado.edu/%7Ekoppg/TSI/TIM_TSI_Reconstruction.png

    There are other reconstructions, and the modern maximum stands out in all of them.


    • #

      Bob, it’s hard not to notice that a slightly delayed cumulative-TSI signal seems to explain most temperature variation, isn’t it?

      One wonders when the bulk of research effort is going to abandon CO2 research, and stop trying to make models track surface temperature using mainly a CO2 driver. The conventional basic climate model, or “basic physics” as they like to call it, is a flawed model with a dopey assumption dating back to 1896 that has the world furiously barking up the wrong tree.


      • #
        Manfred

        One wonders when the bulk of research effort is going to abandon CO2 research…

        When it finally becomes unfashionable to deny The Bureau of Adjustment or simply too expensive to maintain it against the competing needs of real science and research.

        The taste for Ersatz Climate ‘science’ must inevitably be replaced by a desire for the real thing.


        • #
          King Geo

          Quoting David Evans – “One wonders when the bulk of research effort is going to abandon CO2 research, ….”

          So so true – this obsession with CO2 has to stop and the sooner the better.

          Quoting David Evans – “When it finally becomes unfashionable to deny The Bureau of Adjustment …..”

          This reminds me of that 2011 movie “The Adjustment Bureau” starring Matt Damon. It is about a bureau based in New York that endeavours to control the fate of individuals and their destiny – a bit like the IPCC – the IPCC are an “Adjustment Bureau” trying to manipulate people’s thinking on Earth’s climate & temperature based almost solely on CO2, a greenhouse gas that makes up just 400 ppm of the atmosphere – even my 7 year old grandson finds that hard to believe.


          • #
            King Geo

            Make that quoting Manfred – “When it finally becomes unfashionable to deny The Bureau of Adjustment …..”


  • #
    Frederick Colbourne

    Ann Maunder commented on the influence of the planets on sunspots.


  • #
    Bob Weber

    Exactly! The sun used to be seen as the main climate driver until the CO2-heads took over.

    I am positive this world will grow up by recognizing the truth that the sun drives the climate as soon as perhaps next year. Accumulating evidence indicates that once this El Nino is over, the La Nina that follows won’t receive a tail-end TSI boost because of lower solar activity, and temps are going to slide downward (until the next El Nino), and no one is going to be able to temperature-adjust their way out of that.

    It won’t be very many more years before the sun is widely recognized as the big climate powerhouse, in part thanks to your work, and that of others.


  • #
    Leonard Lane

    David. I have a question on your smoothing methodology. I am not sure my comments apply in your case, but please consider them and let me know if they are relevant to the discussion.
    I am always cautious in doing any smoothing to a data set before I do the time series analyses. This is because of the Slutsky-Yule effect (which was just the Slutsky effect when I was involved in learning various time series analysis methodologies in the early to mid 1970s).
    In any event, we learned by actual data analyses and from the instructor’s lectures that when a time series was smoothed by running means, it produced false periodicities. We started with random series, applied various running-mean smoothings and then Fourier analyses, and found periods in the smoothed data.
    Would any of this apply to the data smoothing you have used in your analyses?

    (http://mathworld.wolfram.com/Slutzky-YuleEffect.html )

    Thanks,
    Leonard


    • #

      Leonard: Never smooth before doing analyses, because smoothing adds noise. Anything that changes the data adds noise. The OFT goes to a lot of trouble to avoid adding noise — whereas typical applications of FFTs add a bunch of noise (by padding and windowing, then only using sinusoids of preset frequencies), which doesn’t matter much in most applications, but with climate datasets with few cycle repeats and trying to extract every bit of info, it is crucial not to add any noise.


      • #
        Greg Goodman

        Good general point (especially if your “smoother” is a running mean!).

        I invite you to do a lagged cross-correlation on your two datasets. The auto-correlation function should show your lag. If you provide your data series somewhere I could have a look.


  • #
    Steerpike

    Wonderful theory. Must be a Nobel prize in the offing. Quick submit it to a peer reviewed journal like Science, Nature, GRL etc.
    Or leave it on a blog without verification and blame its lack of acceptance on conspiracy theories. I’m sure this echo chamber will love you for it.


    • #

      “Wonderful” and most of it is not yet revealed? Wow, you are full of praise.

      As mentioned in the introductory post (near the top):

      In its complete form this work has evolved into two scientific papers, one about the modelling and mathematical errors in the conventional basic climate model and fixing them (carbon dioxide isn’t the culprit), and another for the revamped notch-delay solar theory (it’s the Sun). Both are currently undergoing peer review. These posts are useful in airing the ideas for comments, and testing the papers for errors.


    • #
      Bill Burrows

      Wow a flyby shooter. Let me guess Steerpike – when you submit anonymous unintelligent remarks to threads such as this, I bet you wear a clown mask in front of your keyboard to protect your identity.

      [FYI – I estimate that I have refereed about 150 papers submitted to national and international scientific journals over a 40 year career, and I suggest that few of all the papers published in those journals were ever exposed to the level of rigorous questioning that David’s current work is being subjected to on Jo’s blog. Green zealots like to invoke the ‘Precautionary Principle’ and if ever there was a need for this it is now, when the role of carbon dioxide as the prime driver of global warming is clearly under challenge – because its supposed modelled impact is just not backed up by real world observations. We should be rejoicing that people such as David have the intelligence and knowledge to offer a plausible alternative hypothesis deserving our calm and measured evaluation – so that its eventual publication is not just ‘another day another paper’ but truly deserving of the accolades which the World may then bestow on it and its author.]

      Now off you go back to kindy Steerpike. PS Don’t forget to remove your mask and cloak of anonymity so you don’t frighten the other kids in your class.


    • #
      AndyG55

      Blog postings such as this get FAR MORE PEER REVIEW than any journal publication.

      Having two of your mates check your spelling and grammar, as in pal-review in “Climate science™”, is NOT science!!


      • #
        Dave

        .

        All of them offer this:
        Pay for quick peer review!
        Plus 3 steak knives from Orangutan Bones
        The Environment is NOT the concern of any of these publications TurnPike mentioned!

        Best science is being ignored over paid for published articles

        In most of the publications

        Now a Billion dollar industry

        Yup! Science can be delivered to 97% of paying CAGW scientists


      • #
        wert

        I’m not a peer, but I could do a beer review. I could do with a beer.


    • #
      Peter C

      Identity Theft!

      Not good Steerpike. Bad Steerpike!


      • #
        Peter C

        Makes no sense now that the Avatars have been corrected. At the time I made my comment Steerpike was displaying Bernd Felche’s image. It may have been a problem with the website, in which case my apology to Steerpike.


    • #
      gai

      Cries of ‘peer-reviewed’ are the cries of the mediocre defending their status in academia. As Dick Pothier wrote in a 1982 article:

      If you want to publish an article in some scientific or medical journal, here is some unusual advice from Scott Armstrong, a professor of marketing at the University of Pennsylvania’s Wharton School: Choose an unimportant topic. Agree with existing beliefs. Use convoluted methods. Withhold some of your data. And write the whole thing in stilted, obtuse prose

      “Although these rules clearly run counter to the goal of contributing to scientific knowledge — the professed goal of academic journals — they do increase a paper’s chance of being published,” Armstrong said.

      “Some readers may feel that the suggestions here … are extreme,” he wrote in his article. “However, they provide a description of many papers published…

      In one study, Armstrong said, academics reading articles in scientific journals rated the authors’ competence higher when the writing was less intelligible than when it was clear.

      In another study, Armstrong said, research papers were mailed to a sampling of dozens of researchers. Half the scientists received a paper that described an experiment confirming existing beliefs; the other half received a paper describing an identical experiment but with a different conclusion that challenged the consensus.

      Although the methods used in the two sets of papers were identical, the scientists surveyed generally approved of the procedures used in the papers that confirmed existing beliefs and generally disapproved of the same methods when they were used to contradict what most scientists believed, Armstrong said.

      “Papers with surprising results are especially important for adding significantly to what is known. Presumably, the editors of journals want to publish important papers,” Armstrong said. “On the other hand, they are concerned that the journal might look foolish — and so they reject many of the important papers.”…

      From the Philadelphia Inquirer, March 23, 1982.

      Decades later a science publisher concurs. “Peer review is sick and collapsing under its own weight,” so says science publisher Vitek Tracz, who has made a fortune from journals

      “Nobody reads journals,” says science publisher Vitek Tracz, who has made a fortune from journals. “People read papers.” Tracz sees a grim future for what has been the mainstay of scientific communication, the peer-reviewed print journal. Within the next 10 years, he says, it will cease to exist.

      This prophecy ought to carry weight. Over the past 3 decades, Tracz, chairman of a conglomerate called the Science Navigation Group, has helped transform the world of science publishing. His most notable creation to date may be BioMed Central, the first for-profit open-access publisher. The pioneering site, founded in 2000 in London, has grown into an empire with more than 250 biology and medicine journals in its stable.

      BioMed Central earned Tracz a reputation as a visionary. “He’s one of the most important publishers of the last decade,” says Michael Eisen, a biologist at the University of California, Berkeley, and co-founder of the Public Library of Science (PLOS), a nonprofit open-access publisher that launched its first journal in 2003.

      Tracz “always has many irons on the fire; he likes to experiment. That’s unlike the rest of science publishers who are quite conservative and work on standardizing, consolidating, and reducing costs,” says Matthew Cockerill, managing director of BioMed Central, which Tracz sold in 2008. By contrast, he says, “Vitek doesn’t believe in business plans, but in ideas.”…

      And how about the actual truth about peer-review?
      Three myths about scientific peer review

      Myth number 1: Scientists have always used peer review
      … in most scientific journals, peer review wasn’t routine until the middle of the twentieth century, a fact documented in historical papers by Burnham, Kronick, and Spier…..

      …How many of Einstein’s 300 plus papers were peer reviewed? According to the physicist and historian of science Daniel Kennefick, it may well be that only a single paper of Einstein’s was ever subject to peer review. …

      Myth number 2: peer review is reliable
      …There’s not much systematic evidence for that presumption. In 2002 Jefferson et al (ref) surveyed published studies of biomedical peer review. After an extensive search, they found just 19 studies which made some attempt to eliminate obvious confounding factors. Of those, just two addressed the impact of peer review on quality, and just one addressed the impact of peer review on validity; most of the rest of the studies were concerned with questions like the effect of double-blind reviewing. Furthermore, for the three studies that addressed quality and validity, Jefferson et al concluded that there were other problems with the studies which meant the results were of limited general interest; as they put it, “Editorial peer review, although widely used, is largely untested and its effects are uncertain”….

      As regards validity and quality, you don’t have to look far to find striking examples suggesting that peer review is at best partially reliable as a check of validity and a filter of quality.

      Consider the story of the German physicist Jan Hendrik Schoen. In 2000 and 2001 Schoen made an amazing series of breakthroughs… His work was investigated, and much of it found to be fraudulent. Nature retracted seven papers by Schoen; Science retracted eight papers; and the Physical Review retracted six. What’s truly breathtaking about this case is the scale of it: it’s not that a few referees failed to pick up on the fraud, but rather that the refereeing system at several of the top journals systematically failed to detect the fraud. Furthermore, what ultimately brought Schoen down was not the anonymous peer review system used by journals, but rather investigation by his broader community of peers.

      What about the suppression of innovation? Every scientist knows of major discoveries that ran into trouble with peer review. David Horrobin has a remarkable paper (ref) where he documents some of the discoveries almost suppressed by peer review; as he points out, he can’t list the discoveries that were in fact suppressed by peer review, because we don’t know what those were. His list makes horrifying reading. Here’s just a few instances that I find striking…

      Myth: Peer review is the way we determine what’s right and wrong in science
      ….peer review is given a falsely exaggerated role, and this is reflected in the understanding many people in the general public have of how science works. Many times I’ve had non-scientists mention to me that a paper has been “peer-reviewed!”, as though that somehow establishes that it is correct, or high quality. I’ve encountered this, for example, in some very good journalists, and it’s a concern, for peer review is only a small part of a much more complex and much more reliable system by which we determine what scientific discoveries are worth taking further, and what should be discarded.

      And just to top it off from the Crime and Consequences Blog where the discussion is about crime and the criminal law: Peer Review, Corruption, Junk Science, and Advocacy Science

      Seems the reputation of ‘peer-reviewed science’ is really hitting the skids.


  • #
    KinkyKeith

    Some comments above ask about a fourth mechanism at work to produce global warming.

    Didn’t Milankovitch put forward a theory concerning the combination of Earth’s orbit around the Sun varying from circular to elliptical in combination with axial tilt?

    http://www.sciencecourseware.org/eec/globalwarming/tutorials/milankovitch/

    David’s model only covers the last 8,000 years or so, the inter-glacial, and the Milankovitch model covers much longer features of Earth – Sun orbital mechanics.

    Milankovitch’s work doesn’t seem to qualify as a 4th factor because of the longer periodicity.

    KK


  • #
    Ceetee

    So much of the debate seems to revolve around trying to pin a basic rhythm to what we understand. What I struggle with is the immensity of what we don’t understand. Struggle is probably the wrong word, perhaps it’s just a difficult riddle. What I absolutely understand is that the world has become a fat, lazy and philosophically decadent place if the so called leaders of the world at large will go to a city like Paris to debate how to deal with a phenomenon that has existed since men hunted to survive. No wonder evil does so well now. There are very few good people who can actually THINK.


    • #
      KinkyKeith

      Ceetee

      How did you manage to fit so much into just 3 lines?

      In other posts there have been people who have listed various influences and effects that would be a good starting point for quantifying “the immensity of what we don’t understand”.

      Mentioned were things like galactic influences (charged clouds) or zones of difference in the galactic framework, then orbital mechanics within our solar system and meteors and meteorites, but the biggest may be the least visible. In very recent posts one or two people mentioned the enormous energy involved in holding the galaxy and solar system in place: gravity and magnetic forces of immense energy are still to be assessed. No doubt there are other inputs and drains on the Earth’s energy budget that slip past without being noticed.

      I agree with your thinking on politics; this Global Warming concept is so easily demolished by scientific analysis that only a very corrupt political climate could keep it afloat.

      We need change and accountability in the systems of world and national governance.

      KK


  • #
    el gordo

    ‘…orbital mechanics within our solar system…’

    Saturn and Jupiter align every 20 years and every 60 years they align in the same solar quadrant plus 9 degrees.

    Straight away I jump to the conclusion that this is the mark of a well-recognised climate cycle. It’s bordering on astrology but I like it.


  • #
    graphicconception

    My only concern is that, even if surface temperature did not depend on TSI, wouldn’t you still get a notch if you divided the spectra?


    • #

      But we know surface temperature DOES depend on TSI. More TSI means more heat, immediately and directly, which causes more surface warming and triggers the usual and well-known feedbacks to surface warming — the result is the dependency in Eq. (1) above (with which the IPCC would agree).

      We also note that on lower frequencies surface temperature tracks TSI rather well — hence the flat frequency response for frequencies less than 1/200 of a cycle per year. It is a commonplace observation that surface temperature simply tracks smoothed TSI, to a reasonably good degree.

      70

      • #
        Greg Goodman

        “It is a commonplace observation that surface temperature simply tracks smoothed TSI, to a reasonably good degree.”

        Yes, “tracking” is usually a sign of poor correlation, as is “to a reasonably good degree”.

        Both those terms are warning bells to me. The other one in the set is “robust”.

        00

  • #
    Timboss

    An old post of yours hasn’t turned out so well.

    http://joannenova.com.au/2012/01/global-cooling-coming-archibald-uses-solar-and-surface-data-to-predict-4-9c-fall/

    I expect this new effort will go the same way given that you can’t get it published in a reputable peer-reviewed journal.

    129

    • #

      That post was by David Archibald. Same first name, but my last name is “Evans”. Different bloke.

      Who says I cannot get this published in a reputable peer-reviewed journal? Do you know something about the process that I don’t? As mentioned in the introductory post (near the top):

      In its complete form this work has evolved into two scientific papers, one about the modelling and mathematical errors in the conventional basic climate model and fixing them (carbon dioxide isn’t the culprit), and another for the revamped notch-delay solar theory (it’s the Sun). Both are currently undergoing peer review. These posts are useful in airing the ideas for comments, and testing the papers for errors.

      230

      • #
        wert

        Love your wit, and to tell you the truth, I earlier mixed you two up for some reason. I was wondering why you had such web pages – they were Archibald’s.

        Archibald Bunker is the other dude though.

        20

    • #
      ianl8888

      I expect this new effort will go the same way given that you can’t get it published in a reputable peer-reviewed journal

      Different bloke

      The constant, unrelenting stupidity is why I despair

      I’m still undecided on David Evans’ “New Science” but David Archibald is not the reason for that

      Hindcasts and forecasts are next, I hope

      80

  • #

    Great progress, and three years or more well spent by David.
    I’m looking forward to the post that attempts to narrow down the underlying mechanisms, since I have some ‘skin’ in that game.

    100

  • #
    Greg Cavanagh

    David, I think I’m going to have to buy a book and do some third-party learning to have a hope of following this filter theory.

    I have a good understanding of hydraulics and acoustics. Could you please recommend a book or three on Fishpond suitable for a novice on the subject? I’ll be happy to buy one or two just to get a handle on what you’re doing here. Electricity has always been black magic to me; I’ve never understood anything about it. Thanks.

    10

    • #
      Greg Cavanagh

      I should also clarify: something specific to filters, suitable for an electrical novice. If there is something like that out there, you would have a better pick than my random buying of books. 🙂

      10

    • #

      Greg, have you tried reading the systems document I posted? It explains everything needed here. Although parts of it are complicated (because the maths gets heavy), the early bits are a minimum of what you need to know and are simple enough — if you are comfortable with calculus.

      If you are not comfortable with calculus, I don’t know where to go. Like most who have learned filters, I learned calculus in high school and university before learning about filters and systems theory.

      There might be some good sources on the web — if you find any, please let us know.

      20

      • #
        gai

        A calculus book I found very readable in college is

        Calculus and Analytic Geometry, 4th Edition (Addison-Wesley Series in Mathematics)

        As the comments at Amazon show, this is a truly superb book. My college prof was hopeless, so I never went to lecture for the entire two years I took calculus; instead I read this book and got Bs with no problem. Make sure you get the 2nd to 4th edition; newer editions are not written by the same person. (Interesting that other authors kept the Thomas name.)

        00

        • #
          Greg Cavanagh

          Thanks. I bought the 7th edition of that Calculus book and read up to Field Theory. I got lost at that point. I also bought a book on Algebra from the USQ. (I was writing a 3d game and needed to understand vector mathematics and plane equations).

          I don’t read every link you put into the articles, there is simply so much reading, and for the most part I can understand the assumptions and consequences without doing the math.

          I’ll give the doc a read. It does look pretty heavy.

          00

        • #
          Greg Cavanagh

          That is so cool. With my acoustics study and music production hobby, I’m familiar with Low Pass Filters, though I never knew how they worked.

          Now I understand what your Notch Filter is. It’s all about phasing of part of the spectrum. Much appreciated. I’ll continue reading.

          00

  • #
    Roy Hogue

    Now to the meat and potatoes. Why the notch? My curiosity isn’t ever satisfied to know just the what. It also wants to know the why and the how. And science seems so limited in that it gives us the what and leaves out the why and the how.

    Maxwell’s equations are an example. They show us what but provide no information on the why or the how. We build and use very sophisticated communication systems because those equations show us what’s there to be used. But the underlying mechanism and the reason for it is a mystery.

    50

    • #
      el gordo

      ‘Why the notch?’

      Dunno, but I’m eagerly awaiting the debate between Leif and Vuk on this question before deciding my position. In the meantime …

      ‘The stable, narrow natures of the Gleissberg and other periodicities suggest that there is a strong “frequency control” in the solar dynamo, in strong contrast to the variable nature (8 – 15 years) of the Schwabe (11-year) solar cycle.’

      Ken McCracken

      20

      • #

        Vuk I believe is an electrical engineer. I would put his knowledge and experience above that of Leif whose predictions have been shown to be wanting.

        50

        • #

          Have to second your Leif vs Vuc observation. I don’t always agree with Vuc but I get the feel of an open mind. Leif’s brain has calcified.

          A generalization:

          Leif “This is what I KNOW you ignoramus.”
          Vuc “That is an interesting question.”

          10

      • #
        sophocles

        strong “frequency control” in the solar dynamo,

        …now thought to be a dual dynamo. That’s going to complicate projections a bit.

        10

    • #
      Tanner

      From the 1950’s

      “The Sun has a tendency toward instability within the gaseous envelope. As the rotational velocity is slowing down, magnetic flip-flops occur in a cycle that varies between seven and seventeen years, averaging about eleven years. Magnetic polarity is reversed at each cycle, and sunspots are triggered by magnetism, emitting brilliant and lethal solar flares. These flares erupt through the photosphere to spew out ionized hydrogen—streams of fast protons and electrons—toward the planets in the form of deadly solar winds.” The Earth’s ozone layer can be destroyed during a cycle of the Sun’s expansion. “The electron density in the Earth’s ionosphere waxes and wanes in step with the eleven-year cycle. Likewise, the auroral displays aloft wax and wane as planet Earth respires within her star’s atmosphere.

      “Earth lies well within the great radiating atmosphere of the Sun. The tenuous corona extends so far from the visible disk that Mercury, Venus, Earth and its Moon and Mars are enveloped in it. Undoubtedly, intensified radiations emitted from the Sun will disturb the Earth’s ionosphere, increasing ionization of the outer atmosphere. The Sun affects the weather all the time as the wind patterns change. The physics of planet Earth cannot be studied without reference first to her star, the Sun.”

      “At the poles, the Earth’s magnetic fields dip earthward in a funnel-shaped pattern. As solar particles spiral down toward Earth in the magnetic funnels above the poles, they hit and excite atoms in the upper air that give off the flashing spectral light of the auroras. These streams of charged particles from the Sun wax and wane with the eleven-year solar cycle as flares burst from the surface and bombard the Earth with radiation channelled toward the poles along the lines of force of the Earth’s magnetic field. These particles are trapped and swept from pole to pole, and they are moving fast enough—and in sufficient numbers—to excite the molecules of the ionosphere to emit their characteristic luminous spectra. The areas on the Earth’s surface from which auroral displays aloft are observed wax and wane with the eleven-year cycle of solar activity. In its output of light in the far ultraviolet range of the spectrum, unobservable at the Earth’s surface, the Sun is a variable star.

      “The electrified curtain of the auroras cuts right across the Antarctic continent, as the southern auroral zone is centered about the south magnetic pole. The south pole itself is on a high, windless plateau, where powder snow lies unbroken above hundreds of meters of compacted ice. Antarctica is the coldest and windiest region. It is much colder than the Arctic, where the relatively thin ice over the Arctic Ocean allows warming of the atmosphere from below, while the air above Antarctica has no such central heating system. It lies over a massive continent, and it is literally in an ice age. Temperatures of 100 degrees below freezing can result. As a result, the insulating roof of the troposphere disappears in midwinter, leaving the lower atmosphere open to outer space.

      The Sun’s corona extends so far from the visible disk that the Earth is enveloped in it, so you can understand how the weather on Earth is controlled by the Sun. Magnetic storms in the ionosphere are worldwide since the ring current encircles the Earth and the global circulation of the atmosphere is powered by the Sun.”

      😉

      60

      • #

        Undoubtedly, intensified radiations emitted from the Sun will disturb the Earth’s ionosphere, increasing ionization of the outer atmosphere.

        Long-distance radio transmission in the roughly 3 to 30 MHz range is strongly influenced by ionospheric ionization. This has been well known at least since the 1930s, when sufficient experience with those frequencies had been acquired. Amateurs were always looking towards the peak of a sunspot cycle in order to do DX. When the sun died down, radio became more of a local affair. It was the American Radio Relay League for a reason.

        20

  • #

    If the Warmistas were not so obsessed and overwhelmed by their cheese-holed CO2 theory, they could begin to see in David’s work the start of a real explanation for the origin of the main climate driver, also known as the Sun. A few more failed Climate-Junkets, the running out of other people’s money and a cooling trend could eventually wedge the Warmistas from their pet theory.

    141

  • #
    Don B

    David, your series of posts has prompted me to revisit Nigel Calder’s “Magic Universe,” and the chapter Ice-Rafting Events, beginning at page 417 in my copy. Various scientists have correlated cool period ice-rafting events with weak solar activity. Two excerpts:

    ‘This abrupt climate change occurred simultaneously with a sharp rise in radiocarbon starting around 850 BC and peaking around 760 BC.’ he [Bas van Geel] said. ‘Just because no one knows for sure how the solar changes work on the climate is no excuse for denying their effect.’

    and

    [Gerard] Bond and his team reported in 2001: ‘Our correlations are evidence, therefore, that over the last 12,000 years virtually every centennial time scale increase in drift ice documented in our North Atlantic records was tied to a distinct interval of variable and, overall, reduced solar output.’

    80

  • #
    Bernie Hutchins

    Friends,

    In her intro to this Part 21 above, Jo wrote:
    “….. Notch filters usually do not involve a delay, but they could,….”

    This is incorrect. Most simply, ALL filters (including notch) DO have a delay. To qualify this a bit, I add that the filter I am talking about must be physically real (thus causal) and that the independent variable of the signal is time (else “delay” is meaningless – it’s a shift independent of time). The filter’s delay (often specifically the “group delay”, −d(phase)/d(omega)) is not constant with frequency except in the important case of “linear-phase”. In the case where all the data is at hand (as a “record” rather than as an incoming stream) we often use a non-causal linear-phase filter to smooth out noise (the famous “moving average” is such a case).

    David has indicated that the phase of the inferred transfer function is indecisive due to phase jumps. True. Nonetheless, it would seem that the group delay (the derivative) could still be calculated (or estimated) almost everywhere. We need to know roughly what value the delay IS and/or NEEDS to be. Originally, David favored something like 7 years. (?) Is there a current estimate?

    Further, I believe the physical mechanism of any notch is not discussed (is this upcoming?). David, citing simplicity, chose a 2nd-order bi-quadratic. This is recursive and requires a feedback mechanism, which is harder to realize than a non-recursive one. As I pointed out in comments to Part 20, a delay/add contrivance (physically trivial – just destructive interference) can produce a notch:

    http://electronotes.netfirms.com/AN422.pdf

    This has a delay, for a notch at 1/(11-years), of 5.5 years.
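
    A minimal numeric sketch of that delay-and-add notch (in Python with numpy; the 5.5-year delay and equal weights are taken from the note above, and this is an illustration, not David’s model):

        import numpy as np

        D = 5.5                                  # assumed delay, in years
        for period in (22.0, 11.0, 5.5, 2.0):    # test periods, in years
            f = 1.0 / period                     # frequency, cycles per year
            # frequency response of y(t) = (x(t) + x(t - D)) / 2
            H = 0.5 * (1 + np.exp(-2j * np.pi * f * D))
            print(f"period {period:5.1f} yr -> gain {abs(H):.3f}")
        # gain is 0 at the 11-year period (the delayed copy arrives 180 degrees out of phase)
        # and 1 at the 5.5-year period (the two copies reinforce)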

    One last remark. It seems to me that David is sometimes using the term “noise” not as specifically restricted to some random component in the data, but (with reference to his OFT) to include things perhaps better described as “distractions” or artifacts (zero-padding, windowing, leakage). Is this correct? The DFT is certainly not noisy in the random sense.

    Best wishes and thanks.

    Bernie

    50

    • #

      Bernie, by “delay” Joanne meant a separate delay mechanism in addition to the notching mechanism (of the kind required to make the combined unit approximately causal if the notching mechanism on its own is non-causal).

      Yes, the group delay might turn out to be involved. We are not up to that stage of refinement or resolution yet! The data is too fuzzy.

      The delay originally found, and the one I have always advocated, is about 11 years. I might have mentioned 7 years as the minimum delay required to make a non-causal notching mechanism approximately causal.

      I am using “noise” in the sense of anything that isn’t signal. If you start with a pure signal and then pad the time series, you have a new time series that is noisy — it is the combination of the signal and something else.

      30

  • #
    Dan Pangburn

    The time-integral of sunspot number anomalies combined with a simple approximation of the net effect of ocean cycles achieves a 97% match with measured average global temperatures since before 1900. http://agwunveiled.blogspot.com Everything not explicitly included (such as aerosols, volcanos, non-condensing ghg, ice changes, uncertainty in measurements, heating from earth’s core, heat stored in ocean depths, etc.) must find room in the unexplained 3%.

    40

  • #
    Kim

    Hum is removed by high pass filters. Notch filters are typically used to receive CW or SSB transmissions. A notch filter is a low pass filter combined with a high pass filter.

    00

    • #
      Bernie Hutchins

      Kim said November 28, 2015 at 8:32 am three things:

      (1) “Hum is removed by high pass filters.”

      True – not by a notch, because audible power-line “hum” comes not so much from 60 Hz (or 50 Hz), which is nearly inaudible (see the Fletcher-Munson curves), but from the 2nd and 3rd harmonics. Remove just the fundamental and it sounds much the same. See also:
      http://electronotes.netfirms.com/AES5.PDF

      (2) “Notch filters are typically used to receive CW or SSB transmissions.”

      Not that I know of. You need a spectral shift, quadrature phase differencing networks (all-pass), multiplies, and adds.

      (3) “A notch filter is a low pass filter combined with a high pass filter.”

      Yes, but lacking in details!:
      http://electronotes.netfirms.com/AN413.pdf

      20

      • #
        Michael Hebert

        Kim and Bernie,

        I have a modicum of experience with the use of filters for CW and SSB. I have been a licensed ham for over 50 years, in addition to working in commercial two-way radio for about 20 years. CW filters prior to the solid-state era were primarily resonant peaking filters. Since the advent of SSB in the ’60s, most of the filters in use have been bandpass types (300-2700 Hz for SSB, and 200 to 500 Hz wide at a nominal 700 Hz center frequency for CW). Many of the old-timer ops prefer low-pass filters for CW with a −3 dB cutoff around 600 Hz or thereabouts.

        I have also built and operated ULF/ELF/VLF magfield and e-field monitoring equipment that required the use of notch filters to deal with powerline hum. Contrary to what Bernie mentions, my experience has been that the fundamental frequency must be heavily notched, along with as many of the odd-order multiples as practical. Even-order products were not as troublesome in my experience. Typically the spectrum from DC up to as high as 15 kHz is an absolute swamp of power-grid harmonic interference. You would think that the spectrum below the power-grid fundamental would be quiet but… nope! Subharmonics are a strong interference source at the f/n frequencies.

        Oh… a notch filter can also be a parallel resonant filter connected in series between input and output. It is not commonly used, due to difficulties with implementing sufficient isolation between the ports; nevertheless, it does find application.

        50

      • #
        Bernie Hutchins

        Michael – I agree.

        However, my specific points of reference were to David’s mention (in the main post) of AUDIO hum, to my mention of the Fletcher-Munson curves, and to my 1982 Aud. Eng. Soc. paper (with Walter Ku). It is certainly true that if you hear the hum, and then notch out 60 Hz, you will STILL hear much the same hum. In fact, you can SEE a huge 60 Hz AC signal on the scope face which has a bit of “fuzz” (being the audio signal) – hum like crazy. Notching out the 60 Hz makes the scope trace LOOK better, but it SOUNDS much the same. This is why a “comb filter” is used (see our AES paper). (If the AC signal actually causes clipping, little can be done to help.)

        So it was about audio.

        In OTHER applications, perhaps your AC field cancelling apps, the 60 Hz DOES matter. The receiver (analyzer) is NOT the ear (a high-pass filter). In such cases, as you probably know, an “adaptive filter” (usually LMS) with a “reference signal” (like from the AC line itself) is what is used. Not really a notch, but to the same purpose.

        Thanks for the opportunity to clarify.

        Bernie

        30

        • #
          Michael Hebert

          Bernie,

          What you mention regarding the adaptive comb filters is very definitely true. What gets dodgy about using them is obtaining a suitable reference signal. When I was doing ULF/ELF/VLF magfield monitoring and recording, the phase differences between a locally derived reference and the constantly varying “signal” that the sensor was picking up often made it impractical to use the filter. It seemed to contribute more interference to the spectrum than it removed. IOW, it was often easier to see the spectral content close in to the interfering power-grid frequency without a filter than with one.

          I even tried using a second sensor and associated circuitry as the reference but that was even trickier to use. Too much GIGO!

          20

          • #
            Bernie Hutchins

            Michael –

            Right. Getting that nasty reference right. Question – how long did YOU spend using your scope to look at the AC waveform from the power lines BEFORE you decided that the scope was telling the truth? For myself, I KNEW it was supposed to be a (perfect – I guess) sinewave, so why was my scope lying to me! Why, it’s a wonder a light bulb even comes on.

            Sounds like we have been down a lot of the same roads.

            Bernie

            20

            • #
              Michael Hebert

              Bernie,

              I didn’t have a scope at the time. What I had to work with was a multi-channel ADC and software. When I was trying to use the power grid as reference I was picking it off a transformer plugged into a wall outlet. Much too noisy to really be of any use. Any time a transient came along my PLL would set off hunting again!

              The best combination I had was for magfield sensing, using a large degaussing coil as the pickup, feeding the input of an LT1028 in negative-impedance-converter configuration, then into 4 stages of Butterworth active filters with cutoff at 15 Hz. Between the coil and the LT1028 I had a microwave oven transformer repurposed as an absorptive filter: the signal fed through the filament winding, and the secondary winding was resonated to 60 Hz. I got somewhere in the neighborhood of 26 dB rejection of the 60 Hz with that combo.

              The whole idea behind it was to record the magfield to see if earthquake precursor “signals” could be spotted. The thing that made it a futile effort was trying to do it from within an urban environment. I eventually gave up on the idea but it was fun while it lasted. The one part that I may someday be tempted to try again is recording the variations below 1Hz. I did that for several months and shared the data with Ray Tomez down in New Zealand. He was intrigued by it and was of the opinion that he could spot variations due to interplanetary influences in the data. Eventually I had to move to a different apartment and had to put it all away. Maybe in future I’ll give it another go.
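
              A rough digital stand-in for that analogue chain (in Python with scipy; the sample rate, filter order and notch Q are assumptions for illustration, not Michael’s actual circuit values):

                  import numpy as np
                  from scipy import signal

                  fs = 1000.0                                    # assumed sample rate, Hz
                  sos_lp = signal.butter(8, 15.0, btype="low", fs=fs, output="sos")  # low-pass at 15 Hz
                  b_n, a_n = signal.iirnotch(60.0, Q=30.0, fs=fs)                    # notch at 60 Hz

                  def cascade_gain_db(f_hz):
                      # Gain of the low-pass and notch in cascade, at a single frequency
                      _, h_lp = signal.sosfreqz(sos_lp, worN=[f_hz], fs=fs)
                      _, h_n = signal.freqz(b_n, a_n, worN=[f_hz], fs=fs)
                      mag = max(abs(h_lp[0] * h_n[0]), 1e-6)     # floor at -120 dB for display
                      return 20 * np.log10(mag)

                  for f in (1.0, 15.0, 30.0, 60.0):
                      print(f"{f:5.1f} Hz: {cascade_gain_db(f):7.1f} dB")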

              20

              • #
                Bernie Hutchins

                Michael –

                What a delight to suppose that there are still people like yourself who, when presented with an interesting question, think of heading for the workbench and perusing the junk box for something useful.

                Bernie

                20

    • #
      tom0mason

      Kim,
      There are other ways to make a notch…

      A notch in a system’s overall frequency response can also be caused by the introduction of a secondary signal. If this secondary signal is at, or very close to, the same frequency, and is 180 degrees out of phase with the main signal, then a notch is seen in the main signal’s overall frequency response (at this one frequency). Partial or complete cancellation at a particular frequency depends on how closely aligned in frequency, phase, and amplitude the secondary signal is to the main signal. The deepest notch is produced when the secondary signal matches in frequency and level, and the phase is exactly 180 degrees different from the main signal. If either or both signals are not stable then the notch will appear to come and go, or even peak instead of notch.

      As our sun is affected by an abundance of secondary signals (planetary effects, internal cyclic events, etc…) I suspect this may be the more profitable area to investigate.

      Also note that a notch can be produced by reflection of the main signal.
      This is, however, effectively a notch caused by signal delay. The signal travels a 1/4 wavelength outward, is reflected, and returns the same 1/4 wavelength back. Covering this distance takes a finite amount of time — a delay. These two 1/4-wavelength journeys mean that the returning wave is 1/2 a wavelength out of phase with the main signal, or 180 degrees out of phase, so it notches (the amplitude reduces) at that frequency.
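
      A minimal numeric sketch of that cancellation (illustrative values only, not a claim about the Sun): the residual left at the notch frequency depends on how well the secondary signal matches the main one in amplitude and phase.

          import numpy as np

          t = np.linspace(0.0, 10.0, 4000)          # time axis, arbitrary units
          f = 1.0                                   # common frequency of both signals
          main = np.sin(2 * np.pi * f * t)

          def residual(rel_amp, phase_err_deg):
              # Peak amplitude left after adding a nominally opposing secondary signal
              secondary = rel_amp * np.sin(2 * np.pi * f * t + np.pi + np.radians(phase_err_deg))
              return np.max(np.abs(main + secondary))

          for rel_amp, err in [(1.0, 0.0), (1.0, 10.0), (0.8, 0.0), (0.8, 10.0)]:
              print(f"amplitude ratio {rel_amp:.1f}, phase error {err:4.1f} deg -> "
                    f"residual {residual(rel_amp, err):.2f} of 1.00")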

      20

    • #
      Roy Hogue

      Help!

      Can we agree that a notch filter cuts out a specific, usually very narrow part of the spectrum, electromagnetic, sound, etc., — the notch — and passes everything on either side of that notch? That’s the definition of a notch filter I understand. No great technical expertise is required to get the principle. But I’m neither an EE nor a ham operator, so maybe I’ve misunderstood something for the last umpteen years? 🙂

      10

      • #
        tom0mason

        In the broadest sense of everyday use you are correct.
        However consider this —

        You measure the response of your loudspeakers at one place in your room. The response curve looks flat enough, but I then say: move 3 feet in any direction from that spot and the flat response will, in all probability, disappear. Indeed you do this and now find a thumping great notch in the response. What has caused it? It could be the speaker’s directivity pattern (this may partly account for some of the effect), maybe the otherwise flawless amplifier system has now failed (retests show it has not), or maybe you are now at a place where, at some particular frequency or frequencies, the main speaker output and the reflected sounds from the walls, ceiling, floor, etc., converge and partially cancel out.
        Now if you test all around the room you will find this effect nearly always happens — only at very particular “sweet spots” is the response reasonably flat. Surprisingly, no amount of adjustment of an equalizer gets rid of these notch/peak places. They are a product of the aggregation of signals from the speakers *and* the reflected sound(s).
        Similarly with the Sun. What we perceive as a notch is the result of the aggregation of all signal vectors at any instant. The true make-up of the notch may be caused by many effects, as I have tried to outline in my previous reply above. Just inspecting the notch, or its position in the overall band, only tells us about the amplitude and some relative timing attributes of these signal aggregations.
        To fully understand, and scientifically describe, this solar notch issue, IMO we have to account for the amplitude, phase, and frequency of all the other confounding signals — and in this case that means better explaining the internal dynamics of the Sun, including rigorously defining and then accurately measuring what energy (in particles, radiation, magnetic and electrical outputs) our Sun radiates at any instant; accurately assessing all the effects the planets have on our Sun; and finding and accounting for any other confounding signal that the rest of the universe generates.

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

        P.S. If you try the experiment with the speaker, for best effect use a sparsely furnished room with hard surfaced floors, and hard, bare, flat walls. Such rooms ‘sound’ very poor when listening to music or speech.

        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

        P.P.S. David Evans’ processing of the solar data highlights, and makes very apparent, the location and amplitude of the notch, but tells us little of its nature or cause. IMHO resolving that will take a considerable scientific effort.

        40

      • #
        Bernie Hutchins

        Roy –

        Quite correct – that’s the idea.

        But – with regard to SYNTHESIS (engineering design) – it comes down to how one actually does this in a way that works. Relatively speaking, there are a good number of approaches. Each has consequences and limitations. But we are looking right at the attempt – MAKING it happen. Almost trivial to get our minds around.

        However, with regard to ANALYSIS (where did this observed “dip” in the spectrum come from?), we are trying to understand what is happening based (too often) on partial evidence. Something else is MAKING it happen. In such a case, a plausible mechanism for the notching effect, consistent with all related knowledge, is very useful, if available.

        Bernie

        30

  • #
    Robk

    Interesting piece, the anticipation.
    In my view, your reworking of the model architecture is more than sufficient to sink the IPCC argument on their own grounds. Your pursuit in using your talents to tease out any possible meaning of the observations (data) to date is commendable, especially as you do so in a very public manner, which allows many of the public to share your journey in search of knowledge.
    Thank you for sharing.
    I found a comment on the J Curry blog which I thought pertinent:

    “It is easier to find a score of men wise enough to discover the truth than to find one intrepid enough, in the face of opposition, to stand up for it.”
    —A. Hodge

    I’m looking forward to the next post.

    91

  • #
    pat

    O/T apologies, but I’ve posted this on Jo’s previous “Life copes” thread. It might make a good weekend thread:

    27 Nov: BBC: Matt McGrath: COP21: Public support for tough climate deal ‘declines’
    Public support for a strong global deal on climate change has declined, according to a poll carried out in 20 countries.
    Only four now have majorities in favour of their governments setting ambitious targets at a global conference in Paris.
    In a similar poll before the Copenhagen meeting in 2009, eight countries had majorities favouring tough action.
    The poll has been provided to the BBC by research group GlobeScan (LINK).
    Just under half of all those surveyed viewed climate change as a “very serious” problem this year, compared with 63% in 2009…
    The findings will make sober reading for global political leaders…
    http://www.bbc.com/news/science-environment-34900474

    Also posted in comments on the “Life copes” thread, from #46 on, is my questioning of the claims made by the ACF for the Melbourne climate march numbers. Surely there’s a journalist or sceptical politician in Melbourne who can investigate and report the facts.

    41

  • #
    Peter Shaw

    Dr Evans –
    I follow you, but am not yet expert.

    1. Your Eq 1 central estimate is of the same order as your notch floor. Is this coincidence, or a natural outcome of your analysis?

    2. Please reality-check the following:
    I recast your Eq. 1 as fractional changes: d ln(Ts) ≈ 0.5 d ln(TSI), roughly.
    The simplistic S-B law gives d ln(Pout) = 4 d ln(Ts) = 2 d ln(TSI), or +3 dB (!).
    Using your longer-period estimate gives ~20x.

    Common sense suggests that these high values ain’t physical, and the true ratio should be close to unity. If so, Ts appears a poor estimator of the (atmospheric) system response to any heat source on your timescale of 20 to 1000 yr, which includes the IPCC projections.

    20

    • #

      Peter,

      1. It is a coincidence that the noise floor of the surface temperature observation is about the same as the typical sunspot cycle peak warming due to direct heating, AFAIK.

      2. I don’t see how you got those relationships. Also note that the SB equation cannot be literally applied to Earth, especially not to the surface.

      20

  • #
    Rod

    OK, so we have a brief increase in TSI at the solar cycle peak and a matching cooling of surface temperature. One possible solution:

    At the solar peak the Sun’s magnetic field flips, sometimes both hemispheres together, sometimes separately. A pulse of low solar magnetic field should allow more cosmic rays to enter Earth’s atmosphere and so seed more clouds at the equator. That could trigger a cooling action until the Sun’s magnetic field is restored.

    That might work, but I have no idea how weak the magnetic field gets before, during and after flipping.

    Anyone?

    10

    • #
      Robk

      And how does all that relate to Earth’s magnetic field and its periodic inversion?
      What are the properties of rarefied gases subjected to plasma particles and intense radiation while the whole lot is moving at speed in a magnetic field?

      20

    • #

      Rod, except that cosmic rays and cloud cover troughed at the sunspot peaks of 1990 and 2000, as noted in passing above:

      If low-altitude cloud cover troughed at every sunspot maximum (as suggested by Fig. 2-12 of Lockwood et al.’s Earthshine Mission case from 2004), perhaps in response to troughs in galactic cosmic rays at sunspot maxima, then presumably albedo would also trough, and surface temperature peak, during sunspot maxima. This would make the empirical fact of the notch even more remarkable.

      We initially thought that too, but it ain’t so.

      20

      • #

        Yes, I’d noticed that problem for the cosmic ray idea and seem to recall mentioning it at some point.
        A better fit is between global cloudiness and changes in jet stream behaviour.

        30

        • #
          el gordo

          A better fit is between global cloudiness and changes in jet stream behavior.

          Now that is a novel idea and worth following up.

          10

          • #
            tom0mason

            el gordo,
            “Now that is a novel idea and worth following up.”

            Erl Happ’s cc1 blog at http://climatechange1.wordpress.com/ has a lot to say on that subject via the effects of ozone production.

            10

            • #

              Erl has been trying for several years to draw attention to meteorological features that show global energy budget and air circulation changes over time. His detailed work is very thorough and informative.

              I corresponded with him from 2008 to 2010 since there was some overlap between our respective ideas.

              In the end he could not accept my proposition that changing tropopause heights and latitudinal climate zone shifting with changing jet stream patterns are the primary cause of global cloudiness changes.

              A pity, because I saw my proposals as consistent with his more general views.

              38

  • #
    Michael Hebert

    I have often wondered over the past couple of years why I have not heard or read anything about a link between plasma physics and climate change. On a whim, I did a quick Google search today and came up with this link…

    http://science.nasa.gov/science-news/science-at-nasa/2013/08jan_sunclimate/

    Seems to me, given the immense amounts of power required to produce the auroras at both poles, that the contributions of plasma changes must be of some significance… or perhaps I’m just an iggernut old man 😉

    30

    • #
      gai

      Is solar variability reflected in the Nile River?
      Alexander Ruzmaikin, Joan Feynman, Yuk L. Yung

      ABSTRACT
      …We investigate the possibility that solar variability influences North African climate by using annual records of the water level of the Nile collected in 622–1470 A.D. The time series of these records are nonstationary, in that the amplitudes and frequencies of the quasi-periodic variations are time-dependent…

      In section 2, we carry out the data analysis of these water level records. In section 3, we analyze the concurrent aurora data that serve as a proxy for solar variability….

      3. Concurrent Aurora Record

      [10] For almost 1,500 years aurora appearances at mid latitudes in Europe and Asia were carefully recorded because of the belief that they portended important events such as droughts and deaths of kings. These records have been collected and evaluated for accuracy (see Siscoe [1980] for a comprehensive review of sources of auroral data).
      Auroras are caused by very high speed solar wind disturbances produced in association with fast coronal mass ejections and major solar flares. The rate of auroras is strongly correlated with the sunspot number (correlation coefficient 0.85, see Feynman [1988]) and thus is an excellent proxy for solar variability.

      5. Conclusions

      [16] Low-frequency 88-year variation is present in solar variability and the Nile water records during the same time period (622–1470 A.D.). The 11-year cycle is seen in high-water level variations, but it is damped in the low-water anomalies, apparently because of the influence of lakes and swamps at the start of the White Nile. On the basis of this finding we suggest the following possible link between solar variability and the Nile: (1) Solar UV variations act in the stratosphere to modulate the Northern Annular Mode (NAM); (2) The NAM’s sea level manifestation (NAO) affects the air circulation over Atlantic and the Indian Oceans during high levels of solar activity; (3) Variations of this air circulation influence rainfall in eastern equatorial Africa at the Nile sources. At high solar activity, the air is descending there and conditions are drier, with the opposite effect occurring at low solar activity. Further investigations of this link will shed more light on the connection between the Sun’s variability and the Nile…

      20

  • #
    Manfred

    — at a fraction of the cost of the billion dollar models

    At an infinitesimal fraction of the multi-billion dollar input cost of the models combined with their spurious multi-trillion dollar resultant, namely the unaffordable and unsustainable output policies that fuel marxist driven eco-shackling and wealth redistribution.

    I find this compelling post utterly riveting, thank you, including that magnificent drive-by shooting @ #8, by a deranged Steerpike.

    Is it The Spectator’s gossip columnist taking a pot-fueled shot, given the recent article run by The Spectator about Prof. Judith Curry? Is The Spectator not in clima-harmony?

    41

  • #
    Clyde Spencer

    David,

    I haven’t fleshed this out yet, and it is time to go to bed. However, off the top of my head: UV increases significantly during sunspot peaks. This increased UV creates more ozone, principally in the equatorial latitudes. This ozone spreads out towards the poles. Once created, the ozone is opaque to UV, and additional UV heats the ozone. Thus, UV that might formerly (during sunspot minima) have made it deeper into the atmosphere and resulted in heating there is constrained to the upper reaches of the stratosphere, which gets heated. One can then expect UV heating to oscillate between the upper stratosphere (and radiate back out into space?) and the lower stratosphere and upper troposphere, where it heats the nitrogen and oxygen by conduction (collisions). Would this fit into the notch?

    10

    • #
      • #

        David,

        If the notch is correct (I expect that it is) then climate models will have to break solar radiation into bands and no longer use TSI as the metric.

        I think it will be a hoot to see the current models relegated to phlogiston status.

        Expect a LOT of push back. There is big money riding on those models.

        I have been of the opinion for a few years that CO2 has no effect on climate. Before then I was a lukewarmer. My current personal understanding is that the CO2 relationship posited by the models (and overestimated by them) is an alias of the ocean cycles (roughly 30 years).

        40

        • #
          gai

          “….climate models will have to break solar radiation into bands and no longer use TSI as the metric….”
          …………

          I have always been of that opinion; however, I am a chemist.

          Different wavelengths of sunlight affect NOx and Ox in the atmosphere in different ways. For example, the ratio of 240 nm to 320 nm radiation is important.

          240 nm = ozone formation
          Sunlight + oxygen (O2) ===> O + O
          O + O2 ====> O3 (ozone)

          320 nm = ozone destruction
          Sunlight + ozone (O3) ===> O2 + O
          O + O3 ====> 2O2

          Formulas from: (wwwDOT)oxidationsystems.com/products/ozone.html

          Once you get into ozone there are tons of papers. Here are three:

          Magnetic field changes, NOx and Ozone by James A. Marusek. Nuclear Physicist & Engineer. U.S. Department of the Navy, retired.

          http://phys.org/news/2013-01-solar-variability-terrestrial-climate.html

          Several researchers discussed how changes in the upper atmosphere can trickle down to Earth’s surface. There are many “TOP-DOWN” pathways for the sun’s influence. For instance, Charles Jackman of the Goddard Space Flight Center described how nitrogen oxides (NOx) created by solar energetic particles and cosmic rays in the stratosphere could reduce ozone levels by a few percent. Because ozone absorbs UV radiation, less ozone means that more UV rays from the sun would reach Earth’s surface….

          CHARACTERISTICS OF THE GENERAL CIRCULATION OF THE ATMOSPHERE AND THE GLOBAL DISTRIBUTION OF TOTAL OZONE AS DETERMINED BY THE NIMBUS III SATELLITE INFRARED INTERFEROMETER SPECTROMETER

          40

          • #

            Conventional climatology has it that an active sun creates more ozone in the stratosphere, which is why the expanding ozone ‘hole’ and the cooling stratosphere (less ozone) were deemed a man-made problem during the recent period of a more active sun.

            The feature that I proposed and which was supported by certain observations was that an active sun increases ozone below about 45km and above the equator but at the same time reduces ozone above about 45km and above the poles.

            As far as I know nobody else has inserted that factor into any other model or hypothesis.

            I deemed that scenario essential if one were to produce the latitudinal climate zone shifting and associated changes in jet stream behaviour that were actually observed from MWP to LIA to date.

            In order to get such latitudinal shifting over time it is necessary to alter the gradient of tropopause height between equator and poles and stratospheric ozone variations are the only means I know of to achieve that.

            Altering that gradient changes global cloudiness and thus the proportion of solar energy able to enter the oceans.

            412

            • #
              el gordo

              ‘As far as I know nobody else has inserted that factor into any other model or hypothesis.’

              Good work Stephen, your theory makes perfect sense and needs a wider audience.

              I’ve been scouring through J M Stratton’s Agricultural Records AD 220-1977, looking for cooling signals in the UK, and the key is a preponderance of cool wet summers.

              The wayward jet stream is clearly a major player in creating extra cloudiness in some places and drought elsewhere, depending where you are and what the oscillations are doing.

              26

              • #

                Thanks.

                However, I don’t think one can offset increases in cloudiness in one place with decreased cloudiness elsewhere when the jets wave around more in latitudinal loops.
                If the length of the lines of air mass mixing increases then there will inevitably be more clouds overall.

                20

              • #
                el gordo

                OK thanks.

                00

  • #

    It looks like I tuned in just in time to get back in the action.

    BTW for some reason – although I am signed up – I’m not getting notification of new posts.

    00

  • #
    doubtingdave

    From my basic layman’s understanding of this subject: is the problem with the established climate scientists that they have focused too much on infrared radiation and not enough on the effects on our climate of changes in the amount of shortwave UV that we receive at Earth’s oceans? Longwave infrared cannot easily pass the surface tension of water, but shortwave UV does, and so can pass heat/energy into the ocean and create warm blobs of water that have a delayed notch effect on our climate. Understanding TSI and how it changes during solar maximum is not really about the amount of UV the ocean receives directly from small increases in TSI, but about changes in Earth’s atmosphere, such as cloud cover and ozone levels, that vary the amount of UV that reaches the surface of the ocean. Have I got that about right?

    01

    • #

      Dave, two basic problems with conventional climate science.

      1. Because in 1896 they couldn’t work out how extra CO2 redistributes OLR between the emitters, they assumed that it was the same as for extra absorbed sunlight. This led to the architecture of the conventional basic climate model, in 1896, which is deeply flawed and shows a high sensitivity to extra CO2, because they apply the strong solar response instead of the weak CO2 response to the CO2 forcing. Fixing the flaw and using modern climate data (unavailable in 1896) shows the sensitivity is an order of magnitude less. Heat due to extra CO2 just reroutes — I now have some pictures which explain it well, in the latest version of the synopsis (pp. 17, 18). See http://sciencespeak.com/climate-basic.html on the error.

      2. They ignore the Sun, except for the direct heating effect of the minor fluctuations in the solar radiation (which are too small to matter much). So they never investigated or included the indirect solar effects. Since externally-driven albedo (EDA) causes at least twice as much warming as the direct effect of TSI fluctuations (post 10), this leaves their climate models trying to explain global warming using a CO2 driver but not the real driver.

      The problems are related:

      a. The conventional basic climate model estimates the sensitivity to carbon dioxide as ~2.5 °C (the equilibrium climate sensitivity, or ECS). But this is an overestimate: fixing the faulty architecture shows it is less than 0.5 °C.

      b. A sensitivity of ~2.5 °C very roughly accounts for observed warming since 1910. To believers in the conventional basic climate model, this implies that increasing carbon dioxide alone can explain 20th century warming.

      c. So GCMs use increasing carbon dioxide as the dominant driver to reproduce 20th century warming. GCMs that do not succeed in this task are not published (see p. 32). Hence the GCMs give the same crazy answer as the conventional basic model that assumes extra CO2 is just like extra sunlight.

      I’m not sure if the oceans storing heat has much to do with this, but it might.

      71

      • #
        doubtingdave

        Thanks Dr Evans, that helps a lot. I can’t help but see that the CGWF (believer) scientists look back on the work of Svante Arrhenius as a type of religious creationism, as in: in the beginning there was nothing but a vacuum until God sprinkled some light and matter into it, and the universe was born; and in the beginning Arrhenius created a vacuum in a laboratory, added some light and CO2, and AGW science was born 😉

        41

      • #
        Alexander Carpenter

        NOTE TO MODS: the link for synopsis is absent in Comment 32.1

        10

  • #
    John Watt

    Sorry for being trivial, but both Evans and now Shorten are referring to various works of Fourier when presenting their cases re the role of CO2 in climate change. Their conclusions are diametrically opposed. Clearly Evans is across the error in the Fourier conjecture referred to by Shorten. How can Shorten be made aware of Evans’ conclusions? Via Jensen MHR?

    00

  • #
    Andrew

    Regarding humidity in the synopsis: Pielke Jr tweeted the other day that NVAP is to return in 2016.

    00

  • #
    Kim

    The Earth moves relative to the Sun from a perihelion of 147,095,000 km to an aphelion of 152,100,000 km. This corresponds to a perihelion sphere of 2.718977947×10¹⁷ km² in area and an aphelion sphere of 2.9071557×10¹⁷ km² in area. As such, the heat received from the Sun at aphelion is lower than at perihelion by the ratio 2.718977947×10¹⁷ / 2.9071557×10¹⁷, a decrease of about 6.47%. Further, the more minor variations of all the various physical aspects of the movement of the Earth cause longer-term cycles. Then there is its movement through the galaxy and the local intergalactic area.

    The orbit changes over time due to gravitational influences.

    http://www.timeanddate.com/astronomy/perihelion-aphelion-solstice.html
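
    For anyone who wants to check that figure, the 6.47% follows directly from the inverse-square law (a quick sketch in Python, using the distances quoted above):

        r_p = 147_095_000.0   # perihelion distance, km (as quoted above)
        r_a = 152_100_000.0   # aphelion distance, km
        drop = 1.0 - (r_p / r_a) ** 2          # received flux falls off as 1/r^2
        print(f"decrease in received solar flux, perihelion to aphelion: {drop:.2%}")
        # prints 6.47%, matching the figure above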

    00

  • #
    J Martin

    “Suggesting that the long term influence of the TSI on ΔTS is ~14 [4.2, 27] times larger than the direct heating effect of TSI”

    Presumably that works for cooling as well, something we may well see over the coming years.

    00

  • #

    “Amplitude is the amplitude of the output sinusoid at the frequency divided by the amplitude of the input sinusoid at the frequency.”

    Amplitude! David? Do you mean the value of a 1D length, or 2D power, a product? Phase is the phase of the output sinusoid at the frequency less the phase of the input sinusoid at that frequency. Now your phase is a difference of something, rather than an amplitude product! Sounds a lot like the “complex conjugate” of the same thing!

    05

    • #

      Will, the first “amplitude” is the amplitude of the transfer function at a given frequency. The value of a transfer function at a frequency is a complex number; I mean the amplitude of that complex number.

      The second and third “amplitudes” are of the sinusoids. The amplitude of the sinusoid A cos (2 pi * f * t – phi) is A, and its phase is phi. Amplitude squared is power.
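
      For anyone wanting to see those definitions in action, here is a minimal synthetic example in Python (an ordinary FFT on two made-up sinusoids; not David’s OFT or data):

          import numpy as np

          n = 1100                                   # 1100 yearly samples (synthetic)
          f0 = 1.0 / 11.0                            # test frequency: one cycle per 11 years
          t = np.arange(n)                           # time in years

          x = 2.0 * np.cos(2 * np.pi * f0 * t)           # input: amplitude 2.0, phase 0
          y = 0.5 * np.cos(2 * np.pi * f0 * t - 0.8)     # output: amplitude 0.5, phase 0.8 rad

          X, Y = np.fft.rfft(x), np.fft.rfft(y)
          k = int(round(f0 * n))                     # FFT bin holding f0 (exactly bin 100 here)
          H = Y[k] / X[k]                            # transfer function value at f0

          print(f"amplitude |H| = {abs(H):.3f}   (0.5 / 2.0 = 0.25)")
          print(f"phase of H    = {-np.angle(H):.3f} rad   (the 0.8 rad lag put into y)")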

      00

      • #

        Thank you David, just the 3 dB vs 6 dB thingy! Hard to tell from context sometimes. Would not power magnitude and direction be more appropriate in the case of climate/weather?

        00

      • #

        David,
        I notice you are doing your MFT (my DFT) on a finite-length step function. If you are willing to assume the end values are constant in both directions, the rest of the transform, from t(final) to infinity, is analytic. The whole transform is the sum of the two parts. This will also reduce ‘noise’ from the truncation of the finite measurement. If you do not have that analytic form, I can try to find it. It has been 45 years now since I did the single-frequency optical modulation transfer function from measurement of a finite slit image cut with a knife edge! Also, if you interchange the real and imaginary parts of your step function transform, and rescale, you have the impulse response of your system.
        All the best! -will-

        010

      • #
        Bernie Hutchins

        David –

        (1) When I have a notch filter on my bench, I know, in addition to the circuitry, the paths of the input test signal, and of the output. Typically the input would be a flat amplitude sinewave from a function generator (sweeping in frequency), and the output would be viewed as a sinewave on an oscilloscope. Alternatively the input might be a white noise source, and the output device would be a spectrum analyzer. Either could well provide near-certain evidence of the benchtop notch.

        (2) If I found a flat output from either type of test, I would have to conclude that there was no notch filter, or that the input had a bump UP exactly counteracting (in amplitude, frequency, and phase) the notch (highly unlikely).

        (3) If I did not have the notch filter “on the bench”, but rather perhaps only the INFERENCE of a notch based on the ratio of Fourier transforms of two signals, this would be a situation as in the unlikely case (2). But there would be additional possible, and more likely, explanations. Most likely, there is NO input/output relationship between the two Fourier transforms – the two are unrelated – and there is no empirical transfer function.

        (4) The value of clearly demonstrating the existence of the input/output relationship (equivalent to seeing the cables on the workbench), and of giving the basic form of the notching mechanism, should not be underestimated.

        (5) David, you suggest in Part 21 here: “The notch is NOT intrinsic to the main part of the notch-delay hypothesis”, and speak of a “countervailing influence” rather than a notch filter. This may well be the most useful route going forward.

        Bernie
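
        One way to probe point (3) numerically is a coherence check: if two series have no input/output relationship, their magnitude-squared coherence stays near zero at all frequencies, and the ratio of their spectra is not a meaningful transfer function. A minimal sketch with synthetic series (scipy assumed; not David’s data or method):

            import numpy as np
            from scipy.signal import coherence

            rng = np.random.default_rng(0)
            n = 2000
            x = rng.standard_normal(n)                   # stand-in "input" series
            y_rel = np.convolve(x, np.ones(5) / 5, mode="same") + 0.3 * rng.standard_normal(n)
            y_unrel = rng.standard_normal(n)             # series with no relation to x

            for name, y in [("related", y_rel), ("unrelated", y_unrel)]:
                f, Cxy = coherence(x, y, fs=1.0, nperseg=256)
                print(f"{name:9s}: mean coherence {Cxy.mean():.2f}, max {Cxy.max():.2f}")
            # the related pair shows substantial coherence; the unrelated pair hovers near zero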

        00

        • #
          Greg Goodman

          Most likely, there is NO input/output relationship between the two Fourier transforms – the two are unrelated – no empirical transfer function.

          Thanks Bernie. I think you are making the same point I posted above ( later than you but before reading this far ).

          David’s conclusion that a ‘notch response’ is present is erroneous, since he does not have a broadband input signal: only a single broad peak around 11 years. What his in/out plot is most likely evidence of is that there is no direct solar signal in dTS. The spectrum he plots is basically an upside-down SSN spectrum, i.e. 1 − solar.

          00

  • #
    Steve Richards

    I would like to offer a possible explanation of Sun/Earth physical processes which are exposed by this new model.

    1) Take a modern merchant-marine radar (my area of concern). To increase its range/sensitivity, and therefore its effectiveness, each radar pulse transmitted is typically made up of, say, 6 individual pulses in a precisely timed burst. So a 20 kW pulse, 6 times.

    We receive 6 weak echo pulses from a target. If we feed these pulses into a pulse compressor (built from 6 parallel delay lines whose outputs are merged together), the net result is that the echo pulse attributed to our transmission is now 6 times the size (or power).

    We have made the radar system 6 times more sensitive.

    For the above system to work, the receiver delay in the pulse compressor must match the transmit pulse timing.

    If we fed the compressor with a constant stream of pulses, its output would continue to grow, if the period of the pulses matched the delay periods.

    2) The Sun pulses some of its output (how, I don’t know or currently care; it requires more research).

    This change in the Sun’s output has a very slight, even UNMEASURABLE, influence on the Earth on its own.

    However, there are delay mechanisms on the Earth: ocean currents, differential heating of land mass versus ocean mass, the 24-hour cycle, the lunar cycle, the annual cycle, etc., etc.

    If the Sun’s pulse period matched some of Earth’s cyclic periods, one could easily see an effect occurring on Earth that did not appear coupled to a very small change in the Sun’s output.

    If the above is difficult, imagine pushing a child on a swing. A small push causes the child to swing slightly. Keep repeating (pulsing) the push, and the child/swing will increase its range of movement. But you need to push/pulse at the correct time and period for this to happen.

    If there are differential heating effects due to continental land mass/ocean heat capacity then continental drift (slow and rapid) could explain why we have had some major changes in global climate.

    (Please note: each radar manufacturer designs their pulse compression systems differently).
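
    For what it’s worth, a minimal numeric sketch of that delay-line summation (pulse length, spacing and count are made up, and this is not any manufacturer’s actual pulse-compression scheme): the six timed echoes add coherently (6x in amplitude) while uncorrelated noise adds incoherently (about sqrt(6)x), which is where the sensitivity gain comes from.

        import numpy as np

        rng = np.random.default_rng(1)
        n, n_pulses, spacing = 6000, 6, 500       # samples, pulses per burst, pulse spacing
        echo = np.zeros(n)
        echo[:20] = 1.0                           # one unit-amplitude echo, 20 samples long

        signal = sum(np.roll(echo, k * spacing) for k in range(n_pulses))   # 6 timed echoes
        noise = rng.standard_normal(n)

        def compress(v):
            # Delay-line summation: re-align the 6 expected echo positions and add them
            return sum(np.roll(v, -k * spacing) for k in range(n_pulses))

        sig_gain = compress(signal)[:20].mean() / echo[:20].mean()      # coherent gain
        noise_gain = compress(noise).std() / noise.std()                # incoherent gain
        print(f"signal x{sig_gain:.1f}, noise x{noise_gain:.2f}, "
              f"amplitude SNR x{sig_gain / noise_gain:.1f} (about x6 in power)")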

    10

  • #
    Louis Hissink

    Here is a link to Bruce Leybourne’s presentation this year, explaining the latest plasma-connection model of weather and hence climate.

    It is a link to Youtube.

    Looking only at solar TSI and its variation is more or less like explaining how an automobile’s speedometer works. Leybourne’s theory explains how the engine of the climate system works.

    https://lhcrazyworld.wordpress.com/2015/12/12/plasma-induced-weather/

    00

  • #
    ItsGettingHotinHereSo

    Will there be more postings?

    00

    • #

      Yes, postings will resume soon (there are seven more posts to come). Joanne is having a bit of a quiet period and rest after Paris and over Xmas, but we’ll be getting back into it soon.

      00

  • #
    ItsGettingHotinHereSo

    Thank you. Looking forward to the posts.

    00

  • #
    • #

      Sorry J. Martin, but Kimoto’s critique is wrong; he doesn’t understand how the AGW model works, and it does not convince anyone knowledgeable about the topic. I touched on it here, in comment 1 and the replies.

      10

  • #
    Albert Jacobs

    For the last few years I have followed with interest several papers by Silvia Duhau (Buenos Aires) and Kees deJager (Netherlands), lately complemented by Simon Shepherd and Valentina Zharkova (UK), about the behaviour of the sun’s unstable toroidal and poloidal magnetic fields throughout the solar cycle and on longer cyclic patterns.
    It seems that the dual dynamo within the Sun flips in prominence within itself during every cycle, and that the change is accompanied by a difference in magnetic frequency types, mostly seen in UV changes.

    I am surprised that this regular feature has not been mentioned in the above comments as a possible link to the existence of the “notch”.
    Or am I hitting out of the ballpark?

    00