JoNova

A science presenter, writer, speaker & former TV host; author of The Skeptic's Handbook (over 200,000 copies distributed & available in 15 languages).



Hadley excuse implies their quality control might filter out the freak outliers? Not so.

The Met Office Hadley Centre response to #DataGate implied that they do quality control, leaving the impression that they might filter out the frozen tropical islands and other freak data:

We perform automated quality checks on the ocean data and monthly updates to the land data are subjected to a computer assisted manual quality control process.

I asked John to expand on what Hadley means. He replies that the quality control they do is minimal and obviously inadequate, and that these errors definitely survive the process and get into the HadCRUT4 dataset. Bear in mind that a lot of the problems begin with the national meteorological services, which supply the shoddy data, but Hadley seems pretty happy to accept these mistakes. (Hey, it’s not like Life on Earth depends on us understanding our climate. :- ) )

As far as long-term trends go, the site-move adjustments are the real problem, and they create an artificial warming trend. On the other hand, the frozen tropical islands tell us how competent the “experts” really are (not very) and how much they care about understanding what our climate really was (not at all). That said, we don’t know what effect the freak outliers have on the big trends, but then, neither do the experts.

Below, John drills into the details, which show just how badly neglected the dataset is. For data-heads: the freak outliers affect the standard deviation and hence the calculation of the normal range. This is a pretty technical post, here “for the record” and to advance the discussion of what Hadley’s neglect of the data means. For what it’s worth, McLean can manually replicate the process that is documented as the right way to create the HadCRUT4 set and produce the same figures they get, which suggests he knows what he’s doing.     — Jo

__________________________________________

Leaving outliers in the key years means that Hadley won’t filter out real outliers in other years.

Guest Post by John McLean

In its response to the HadCRUT4 data audit, the Hadley Centre says that it applies quality control, and implies that obvious errors couldn’t possibly be in the dataset.  In these comments I will show not only that the errors do get carried through, but also how this happens.

For each calendar month and each station, the long-term average temperatures, which HadCRUT4 people call normals, are the average of the temperatures for that month from 1961 to 1990.  In a similar fashion, standard deviations are calculated across the period from 1941 to 1990. The Hadley Centre method of quality control uses these two values to set upper and lower limits beyond which data will be assumed to be errors and therefore excluded from the main data processing.
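As a hedged sketch of the scheme just described (this follows the logic in the text, not Hadley’s actual code, and the record format here is invented for illustration), the normals, standard deviations and acceptance limits might be computed like this:

```python
# Sketch of the quality-control limits described above. "Normals" are
# monthly means over 1961-1990; standard deviations are taken over
# 1941-1990. A record here is an invented (year, month, temp) tuple.
import statistics

def monthly_normal(records, month):
    """Long-term average ("normal") for one calendar month, 1961-1990."""
    vals = [t for (y, m, t) in records if m == month and 1961 <= y <= 1990]
    return statistics.mean(vals)

def monthly_stdev(records, month):
    """Standard deviation for one calendar month, 1941-1990."""
    vals = [t for (y, m, t) in records if m == month and 1941 <= y <= 1990]
    return statistics.stdev(vals)

def qc_limits(records, month, n_sd=5):
    """Values outside normal +/- n_sd standard deviations are rejected."""
    normal = monthly_normal(records, month)
    sd = monthly_stdev(records, month)
    return normal - n_sd * sd, normal + n_sd * sd
```

Note that the two windows differ: the normal uses 1961–1990, while the standard deviation uses the longer 1941–1990 period, and, as shown below, that standard deviation is computed with any outliers still in the data.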

The presence of obvious errors in the HadCRUT4 data is due to a major failure in this approach.

Let’s start with my audit and Apto Uto with its three monthly mean temperatures above 80°C.  Figure 6.3 (pg 43) of the audit is of data extracted from the HadCRUT4 dataset; I didn’t perform any calculations of the values.  The Figure shows three very abnormal values in the grid cell that covers Apto Uto.

The grid cell in question is entirely over land, which means that we can trace each step in the processing as the HadCRUT4 people describe it.  The grid cell value is simply the average of the temperature anomalies of the stations in the cell that reported data.

We start by determining the temperature anomalies for Apto Uto in those months, which is just a matter of subtracting the long-term averages, as they appear in the file of data for that station, from the monthly values.  The mean monthly temperatures in April, June and July 1978 are 81.5, 83.4 and 83.4 respectively, the long-term average temperatures for those months are 27.8, 27.9 and 28.0, and the anomalies are therefore 53.7, 55.5 and 55.4 degrees.  We repeat the operation for the other stations in the grid cell and then calculate the average anomaly for each month.  The table below shows the anomalies in April and June, the average of those anomalies, and the extracts from both the CRUTEM4 dataset (observation stations only) and the HadCRUT4 dataset (land and sea).

Temperature anomalies (°C)

ID       Station                 Country      Apr-78   Jun-78
800890   Apto Uto                Colombia       53.7     55.5
803920   Blonay                  Colombia        0.3      0.2
800970   Cucuta/Daza A           Colombia       -0.5     -0.2
804250   Mene Grande             Venezuela      -1.2     -0.7
804380   Merida                  Venezuela      -0.3     -0.4
804470   San Antonio del Tach    Venezuela      -0.1     -0.3

         Average of the above                   8.65     9.02
         Extract from CRUTEM4                   8.65     9.02
         Extract from HadCRUT4                  8.65     9.02
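The grid-cell arithmetic is easy to reproduce; a few lines of Python using the station anomalies from the table give back the same extracts:

```python
# Grid-cell value = the plain average of the anomalies of the stations
# in the cell that reported data (values from the table above).
anomalies_apr = [53.7, 0.3, -0.5, -1.2, -0.3, -0.1]   # April 1978
anomalies_jun = [55.5, 0.2, -0.2, -0.7, -0.4, -0.3]   # June 1978

cell_apr = round(sum(anomalies_apr) / len(anomalies_apr), 2)  # 8.65
cell_jun = round(sum(anomalies_jun) / len(anomalies_jun), 2)  # 9.02
```

One station 53 degrees off its normal drags the whole cell average up by more than 8 degrees.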

There can be no doubt that the obviously flawed data for Apto Uto has been used in the HadCRUT4 dataset, regardless of what the Hadley Centre says.  A similar analysis was undertaken for Golden Rock Airport in St Kitts, with its 0°C readings in December of two years.  The CRUTEM4 dataset contained the average anomalies, but because the grid cell covers both land and sea, the data was merged with sea surface temperature data in HadCRUT4, and the result is less clear.

It’s not that the Hadley Centre has been dishonest about this, it’s that the quality checking it uses has a serious flaw.  It works sometimes but not others, and there’s a good reason for that.

Section 7.6 (pg 54) of the audit shows many examples of outliers that are present in the temperature data from stations.  Wad Medani, for example, reports a mean monthly temperature of 99.9°C, and Oruro reports 90.0°C.  Other stations report mean temperatures of 0°C when their long-term average temperatures for the same month are at least 8.2°C and as high as 27.4°C.  Table 7-5 lists 25 examples of monthly mean temperatures that are more than 25 standard deviations away from the long-term average for that month.

(At this point you might like to consider what it means that so many errors exist in the data that national meteorological services supply to the people at the CRU for inclusion in the CRUTEM4 and HadCRUT4 datasets.  Should we trust any data at all from these people?)

The grid cell that contains Wad Medani has five other stations that reported mean temperatures in that month.  Their anomalies ranged from 0.5°C to 2.0°C, against the enormous anomaly implied by Wad Medani’s 99.9°C.  The average anomaly is 12.38°C if we include Wad Medani and 1.1°C without it.  The HadCRUT4 grid cell value is 1.14°C, so it seems that Wad Medani was correctly rejected by the quality control processing used by the Hadley Centre or the CRU.
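Only the two cell averages appear in the text, but the anomaly Wad Medani would have contributed can be backed out of them (an inference for illustration, not a figure taken from the audit):

```python
# Back out the Wad Medani anomaly from the two cell averages quoted
# above: 6 stations averaging 12.38 with it, 5 averaging 1.1 without.
cell_with = 12.38      # cell average including Wad Medani
others_mean = 1.1      # average anomaly of the five other stations
wad_medani_anomaly = cell_with * 6 - others_mean * 5   # about 68.8 degC
```

A single station roughly 68 degrees off pulls the cell from 1.1°C to 12.38°C, which is why the reported HadCRUT4 value of 1.14°C indicates the outlier was rejected in this case.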

So what’s going on?

We can get clues by counting, for each year, the values that the relevant documentation says would be rejected – those more than five standard deviations from the mean.  The two years in which more than 100 outliers would be rejected are 2003 (212 outliers) and 2015 (163).

[Figure: number of outliers per year in the HadCRUT4 station data. Outliers are defined here as being over 5 standard deviations from “normal”.]

The very low number of outliers from 1941 to 1990 is obvious.  This is the period over which standard deviations are calculated (compared to 1961 to 1990 for long-term average temperatures).  Just 26 outliers were discovered for this period, none more than 6.1 standard deviations from the mean.

The problem in a nutshell is that the Hadley Centre and/or the CRU fail to remove outliers from the data before they calculate the standard deviations.  This can produce ridiculously large standard deviations, which, when multiplied by five to set the limits above and below the mean, become positively bizarre.

The metadata for Apto Uto contains the following line:

Standard deviations =   0.6   0.6   0.5  11.9   0.5  11.8  12.0   0.6   0.5   0.6   0.6   0.7

Five standard deviations for most of those months means no more than 3.5°C but in three of those months they are 59, 59.5 and 60 degrees.  The long-term averages in those months are around 28°C and together that means the temperatures of 81.5, 83.4 and 83.4 are all less than five standard deviations from that mean.
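Using the standard deviations from the metadata line above, the failed check reduces to a one-line comparison:

```python
# Why the Apto Uto anomalies pass the 5-sigma check: the freak values
# inflated the standard deviations (11.9 for April instead of ~0.6),
# pushing the acceptance limit out to 5 x 11.9 = 59.5 degC.
def passes_qc(anomaly, sd, n_sd=5):
    return abs(anomaly) <= n_sd * sd

accepted = passes_qc(53.7, 11.9)    # True: 53.7 < 59.5
rejected = passes_qc(53.7, 0.6)     # False: 53.7 > 3.0
```

With an uncontaminated standard deviation of around 0.6, the same anomaly would have been thrown out immediately.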

Before calculating the standard deviations from a subset of the data, any outliers should have been removed and the process repeated until all the data fell within the limits; and those limits probably should have been the more common three standard deviations anyway.
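The iterative removal being argued for might look like the following sketch (the 3-standard-deviation limit and the repeat-until-clean loop are as suggested in the text; this is not code from any Hadley process):

```python
# Iteratively reject outliers, recomputing the mean and standard
# deviation after each pass, until every remaining value is within
# n_sd standard deviations of the mean.
import statistics

def iterative_clean(values, n_sd=3):
    vals = list(values)
    while len(vals) > 2:
        mean = statistics.mean(vals)
        sd = statistics.stdev(vals)
        kept = [v for v in vals if abs(v - mean) <= n_sd * sd]
        if len(kept) == len(vals):
            break          # nothing rejected this pass: done
        vals = kept
    return vals
```

Because a large outlier inflates the very standard deviation used to test it, a single pass can let it through; recomputing after each rejection avoids that masking effect.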

The people who created the HadCRUT4 dataset are simply incompetent; there is no other word for it.

____

*Station Site number for San Antonio was a repeat of Apto Uto. It has been corrected, thanks to Jim Ross.  23 Oct 2018


138 comments to Hadley excuse implies their quality control might filter out the freak outliers? Not so.

  • #
    Clyde Spencer

    John McLean,
    The somewhat well known (in our circles) Nick Stokes has claimed on WUWT that you have done your analysis on the raw, archived data, and not the data that has been cleaned, and is used for analysis. Would you please speak to that claim?

    60

    • #
      ivan

Why? That sounds like the typical ill-informed FUD that is usual from Stokes. In other words, he is trying to divert from the real debate.

      61

      • #
        el gordo

        Yeah OK, but Stokes should be given a clear answer. We need to convince the modellers that they have got it wrong.

        60

        • #
          Jeremy Poynton

          No chance.

          10

          • #

            John McLean has replied below.
            “Stokes’s implied claim of a secret cache of corrected temperature data that’s used for CRUTEM4 and HadCRUT4 processing is therefore complete nonsense.”

There is only one HadCRUT4 dataset and John has linked to it in the original post. It is a gridded global set, meaning that, say, the “St Kitts” Golden Rock data, which is for one station, is an input to it and gets amalgamated with any other stations or sea data in that particular grid cell. And yes, that ridiculous station data influences the final number reported for that grid cell in HadCRUT4. Same with Apto Uto (as John discussed in the post).

            To make it as bluntly clear as possible, the Hadley experts have taken that obviously flawed data and accepted it as if it were real.

            50

            • #
              StephenP

When I was at school I had to show my workings for maths, chemistry and physics. Also, destroying or not keeping the original data and measurements was a real no-no, resulting in a fail. Why was the original data for HadCRUT destroyed, leaving only the adjusted measurements and no workings? Who was responsible?

              30

              • #
                John McLean

                The original data is probably not destroyed. I doubt that the CRU has it. It’s the national meteorological services (e.g. Bureau of Meteorology, UK Met Office) that have that data. They simply forward the adjusted data to the CRU for inclusion in the datasets.

                00

            • #
              Doug Lavers

              I do not know whether this was covered, but I would have thought there is a more fundamental problem:

              If obvious errors exist in the outliers, what does it say about the “quality” of the bulk of the data set?

              Possible multiple small errors will not be identifiable.

              10

              • #
                John McLean

                Absolutely Doug.
                What trust can we have in national meteorological services if they supply data with such obvious errors?
                What trust can we have the CRU if it allows those errors to get into the HadCRUT4 dataset?

                10

      • #

        I think you just tried to divert from a valid need for clarification.

        38

    • #
      robert rosicka

      Isn’t the answer to Nick Stokes question about raw data addressed in John’s article above ?

      100

      • #
        Clyde Spencer

Robert Rosicka,

As I read it, the working data is cleaned using input from the raw data. The large errors in the raw data prevent the optimal cleaning of the working data. But Stokes is right that the original data should be preserved, untouched, in case there is ever a need to go back and find the source of a problem. The implication being that an intermediate product is necessary to create a high-quality, final working copy.

        McLean’s article above implies that there are at least potential problems with the working data set, for reasons he sets out. However, it doesn’t directly address the basic question of whether McLean only looked at the raw data, or if he looked at both the raw and ‘final’ datasets. I’m trying to get to the answer to reply to Stokes. He has a reputation of being very sophistic and very bright. So, I want to be sure my ducks are lined up before I challenge him.

        70

        • #

Surely his peer-reviewed PhD thesis states this clearly? Why would he bother working on the raw data, though, except for comparative reasons, to check whether the weird things he finds in more polished data have their origins in the raw?

          25

          • #
            OriginalSteve

            If the raw data is rubbish, how the dickens can you get anything useful from it?

            Garbage in = Garbage out

            122

            • #

              that is not the issue and is another distraction (or you are just confused).

              All science, and I mean all, will have circumstances when rubbish data is produced for one reason or another. If it is detected and withdrawn from downstream analyses, with proper justification, then it never becomes part of the “in”. So the existence of rubbish raw data does not mean rubbish in rubbish out.

              The question being raised here is whether the study being discussed looked at raw data or whether it looked at the data being used to interpret climate history and create climate models.

              56

              • #
                robert rosicka

                And what is the answer ahh gee ?

                50

              • #
                OriginalSteve

GA – That doesn’t make any sense – the raw data is from thermometers – either the thermometers are wrong, or?

There aren’t many ways you can stuff this raw data up……

                70

              • #

                Robert – I don’t know the answer but I think that Clyde’s question merits a response even if it is as simple as “read page 50 of my thesis”.

                Steve – yes, data outputs from devices can be wrong, systematically or sporadically. If a device gives good verified data and outputs an unexpected outlier then you investigate, determine the reason and disregard the data if justified. You don’t throw out years of verified good data because there is some bad data. I can’t think of a single scientific instrument that produces 100% “good” data. Can you?

                51

              • #
                crakar24

                Typical of GA arrives just in time to support the AGW agenda, throws a few dead cats and then scurries away again.

                This has nothing to do with raw/adjusted data its just how you manipulate the data.

So if Apto Uto has 3 months of 90°C and these values are rejected prior to manipulation for being beyond 5 standard deviations, then you will get a (let’s call it) accurate temp for that grid cell.

If on the other hand you include the 90°C temps in your grid cell manipulation, then you will get an inaccurate (read: garbage) temp for your grid cell. If a value falls outside the 5 standard deviations it is rejected; if it falls within, it gets accepted.

                Ergo Garbage in = Garbage out.

                Now scurry away GA

                130

              • #
                Sceptical Sam

                @ GA

                If a device gives good verified data and outputs an unexpected outlier then you investigate, determine the reason and disregard the data if justified.

                Really?

                Tell that to BoM. It homogenizes from hundreds of kilometres away.

                BoM is well overdue for an audit by the ANAO given that:

                a lot of the problems begin with the national meteorological services which supply the shoddy data

                120

              • #

Let’s be clear, the original raw data is either the hand-written notes, which our BOM throws in dumpsters, or the electronic records they routinely destroy.

                At no point are we talking about the real raw records.

                131

              • #

                Still not clear Jo. The question is what data is being examined not what is not being examined.

                41

              • #

                Thanks Crakar… I like to scurry as it is good for raising the heart rate. I’m looking forward to your useful follow-up comments and evidence that you are hanging around.

                It is funny that you see virtue in hanging around but don’t do it yourself.

                31

            • #
              Crakar24

Lol, you’re funny GA. The data is the data produced by the individual meteorological organisations from each country.

To simplify the situation for you, GA:

Let’s assume it is raw data being used and it has an error of 90°C; the error rejection techniques used by HadCRUT leave open the possibility this error will be accepted.

If it is the processed data that has a 90°C error, the techniques used by HadCRUT likewise leave open the possibility this error will be accepted.

              Feel free to scurry at your earliest convenience

              40

              • #
                Gee Aye

                I’m not scurrying. I’m interested in the answer to Clyde’s question. When it is answered I’ll scurry. Your answer suggests you don’t understand what is going on.

                12

          • #
            Clyde Spencer

            Gee Aye,
            You asked, “Why would he bother working on the raw data though except for comparative reason…”

I’m trying to find out if there is any validity to Nick Stokes’ claim that McLean only worked with the raw data. Stokes doesn’t carry a lot of credibility with me because he will apparently say anything to win an argument. But I want to be sure.

            70

            • #
              robert rosicka

My guess is that he had to do both once he discovered inconsistencies. Eight years is a long time to check, but given the huge amounts of data he had to sift through, I think maybe he only picked those that stood out.
Hadley seem to agree the raw data is corrupted but insist they have corrected for errors; John has shown that it’s not always the case.
It would be near on impossible to go back and check every temperature to see if it was OK, and even whether it came from the place it was supposed to be recorded from.
Many comments here say that what they are dealing with is no longer relevant data anyway once adjusted, and I have to agree.

              10

          • #
            Peter C

            You were going to read Dr McLean’s PhD thesis Gee Aye.

            Have you done so? If so, you might be able to answer the question.

            50

            • #

              I’ve only skimmed it. I’m probably not going to get into it until after the weekend, by which time this thread will be long old.

              41

              • #
                Annie

                You can still come back. There are comments on old posts listed above right if you look; I quite often follow one back to an older thread.

                30

              • #
                Peter C

                Not much point for Gee Aye to do that Annie. He is not interested in the answer.

The main reason for looking at the McLean thesis was to see if he (Gee Aye) could find anything wrong with it. A quick skim was enough to show that McLean’s work was ok, so no point in making any further effort.

                Nothing really wrong with that. About the same as most peer reviewers might do.

                50

              • #
                Sceptical Sam

                You can make reference to it at Mid-week unleashed or whatever.

As they say on the backs of the (now banned) “Wicked” camper-vans: “we’re not gynecologists but we’re happy to have a look”.

                20

              • #

                Like Christina Figures said, like the Climategate
                emails revealed, ’twasn’t evah about the science.
                Oh Galileo! … oh Gee Aye.

                70

              • #
                Gee Aye

                this is the quality of skeptics we get these days

A quick skim was enough to show that McLean’s work was ok, so no point in making any further effort.

                Nothing really wrong with that. About the same as most peer reviewers might do.

                03

        • #
          robert rosicka

My interpretation of the above would be both raw and adjusted; the graph of outliers, as stated, shows that anomaly between 1940 and 1990 in adjusted data.
The two biggest problems here, as I see it, are the reported raw data and the inconsistencies in the adjusted data, especially in those years.
Although they say all raw data is adjusted, obviously some is getting through, being calculated and sent to the IPCC, and this is unforgivable given the dollars being invested and the computing power they have.

          50

        • #
          Clyde Spencer

          Jo,

          You said, “Let’s be clear, the original raw data is…” OK, we are talking semantics here. Clearly the original meteorological data comes from the individual reporting stations, in various forms. But, the article, and my question, are about the HADCRUT database. That is, after the station data are received, someone has to collate all the individual submissions and create one master database. (Which offers the opportunity for introducing additional error.)

          So, my question was whether the original HADCRUT collation, or ‘scrubbed’ HADCRUT data, or both, was the subject of John McLean’s dissertation. I had expected this to be an easily answered question. Apparently not.

          11

          • #
            Gee Aye

            Clyde… have you gone to the source, not the third-hand hype?

            00

          • #
            John McLean

            It’s not a database, it’s a dataset.
            It’s a single file, which for every month since January 1850 has values for each of the 5 deg Latitude x 5 deg longitude grid cells. (If you work in IT you’ll know what I mean when I say it’s an array of 72 x 36.)
            Some grid cells have a flag value that indicates missing data, but others have data. Those values have been produced by processing the data from observation stations and from sea surface temperature measurements, which you can think of as inputs to the dataset. There is no database as such.
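The layout John describes can be pictured with a small indexing sketch (the row/column orientation here is an assumption for illustration; the actual file may order the grid differently):

```python
# Map a latitude/longitude to a cell of the 36 x 72 grid of
# 5-degree x 5-degree cells that John describes (orientation assumed:
# row 0 at the northern edge, column 0 at 180 degrees west).
def cell_index(lat, lon):
    row = int((90 - lat) // 5)     # row 0 starts at 90N
    col = int((lon + 180) // 5)    # col 0 starts at 180W
    return min(row, 35), min(col, 71)
```

Every station in a cell, and any sea surface data for it, feeds into that one value per month, which is why a single bad station can move the cell.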

            10

    • #

      To Clyde who said:
      John McLean,
      “The somewhat well known (in our circles) Nick Stokes
      has claimed on WUWT that you have done your analysis
      on the raw, archived data,
      and not the data that has been cleaned …”

      What raw data ?

      There are no raw data.

      In fact, there are no data at all.

      There are infilled numbers
      that can’t be verified or falsified,
      for a majority of the grids.

      For the rest of the grids,
      there are “adjusted” data.

      After raw data have been “adjusted”
      they are no longer data — they are someone’s
      guess as to what the raw data should have been
      if measured accurately in the first place.

      I was too cheap to buy the $8 report
      so I assumed the errors McLean found
      were incoming numbers from the national
      meteorological offices.

      If those offices are so sloppy
      with the numbers they send in,
      for HadCRUT4, then why should
      ANYONE ELSE in the same business,
      ( goobermint bureaucrats,
      with science degrees,
      with permanent and job security
      as long as they keep predicting
      a coming climate catastrophe! )
      compiling the global average,
      be assumed to be perfectly accurate?

      I would be very surprised if any “outsiders”
      get to view the raw data and “final numbers”
      for each grid.

      The HadCRUT4 numbers are irrelevant –
      the two other measurement methodologies,
      weather satellites (UAH) and weather balloons,
      correlate well with each other –
      – HadCRUT4 is the outlier,
      and should NOT be used.

      Real scientists do not use the outlier data,
      and ignore the other methodologies !

      Surface temperatures are ONLY used
      because they show more
      global warming than the
      other two methodologies.

      I propose renaming HadCRUT4
      as HadCRUD, in honor of
      Mr. Nick Stroker.

      80

    • #

      As Nick Stokes should be aware, the “raw” data is not that at all, but has been cleansed of obvious errors. Further, it may have data corrections already made by the people on the ground. For instance, if an air conditioning unit is located near to the thermometer, it should be noted. Then if there is an anomalous movement in temperatures, an adjustment should be made prior to submission.

      50

    • #
      Bitter&twisted

      The people who created the Hadcrut dataset are completely incompetent.
      Step forward “ Professor” Phil Jones.

      40

    • #
      John McLean

      Clyde,
      Thanks for mentioning this and telling me where to find the comments. I’ll get onto them right now. I’ve just been rather busy today.

      John

      20

    • #
      John McLean

I don’t know where Stokes is getting his information from. In the guest post above I point out that the obvious errors in the station data that the CRU makes available for download do indeed sometimes find their way into the HadCRUT4 dataset and I show why that happens.

      The Hadley Centre says that it applies quality control and it does, the problem is that it doesn’t always work. Table 7-5 of the audit lists some even more ridiculous errors, like a monthly mean temperature of 99.9C, another of 90.0C and a third of 88.0C. These are (correctly) rejected. Given that outliers during the period over which standard deviations are calculated are NOT rejected prior to that calculation it’s probably a very good thing that there were no serious outliers for these three stations across that time.

      Incidentally I also examined Golden Rock Airport (on St Kitts in the Caribbean) with its two December mean temperatures of 0C. I gathered the temperature anomalies for the other observation stations in the same grid cell and calculated their average. December 1984 was easy because only three of six stations reported and their anomalies were -23.4, 0.3 and 0.3 (and no prizes for guessing which was Golden Rock). The average is -7.60, which is the same value as the CRUTEM4 dataset says.

      In contrast HadSST3 for the same grid cell says -0.72 for sea surface temperature anomaly.

The HadCRUT4 dataset includes sea surface temperatures, the data from land and sea being proportionally weighted but with a minimum of 10% (I think) for land-based data. The HadCRUT4 value is -2.43, which is quite a spike for the tropics. In the guest post above I kept things simple – the grid cell for Apto Uto has no sea surface temperature component.
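That blending can be sketched as a weighted average. The minimum land weight is my hedged recollection above, and the roughly one-quarter land fraction used here is simply inferred from the numbers in this example, not documented:

```python
# Proportional land/sea weighting with a minimum land fraction, as
# described above (the weights are assumptions for illustration).
def blend(land_anom, sst_anom, land_frac, min_land=0.10):
    w = max(land_frac, min_land)
    return w * land_anom + (1 - w) * sst_anom

# A land weight of about 0.25 takes the -7.60 land anomaly and the
# -0.72 SST anomaly to roughly the reported -2.43:
value = blend(-7.60, -0.72, 0.25)   # about -2.44
```

The point is that even a heavily sea-weighted cell still inherits a visible spike from one absurd land station.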

      Okay, I’ve shown two examples of the inclusion of obvious errors. The other thing I’ve shown is that the station data provided by the CRU can be manually used to replicate the steps used in the processing for CRUTEM4 and HadCRUT4.

      Stokes’s implied claim of a secret cache of corrected temperature data that’s used for CRUTEM4 and HadCRUT4 processing is therefore complete nonsense.

      90

  • #
    OriginalSteve

    That’s gotta hurt….

    “Ignore that little man behind the curtain”
    - Wizard of Oz, “Wizard of Oz”

    130

    • #
      Roy Hogue

      A very apt comparison. I’d give you a dozen green thumbs if I could. But have a few of these on em.

      :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-) :-)

      80

      • #
        Roy Hogue

        Backward me is yoR. About how things go sometimes. :-(

        40

      • #
        OriginalSteve

        Thanks Roy…i figure these guys are already in a deep hole….best to stop digging it deeper…

        30

        • #
          Roy Hogue

          If only they would stop digging. And strange as it may seem to you and me, the world gazes into the hole and says,”Wonderful. Give us more.”

          I guess we should be grateful that it keeps Jo supplied with things to write about.

          00

    • #
      Yonniestone

If that little man behind the curtain can gift anyone at Hadley with a Brain, a Heart and Courage, then I’d forgive his little sideshow.

      80

  • #
    Roy Hogue

“We perform automated quality checks…”

    And therein lies a big part of the problem. Automated quality checks according to what criteria would be a good question to ask, would it not? But no one asked it. So automation is just an excuse for failure.

    140

    • #
      Yonniestone

      A machine is only as good as the person that programmes or controls it, the person in this case preoccupied with,

      A- YouTube.
      B- RedTube.
      C- Doughnuts.
      D- Firefly.
      E- All of the above.

      70

      • #
        OriginalSteve

        Garbage in = garbage out

        31

        • #
          Another Ian

          Refined in the case of “climate science” to

          Garbage in = gospel out

          Particularly when the real answer can be felt (/s)

          50

      • #
        Graeme No.3

        Yonniestone:

20 years ago I had the “pleasure” of checking a database of approx. 8,600 products with 32 descriptive records each. The problem was that it was in MS Access and the data had been transferred from a spreadsheet. Unfortunately there was a bug in Access 97 around the 8,200-record mark which jumbled the subsidiary data. I had to go through it for each product and try to work out where the descriptive records had gone. Some had been transposed by columns, some columns had been re-ordered upside down, and some had been jumbled in groups of about 20. It wasn’t helped by a ‘manager’ who kept making changes. It was an “interesting” experience.
There was an internal war on between the IT Dept., which insisted on MS SQL being used, him who thought he knew Access, and me, who wanted to use the software bought especially for the job for $40,000, which would integrate with the Unix mainframe and make life a lot easier by being (semi) automated. Shortly afterwards he left (just before he could fire me as non-performing, as happened to 4 out of his 7) and I got some of the software going, but as far as I know most of the software was never used and is sitting in a cupboard somewhere gathering dust. A couple of years later (after a move elsewhere) I was ‘offered’ redundancy and was glad to go, as I saw no future for that subsidiary company (of a large USA firm), but I was wrong; it took 6 years (not 5) before they shut the whole thing down, losing about 170 jobs.
The problem was always that too many people wanted to set targets/finish dates without any of them worrying about getting things right in the first place.

        30

    • #
      Harry Passfield

      I bet the ‘automated quality checks’ are as accurate as the models. Ha! Of course they are.

      60

  • #
    Yonniestone

    “Apto Uto with its three monthly mean temperatures above 80°C.” and “Five standard deviations for most of those months means no more than 3.5°C but in three of those months they are 59, 59.5 and 60 degrees.”

    Is this ‘C’ as in Celsius (Centigrade)? If so, WOW.

    30

    • #
      John McLean

      The HadCRUT4 dataset is of temperature anomalies in Celsius. The 80+ degrees from Apto Uto seem to be Fahrenheit that wasn’t converted to Celsius before sending the data to the CRU for inclusion.
      The Colombian meteorological people didn’t find and correct the errors and neither did the people at the CRU.

      Given that this happened, would you trust any data from Colombia? Would you trust any data that the CRU uses?
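      The Apto Uto error is the kind of thing a trivial plausibility check would catch. Below is a minimal sketch (my own illustration, not Hadley's or McLean's code; the 40 °C ceiling and the 81.5 example value are assumptions chosen for the demonstration) of a check that flags a monthly mean too hot to be Celsius and reports whether it is plausible as unconverted Fahrenheit:

      ```python
      def f_to_c(f):
          """Convert Fahrenheit to Celsius."""
          return (f - 32.0) * 5.0 / 9.0

      def check_monthly_mean(mean_c, plausible_max_c=40.0):
          """Flag a monthly-mean temperature that cannot be Celsius.

          If the value becomes plausible when treated as Fahrenheit,
          report the likely missed unit conversion."""
          if mean_c <= plausible_max_c:
              return "ok"
          as_c = f_to_c(mean_c)
          if as_c <= plausible_max_c:
              return f"suspect: {mean_c} looks like Fahrenheit ({as_c:.1f} C)"
          return f"invalid: {mean_c} implausible in any unit"

      # A monthly mean of 81.5 "degrees" from a tropical station:
      print(check_monthly_mean(81.5))
      # -> suspect: 81.5 looks like Fahrenheit (27.5 C)
      ```

      A check this simple would have flagged every one of the 80+ degree Apto Uto months before they reached the dataset.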

      10

      • #
        Yonniestone

        Given that’s a grade-school error then no. After following this entire climate argument for ~ten years, this error is one of the worst, if not the worst, I’ve witnessed.

        Great pick up John and thank you for the reply and your good work.

        10

  • #
    MrGrimNasty

    If the MO wants to suggest that such errors make no meaningful difference and it all averages out in the end, then surely ‘adjustments’, like for time of observation in other datasets, are all a complete waste of time!

    I wish they would all just be honest. The surface data record is only one step away from being garbage, it was never intended for, and is completely unsuitable for, gauging global warming. Pub talking points about the weather maybe, serious climate science – no.

    140

    • #
      MrGrimNasty

      What this episode (and all the other adjustment examples) highlights is institutional confirmation bias.

      If the data is giving the right answer, there is no curiosity to investigate it and thoroughly validate it.

      If the data isn’t agreeing with the theory, then there is a highly motivated search to find reasons to adjust it. Inevitably if an error/problem is discovered that takes the data the wrong way – it will immediately be dismissed as wrong, and the search resumed for perceived errors that must exist to take it the right way.

      This is one reason that over time all the data-sets have come into agreement with each other.

      That is of course the innocent, benefit of the doubt, explanation. The other is straight up calculated dishonesty.

      40

  • #
    Harry Passfield

    “Hey, it’s not like Life on Earth depends on us understanding our climate. :- )”

    They don’t care. They’ve been given their instructions and the results are pre-defined to fit the scare-story the (Bilderbergers/UN?) have determined. God knows what their objective is, other than to impoverish the West and leave China in the ascendant. And for this, Gore and others have sold their souls. Faust lives.

    140

    • #
      OriginalSteve

      Agreed.

      I have always said this is a religious war – the war between the Satanic religion of the Occult human-hating UN ( funded & endorsed by world govts – think about what that means…!! ) and the rest of humanity, specifically monotheistic religions, and specifically Christianity.

      This is the warm-up act – the fake scare is put in place, the draconian liberty-cleansing laws are running, now we’re waiting for the hammer to fall.

      Make the most of your freedoms while you still have them….I think the energy debacle is just a way of trashing this country and ushering in gulags for the deplorables, and a start to rationing power to those who will sell their souls to the Satanic globalists….

      70

  • #

    Everything about this entire issue, worldwide, is only about the money. Every meeting and conference they hold is always about the money. This is the biggest gold rush in history and no one is going to give up their stake without a fight.

    190

    • #
      el gordo

      A serious change in world weather, the opposite of global warming, would bring the edifice down in a flash.

      We need to draw up a list of negative feedbacks to support our argument that global cooling has begun.

      70

      • #

        All I can attest to is that the last few winters have been bloody cold, so happy to have hot summers to compensate. Therefore, -1 + 1 = 0.

        60

        • #
          el gordo

          The previous couple of winters in Australia have been bitter and even now in late October it remains cool.

          BoM’s seasonal forecast of warmer and drier is wrong, all we need is Cameron and Dean to demolish them.

          60

          • #
            theRealUniverse

            It was stated on Ch7 weather that there hadn’t been a day over 30 (in Brisbane) since April. That was about a week ago (can’t remember the day).

            30

            • #
              el gordo

              It’s the cool south-easterly winds onto the Queensland coast, caused by a blocking high; they should make conditions cooler and wetter.

              Assuming this pattern continues I wonder what impact this would have on cyclone activity?

              30

      • #
        Albert

        Last winter in the northern hemisphere there were record snows, blizzards and many new record low temperatures, so it seems the winters are getting colder.

        60

  • #
    Another Ian

    Hadley results in this update

    “September 2018 Global Surface (Land+Ocean) and Lower Troposphere Temperature Anomaly Update”

    https://wattsupwiththat.com/2018/10/18/september-2018-global-surface-landocean-and-lower-troposphere-temperature-anomaly-update/

    40

    • #
      robert rosicka

      I love the divergence of the IPCC’s predicted rise in temps from the actual rise. Time for some more fiddling.

      50

  • #

    Yep.

    And let’s remember that average temp is often the same for a day with lots of cloud and one with no cloud. A day when the sun did not come out at all and a day when it shone down hard at afternoon peak can achieve the exact same average because what determined high max also determined low min, and vice versa. One is an apple, the other an orange…but statistics make them the same. Yep. Your precious statistics just turned an apple into an orange.
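    The apples-into-oranges point can be made concrete with two invented hourly profiles (every number below is made up for illustration): a flat cloudy day and a clear day with a big diurnal swing produce the identical min-max “average”, even though their true hourly means differ by nearly a degree:

    ```python
    # Two hypothetical days of hourly temperatures (degrees C):
    # a cloudy day with a flat profile, and a clear day with a big swing.
    cloudy = [14, 14, 13, 13, 14, 15, 16, 17, 18, 18, 18, 18,
              18, 18, 17, 17, 16, 16, 15, 15, 14, 14, 14, 13]
    clear  = [8, 7, 7, 6, 6, 7, 10, 14, 18, 21, 23, 24,
              25, 25, 24, 23, 21, 18, 15, 13, 11, 10, 9, 8]

    def min_max_mean(hours):
        """The conventional 'average': midpoint of daily min and max."""
        return (min(hours) + max(hours)) / 2

    print(min_max_mean(cloudy))  # (13 + 18) / 2 = 15.5
    print(min_max_mean(clear))   # (6 + 25) / 2  = 15.5  -- identical!

    # The true hourly means tell a different story:
    true_cloudy = sum(cloudy) / len(cloudy)   # higher
    true_clear  = sum(clear) / len(clear)     # nearly a degree lower
    ```

    Statistics built on min-max midpoints simply cannot see the difference between these two days.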

    Getting away from averages of min-max, let’s remember that a whole year with little cloud will almost certainly achieve a higher mean max than a cloudy one. Yet here’s a mystery. The difference in max temp between Sydney’s driest year (1888) and its wettest (1950) is just .1 of a degree, with 1950 actually warmer. Yet Newcastle Pilot, with little or no UHI, shows the expected disparity between parched 1888 and drenched 1950, with 1888 showing a much higher average max than 1950: a whopping 1.9 degrees. Which makes me think we need to chuck out ALL Sydney temps from ALL calculations leading to comparisons, even early stuff. But with that handy warming trend they won’t do that, will they?

    Cloud. Duh. UHI. Duh.

    Let’s remember that a temp achieved is not as important as a temp held. That Sydney record in 2013? It was a matter of seconds. And the record broken, that of 1939? We don’t get to know. Reckon a dose of UHI in 2013 didn’t tickle things up just a tad? More than a tad? (If I use my loaf and hunt around I quickly gather that the heat emergency of 1939 was far more dire than that of 2013.)

    Here’s the first problem: statistics are born lame and distorted. People can help the poor things along and they can help people if people have common sense and no agendas. Good luck with that.

    Another thing. Global cooling is a dry affair in more places than not. So don’t be surprised if any global cooling shows lots of high maxima. And any global warming (real) may very well repress maxima. With cloud.

    Clouds get in the way. Anne and Joni got it right.

    120

    • #

      Correction. 1950′s mean max was actually .4 warmer than 1888 in Sydney. With all that cloud! Really, the UHI at the Obs is poison, and probably has been for many decades.

      No wonder the bloke who runs the organic garden at the top of Paris’s Galeries Lafayette says urban gardening gives you weeks of extra growing time in the year, with a much earlier spring for planting.

      Why don’t we stop comparing what a drunken postmaster might have recorded a hundred years ago with what some buggy electronic gizmo records in 2018?

      Let’s just grant that the world is warming a fraction. It’s done that plenty of times in recent millennia, so why not now? The real problem is that it probably won’t warm for very long. It never has.

      60

      • #
        robert rosicka

        The difference between a hundred years ago and today would also be no tarmac or bitumen anywhere near a Stevenson screen!

        90

      • #
        Greg Cavanagh

        That is the beginning of the elephant in the room mosomoso. The temperature reading at 9am by itself doesn’t mean a great deal. String a lot of them together, and you’ve got a great deal of not much meaning.

        Then you add the 9am temperature to the 3pm temperature, and what have you got? Another nothing-meaningful. String a lot of them together and you get a big pile of junk that, in reality, doesn’t say much about anything.

        Then you homogenise those numbers from individual sites over a large area to create a huge pile of nothing-much.

        It’s turtles all the way down.

        Believing that this grand homogenised value for the entire world means something, is hubris of a grand scale. Ridiculous on steroids.

        80

    • #
      Greebo

      One is an apple, the other an orange…but statistics make them the same. Yep. Your precious statistics just turned an apple into an orange.

      And then the Hadley Centre turns them both into lemons.

      70

  • #
    toorightmate

    It is a very sad scientific world when models have taken over from real data and trends from the real data.
    Never has so much horsesh*t been expended on real data.

    91

    • #
      robert rosicka

        Now now, don’t go slandering “Models” or the leaf will be here to tell you your life depends on them!

      60

    • #

      your life depends on them

      41

      • #
        robert rosicka

        My life depends on bacteria, but some of them can make me sick or worse. Modelling in climate science is junk science, nothing more, nothing less.

        91

      • #
        Annie

        Green thumb for fulfilling prediction!

        30

        • #
          Annie

          Reply to GA, that was.
          Mind you, some modelling for things like bridges and aircraft is necessary. It’s the ones that try to tell us about climate when they can’t possibly factor in all that affects the weather that I can’t trust, at all. I’m thinking Sun, Earth’s deep innards, etc.

          50

      • #
        FarmerDoug2

        Well a lot of lives are being ruined by them.

        20

  • #
    Uppyn

    Industry is miles ahead of this bunch with routine implementation of statistical process control techniques. Recently I stumbled across the industrial standard JESD50C with the title: Special Requirements for Maverick Product Elimination and Outlier Management

    This standard describes techniques to eliminate outliers from a population of electronic components.
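    To give a flavour of what industry routinely does (this is an illustrative sketch of robust outlier limits in the spirit of part-average testing, not the JESD50C procedure itself; the function name and the readings are my own invention):

    ```python
    import statistics

    def outlier_limits(values, k=6.0):
        """Robust pass/fail limits: median +/- k * scaled MAD.

        The median absolute deviation (MAD) is scaled by 1.4826 to
        approximate one standard deviation for normal data, so the
        limits are not dragged around by the outliers themselves."""
        med = statistics.median(values)
        mad = statistics.median(abs(v - med) for v in values)
        sigma = 1.4826 * mad
        return med - k * sigma, med + k * sigma

    readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 25.0]  # one maverick part
    lo, hi = outlier_limits(readings)
    mavericks = [v for v in readings if not lo <= v <= hi]
    print(mavericks)  # [25.0]
    ```

    The key design choice is using median-based statistics: a mean-and-standard-deviation screen can be inflated by the very outlier it is supposed to catch, which is exactly the problem with freak values contaminating a “normal range”.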

    30

    • #
      robert rosicka

      I’m open to a model being useful somewhere in some field, but if you’re asking a computer to make a guess, why not save yourself millions and just get a fortune teller? Same random accuracy.

      10

      • #
        robert rosicka

        Climate and finance are both modelled, but both have unexpecteds and unknowns that you just can’t anticipate: none of the majors saw the subprime fiasco in the USA coming, and none can predict volcanic eruptions or meteor strikes, or even the weather accurately just 12 hours ahead.
        There is no substitute for real-life testing in real environments. I’m sure the motor car industry can attest to that; I’m sure they model every aspect of their designs, but recalls are common.
        If you can’t know 100% what the outcome of your model is going to be, and you can’t know all the variables, all you’re doing is making million- if not billion-dollar guesses.

        10

      • #
        Another Ian

        Even cheaper – use your own two bob bit

        20

  • #
    OriginalSteve

    A train wreck in the making?

    https://www.abc.net.au/news/2018-10-19/reef-company-altered-scientist-report-crown-of-thorns-program/10391730

    “A company given millions of taxpayer dollars to cull the devastating crown-of-thorns starfish on the Great Barrier Reef altered a scientific report about the “poor management” of its own program, an ABC investigation can reveal.

    The changes to the document were made despite the author demanding it be published “as is”, with a company employee suggesting it “wear the wrath” of the scientist.

    Marine biologist Dr Udo Engelhardt told the ABC he was “baffled” as to why changes were made to his report.

    “It’s difficult to speculate about this, the reason for these changes appearing. They are quite extensive changes, so a bit of effort has gone into that quite clearly,” he said.

    The not-for-profit company, the Reef and Rainforest Research Centre (RRRC), was helping deliver a $4.2 million government contract to destroy crown-of-thorns between 2013 and 2015.

    In 2014, it hired Dr Engelhardt to determine how effective the program had been.

    Dr Engelhardt’s final report, which has been obtained by the ABC, was highly critical, stating there were “several serious concerns relating to overall poor management [of the program]”.

    The report also said the program had been “operated without any concern given to the inherent ecological risks of inadequate control measures”.
    The ABC has also obtained email correspondence between Dr Engelhardt and the company, as well as internal correspondence between its managers.
    They show Dr Engelhardt agreed to a number of changes to the document, and then put his foot down.

    “Please find attached my FINAL, FINAL COTS [crown-of-thorns starfish] Controls Efficacy Report for publication as is!!!,” Dr Engelhardt wrote in an email to the RRRC.

    A few days later it was forwarded to Sheriden Morris, RRRC’s managing director.
    In that email, a RRRC project manager said: “Udo has accepted a reasonable amount of changes. But the barbs and emotive words are still peppered through.”

    The employee also noted that the report could cause problems with the Great Barrier Reef Marine Park Authority, which had given millions of dollars of grants to the RRRC.

    The email then provided four options for what to do next, including a possibility to: “Keep our changes and publish it with Udo’s name and wear the wrath of Udo.”

    Six days later, a version of the report was uploaded to a commonwealth Department of Environment reporting portal, with changes made throughout the document, including ones Dr Engelhardt had refused to make himself.

    That version of the document is still available on the government website.”

    https://fieldcapture.ala.org.au/project/index/69388477-fea4-4225-89d0-3fc656da46ac

    90

    • #
      robert rosicka

      Perfect example of $$$$$ versus actual data. No money in telling the truth, so fudge the numbers to get more cash.

      50

    • #
      Another Ian

      Isn’t that company somewhat connected to the recipients of the latest $400- odd million?

      20

  • #
    pat

    18 Oct: Science Mag: Climate change doubters are finalists for Environmental Protection Agency Science Advisory Board
    By Scott Waldman, E&E News
    Finalists for the Environmental Protection Agency’s (EPA’s) Science Advisory Board include researchers who reject mainstream climate science and who have fought against environmental regulations for years.

    Among them is an economist from the conservative Heritage Foundation whose work was cited by President Trump as a justification for withdrawing from the Paris climate agreement. Another downplays the dangers of air pollution. Several scientists are from energy companies like Exxon Mobil Corp. and Chevron Corp., and the list includes a researcher who argues that more carbon dioxide is good for the planet.

    A few are associated with the Heartland Institute, which has advocated for the rejection of climate science to lawmakers, teachers and voters…
    The agency released a list of 174 nominees yesterday and will accept public comment until Nov. 7…

    The latest round of selections, which will be made by acting Administrator Andrew Wheeler, could shift the makeup of the board toward an industry viewpoint as EPA weighs a number of deregulatory actions. It currently has 44 members…

    Last week, Wheeler replaced five of seven members on the agency’s Clean Air Scientific Advisory Committee. He weighted the panel with industry voices and regulators from states critical of regulations adopted under former President Obama (Climatewire, Oct. 11). (LINK). EPA also disbanded an affiliated panel that had been working on an assessment of the current limits on airborne particulates…

    John Christy, another nominee, is Alabama’s state climatologist and a professor at the University of Alabama, Huntsville…
    Anthony Lupo, a professor of atmospheric science at the University of Missouri, Columbia, has argued that global warming is natural, not man-made. He co-founded Climate Exit, or “Clexit,” which asserts that rising levels of carbon dioxide benefit the Earth…

    William Happer, an emeritus physics professor at Princeton University, is also on the list…
    Happer was recently appointed to serve on the Trump administration’s National Security Council as the senior director for emerging technologies…
    http://www.sciencemag.org/news/2018/10/climate-change-doubters-are-finalists-environmental-protection-agency-science-advisory

    40

  • #
    Chad

    Asked in a previous thread, but probably more relevant here….
    Question for the mathematicians and statisticians:
    Knowing that there is a real spread of temperatures on the planet surface of -30C to +50C, or more…
    How “valid” is an average calculated to 0.01C from such a large data range?
    And how valid is it to compare that average to a “similar” result from data collected months or years earlier?
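    One honest answer to Chad’s question: the precision of an average of N independent readings improves like 1/sqrt(N), which is how a 0.01C figure can be quoted from a huge spread; but that argument covers only random error. Shared systematic errors (unconverted units, site moves) do not shrink no matter how many stations you average. A simulated sketch, with every number invented for illustration:

    ```python
    import random
    import statistics

    random.seed(1)

    # 5,000 simulated readings: true value 15.0 C, random error with
    # sd 5.0 C, plus a +0.3 C systematic bias shared by every reading.
    true_value, random_sd, bias = 15.0, 5.0, 0.3
    readings = [true_value + bias + random.gauss(0, random_sd)
                for _ in range(5000)]

    mean = statistics.fmean(readings)
    stderr = statistics.stdev(readings) / len(readings) ** 0.5

    print(f"mean   = {mean:.2f}")    # close to 15.3, not 15.0
    print(f"stderr = {stderr:.3f}")  # ~0.07: random scatter shrinks with N,
                                     # but the shared 0.3 C bias never averages out
    ```

    So a two-decimal average is arithmetically computable, but its quoted precision is only meaningful if the errors really are independent and random, which is precisely what the frozen-tropical-island examples call into question.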

    30

    • #

      Chad
      Mo’ decimal places = more accurate.

      Didn’t you learn that in school?

      The next question is why use temperature anomalies
      rather than absolute temperatures?

      The answer is that a 0.1 degree C change looks huge
      on a chart that has a vertical range of 1 degree C
      or 1.5 degrees C.

      Check out the home page of my climate science blog
      to see global warming using absolute temperatures,
      not anomalies, in Fahrenheit
      – shown like a row of glass thermometers,
      one for each year from 1880 to 2016:
      http://www.elOnionBloggle.Blogspot.com

      30

      • #
        theRealUniverse

        Statistician William Briggs stated a while ago that ‘you don’t average time series’.
        Good graph Richard.

        30

        • #
          Clyde Spencer

          theRealUniverse,

          And I would agree with Briggs. Instead of taking the diurnal high temps for a month and averaging them, doing the same for the diurnal lows, and then finding the mid-range of the two averages (12 samples for a year), the highs and lows (and/or their anomalies) for a full year (365 samples) should be calculated and presented separately. Averaging mid-ranges of averages so reduces the annual variance that the claimed uncertainties are a fraction of the standard deviation one would predict, via the Empirical Rule, from the frequency distribution of annual global temperatures.

          00

    • #
      John McLean

      Chad, I know they talk about change in average global temperature but that’s not really what they mean. They usually omit the vital word “anomaly” from the end, but it’s really what it’s all about.

      In case you don’t know, a temperature anomaly, in the context of climate data, is the difference between the mean temperature in a given month and the long-term average for that month. HadCRUT4 uses long-term averages calculated across 1961 to 1990, but NASA/GISS uses something weird like 1901 to 2000, this despite the poor coverage prior to 1950 (and station data that has probably been incorrectly adjusted multiple times).

      HadCRUT4 global averages are therefore not of temperatures but of temperature anomalies. The habit of people who should know better is to take a situation like a 1950 anomaly of -0.2 and a 2016 anomaly of +0.4 and say that the average global temperature has risen 0.6 when it’s really the anomaly. What’s the difference? Usually the coverage, the number of reporting stations and the number and location of sea surface temperature measurements.
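      McLean’s definition maps directly onto a two-line computation. The station values below are invented for illustration; only the 1961-1990 baseline convention comes from the comment above:

      ```python
      # Monthly mean temperatures (C) for one station's Septembers over the
      # 1961-1990 baseline period (invented values, repeated to make 30 years),
      # and the September mean being compared against them.
      baseline_septembers = [14.8, 15.1, 14.9, 15.0, 15.2] * 6  # 30 values
      september_mean = 15.6

      # The "normal" is the long-term average for that calendar month.
      normal = sum(baseline_septembers) / len(baseline_septembers)

      # The anomaly is the departure of this month's mean from its normal.
      anomaly = september_mean - normal
      print(f"anomaly = {anomaly:+.2f} C")  # +0.60 C
      ```

      Note that the anomaly is only as good as the normal it is measured against, which is why freak outliers contaminating the 1961-1990 baseline (and its standard deviation) matter for everything computed downstream.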

      20

      • #
        Jim Ross

        John,

        You make a very important point about the use of anomalies. HadSST3 is of course a key component of HadCRUT4 and, if you look at the two hemispheres separately, there is an issue with HadSST3 that shows up like a sore thumb. See here:
        http://www.woodfortrees.org/plot/hadsst3nh/from:1990/plot/hadsst3sh/from:1990

        While the SH data show a reasonable (non-annual-cyclic) monthly trend with peaks and troughs largely reflective of ENSO variations, the NH data show a sharp and increasing influence from the annual temperature cycle starting in early 2003. This coincides with a sharp increase in the number of monthly observations (your Figure 5.1). In addition, the low values of the annual NH cycle (usually in March) seem to track the SH data reasonably well (as did all NH values prior to 2003), whereas the peaks (usually in August) would indicate a much faster warming trend (roughly double the March rate, based on 1990 to present). I note too that there is also a significant change in the number of grid cells with values in the Arctic region between March and August during recent years:
        https://www.metoffice.gov.uk/hadobs/hadsst3/charts_past.html

        Not too surprising, given the variation in ice cover, but why would this have such a dramatic effect on the monthly anomaly for the whole of the northern hemisphere? It seems to me that not only does this render any trends derived from, or incorporating, these data as potentially seriously misleading, it also completely compromises the global monthly data for possible use for correlation purposes (e.g. with ENSO) due to the interaction between the NH and SH values, which is essentially an average of the two datasets (easily checked by adding the global data to the WFT plot linked to above – look in particular what happens around the 2016 peak).

        Have you looked in detail at this specific issue and/or have any insights based on your work?

        20

        • #
          John McLean

          Hi Jim,

          I hadn’t noticed the recent difference in SSTs between the two hemispheres. It looks very seasonal to me. Like you, I suspect more data from the Arctic region. The first thing that needs to be done is break it down into latitude bands and maybe longitude ranges (4 each of 90 degrees?) to see where the annual cycles are happening.

          The next thing to consider is whether the SST normals that the Hadley Centre people use are in fact accurate (see section 6.4.1 and maybe 6.4.2 of the audit). Don’t forget section 3.4.4. which indicates that Hadley Centre might be trying to say that sea water can have a temperature of less than -2.0C.

          20

    • #
      John McLean

      It’s not an average temperature.
      It’s all about temperature anomalies based on monthly mean temperatures. You take a monthly mean, such as the one for September this year and compare it to the average September mean temperature for the 30 years from 1961 to 1990. (This is the period that HadCRUT4 uses, other datasets sometimes differ.) Ultimately it’s a question of how the mean for the month compares to the average mean for that month.

      Using the anomaly has nothing to do with scaring people; it’s simply a way to take into account that different parts of the world have different temperatures in every month.

      When someone says the HadCRUT4 global average temperature has increased by X, they simply don’t know what they are talking about. HadCRUT4 is all about temperature anomalies, and it’s the global average anomaly that changes.

      30

  • #
    pat

    this is so funny. first fact-check has Borenstein wrongly claiming Trump suggested “scientific community is substantially split”…and it doesn’t get any better as he tries to debunk the rest:

    17 Oct: AP FACT CHECK: Trump misses on storms, science, clean air
    By SETH BORENSTEIN
    President Donald Trump in his interview with The Associated Press said he has “a natural instinct for science.” But what he said about hurricanes, clean air and climate doesn’t quite get the science right.
    More than a dozen scientists, economists and climate negotiators pointed out where his comments didn’t fit with the reality of human-caused climate change, hurricanes and air pollution.
    A look at his statements Tuesday:

    TRUMP, when asked about a dire United Nations report this month on climate change that said dangerous warming has already happened and that with each degree, the many harms to Earth will get even more treacherous: “No, no. Some say that and some say differently, I mean you have scientists on both sides of it.”

    THE FACTS: He’s wrong to suggest the scientific community is ***substantially split. Scientists from around the world wrote the recent report, and it was unanimously accepted by government representatives around the world, including in the United States, said Cornell University climate scientist ***Natalie Mahowald, a lead author of the report…

    University of Illinois climate scientist Donald Wuebbles, a lead author of that national report, emailed that “there is no debate AT ALL going on about this within the scientific community.”

    “Trump might as well be saying that there are scientists on ‘both sides of the gravity debate,’” Pennsylvania State University climate scientist Michael Mann said in an email. “Dangerous climate change impacts are already apparent. Of course there are uncertainties. There always are. There are uncertainties in the science of gravity (we have never measured a graviton, the fundamental unit of gravity). That doesn’t make it safe to jump off a cliff.”…
    https://www.apnews.com/8f9a5a44695a4a6cbf22a9edad755e4d

    40

    • #
      pat

      ***Natalie Mahowald:

      18 Oct: CornellUniChronicle: UN climate report author: ambitious actions needed to slow global warming
      By David Nutt
      (David Nutt is managing editor of the Atkinson Center, Cornell)
      In March 2017, Natalie Mahowald, professor of earth and atmospheric sciences and the Atkinson Center for a Sustainable Future’s faculty director for the environment, was selected by the United Nations’ Intergovernmental Panel on Climate Change (IPCC) as a lead author on the “Special Report on Global Warming of 1.5 degrees Celsius.”…
      In this Q&A, Mahowald discusses her role in the report, its findings and proposed solutions, and the work everyone must do to limit global warming…

      Mahowald: This report is also unique in that the governments called for its creation. It came out of the United Nations Framework Convention on Climate Change. The governments asked us to look at how we could limit global warming to 1.5 C, as well as a 2 C target (or 3.6 F), and what the differences would be. In previous reports, the scientists decided what the questions would be. But I think policymakers are better at figuring out what’s policy-relevant. Now we’ve passed it off to them to decide what to do with it…

      Mahowald: There are a lot of different ways to limit warming to 1.5 C, but we can’t just choose one of these options. It’s much easier to reach a lower temperature target if we change our own behavior, if we conserve energy, if we think about where our food is sourced, if we reduce food waste. In addition, we need to convert to sustainable energy, without a doubt. Luckily there’s all these innovations in wind and solar, and now they’re cheaper than using fossil fuels.
      A lot of ways we’re going to reduce our impacts actually help us now. For example, if we switch off of fossil fuels like coal or natural gas and move onto something like wind or solar, air quality will improve…

      Mahowald: Climate change is really a long-term problem, so we need to be moving relatively quickly to transition from dirty energy sources to cleaner, cheaper energy sources, like wind or solar, as well as sustainable agriculture, over the next 10 years…

      Mahowald: I think it is important that some of the new studies within the report show a rise of 0.5 C – which doesn’t sound like very much – actually will be felt by humans and by ecosystems…We’ve already felt climate change, right now at 1 C, and at 1.5 C we’re going to be able to tell the difference. And between 1.5 C, and 2.0 C, we’ll be able to tell the difference…
      http://news.cornell.edu/stories/2018/10/un-climate-report-author-ambitious-actions-needed-slow-global-warming

      About: David R. Atkinson Center for a Sustainable Future, Cornell Uni
      Founded by a generous gift from David and Pat Atkinson, the David R. Atkinson Center for a Sustainable Future empowers faculty to think outside of their departments and across disciplines when it comes to tackling the world’s greatest challenge—creating a vital and resilient future for the global community…the Atkinson Center supports the pioneering risk-takers at the very heart of radical collaboration through multiple funds and fellowship programs, including:

      Innovation for Impact Fund—The Innovation for Impact Fund (IIF) connects our nonprofit, government, and industry partners with the research capacity at Cornell in order to jointly develop and test evidence-based solutions. Current collaborators include CARE, The Nature Conservancy, Environmental Defense Fund, Avangrid,
      and the Smithsonian Conservation Biology Institute…ETC

      (from About David and Patricia Atkinson): David Atkinson is the founder of Atkinson & Company in Princeton, New Jersey. He retired in 1992 as general partner of Miller, Anderson & Sherrerd LLP, a Philadelphia-based investment counseling firm. He held a number of previous positions in the financial industry, including vice president in the research department of Morgan Stanley, cofounder of Franklin Capital Investors, and manager of the Scudder Development Fund at Scudder, Stevens & Clark. David earned an MBA in 1964 from the University of Pennsylvania’s Wharton School of Business in Philadelphia. He was an officer in the U.S. Navy for two years, after graduating with a BS from Cornell
      News coverage:
      October 28, 2010 Cornell U. gets $80M gift for sustainability work (AP)
      https://www.atkinson.cornell.edu/about/index.php

      20

    • #
      Sceptical Sam

      President Trump is correct.

      In excess of 31,000 scientists have signed the Oregon Petition. That includes some 9,029 PhDs; 7,157 MScs; 2,586 MDs and DVMs; and 12,715 BSc or equivalent academic degrees. Most of the MD and DVM signatories also have underlying degrees in basic science.

      They maintain that

      There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.

      http://www.petitionproject.org/index.php

      If that’s not a split then the atom is still in one piece too.

      100

    • #
      yarpos

      ““Trump might as well be saying that there are scientists on ‘both sides of the gravity debate,’” Pennsylvania State University climate scientist Michael Mann said in an email. “Dangerous climate change impacts are already apparent. Of course there are uncertainties. There always are. There are uncertainties in the science of gravity (we have never measured a graviton, the fundamental unit of gravity). That doesn’t make it safe to jump off a cliff.”…”

      The Golden Barf for false equivalence goes to…

      00

  • #
    pat

    behind paywall:

    ‘Developing nation’ China seeks $260m climate handout
    The Australian-15 hours ago
    China has asked for money from a UN-backed Green Climate Fund for developing countries that exceeds Australia’s entire $200 million … Australia’s decision to withdraw support came ahead of a crucial meeting in Bahrain this week, which had hoped to bring the GCF back from the …

    90

    • #
      Sceptical Sam

      Finally the Morrison Liberal/National coalition gets something right on the IPCC and the UN.

      Well done Prime Minister. Now, pull out from the Parisian farce and show us that you mean it.

      100

  • #
    OriginalSteve

    Ouch ouch ouch… a new sheriff in town……

    The emperor’s new clothes… and it all comes out in the wash?

    https://www.abc.net.au/news/2018-10-19/pacific-leaders-slam-melissa-price/10394944

    Pacific leaders have come out swinging after Environment Minister Melissa Price allegedly said the region was “always” seeking “cash” from Australia, with Cook Islands’ Acting Prime Minister Mark Brown comparing the remarks to something you would hear from the “flat Earth society”.

    Key points:
    * Melissa Price strongly denies making the comments
    * Anote Tong has brushed off the incident as a “distraction”
    * But other Pacific leaders have taken a much stronger stance

    Ms Price is accused of approaching former Kiribati president Anote Tong in a Canberra restaurant and telling him, “I know why you’re here. It’s for the cash”, according to the account of Labor frontbencher Pat Dodson.

    Multiple sources told the ABC she continued by saying: “For the Pacific it’s always about the cash. I have my chequebook here. How much do you want?”

    The man at the centre of the story, Mr Tong, offered a relatively muted response, but other Pacific leaders have now hit out against the alleged comments — which Ms Price denies making.

    Mr Brown suggested that this week’s events are a reflection of the Australian Government’s broader attitude toward the Pacific and climate change, the most pressing issue facing the island nations.

    Is this the reason for the cash comments sensitivity?

    https://www.abc.net.au/news/2018-07-19/tonga-to-start-repaying-controversial-chinese-loans/10013996

    “The Tongan Government will soon begin making repayments on controversial Chinese loans that critics have said saddled the small Pacific nation with unsustainable debt.

    Key points:

    * Two-thirds of Tonga’s $240 million external debt is owed to China’s Exim Bank
    * China granted an extension to the loans’ grace period but refused to convert them into a grant
    * Tonga’s Chinese debt is held up by some as a prime example of China’s debt-trap diplomacy
    * Tonga’s Prime Minister Akilisi Pohiva recently confirmed his Government would start to repay the principal on two loans worth around $160 million from China’s Export Import Bank.

    The loans from 2008 and 2010 were used, in part, to rebuild the central business district in the capital Nuku’alofa after riots in 2006.

    Mr Pohiva told a press conference last week that his Government would continue to ask China to waive the debt.”

    40

  • #
    pat

    AP has a much-shortened version of this story from Providence Journal:

    16 Oct: Providence Journal: Expert tells URI (Uni of Rhode Island) crowd that climate-change research may mean ‘our survival’
    By Alex Kuffner
    NARRAGANSETT — The former head of oceanography and meteorology for the Navy argued for more funding for research to understand the impact of climate change while delivering the keynote speech at a science symposium at the University of Rhode Island on Tuesday.
    “It’s not just science at stake. It’s our survival,” Rear Admiral (Ret.) Jonathan White said to hundreds of people at the event at the Graduate School of Oceanography campus in Narragansett.

    White is president and CEO of the ***Consortium for Ocean Leadership, a Washington, D.C.-based group that advocates for ocean research, education and policy. His name was mentioned last year in connection with the top position at the National Oceanic and Atmospheric Administration, but President Donald Trump instead nominated Accuweather CEO Barry Myers (NOT YET CONFIRMED).

    Standing in front of images of the destruction wrought last week by Hurricane Michael at Tyndall Air Force Base, in Florida, and flooding around Naval Station Norfolk, in Virginia, he said that climate change is a threat to coastal military installations and, in a larger sense, to national security overall.
    “Our military, the more and more they have to deal with infrastructure and the effects of climate change, whether it’s helping others or trying to get in and out of our bases, the less ready they are going to be to go on missions … all over the world,” he said.

    It was a point that was also raised by U.S. Rep. James R. Langevin, who has pushed for an assessment of the military’s vulnerabilities to climate change.
    “The dangers to national security are real and we must support the researchers who improve our understanding of the threat and ways to mitigate it,” he said.
    The symposium’s focus was not just on security issues but on the effects of sea-level rise, more powerful storms and increased rainfall on coastal communities in general.

    The event came the week after the Intergovernmental Panel on Climate Change warned that the impacts of climate change are now expected to be more dire than previously thought…
    “We only have 10 to 20 years to solve this now,” said John King, a professor of geological oceanography at URI. “And transformational change is what we need.”

    White described potential impacts on drinking water, food supplies and ocean health caused by increased runoff and by harmful algae blooms. The key to understanding them all is science, but the projections and models need to be improved to reduce uncertainty and spur action, he argued.
    “We have to keep investing in it because the answers and uncertainty are not where they need to be,” he said. “The investments in ocean science are critical.”
    http://www.providencejournal.com/news/20181016/expert-tells-uri-crowd-that-climate-change-research-may-mean-our-survival

    Friends of NOAA: Consortium for Ocean Leadership
    Washington, DC-based nonprofit organization that represents 101 of the leading public and private ocean research and education institutions, aquaria and industry with the mission to advance research, education and sound ocean policy

    9 Feb: SpaceCoastDaily: Viera High School Win Regional Ocean Science Academic Competition for Second Year in Row
    BREVARD COUNTY • VIERA, FLORIDA – For the second year in a row, students from Viera High School won the Manatee Bowl, a regional ocean science academic competition that is part of the National Ocean Sciences Bowl…
    The NOSB, a program of the Consortium for Ocean Leadership, is building our next generation of marine scientists, policymakers, teachers, explorers, researchers, technicians, environmental advocates, and informed citizens by educating them in timely and relevant ocean science topics that are already a part of our future…
    http://spacecoastdaily.com/2018/02/viera-high-school-win-regional-ocean-science-academic-competition-for-second-year-in-row/

    from earlier Providence Jnl article:
    John King, professor, geological oceanography— “Climate Model Predictions and Trends in Observational Data for Coastal Environments.”…

    Wikipedia: James Langevin: Representative Langevin leans to the left on environmental and energy issues in Congress. Environmental issue groups have generally given him high ratings; more recently he received a 97% from the League of Conservation Voters in 2011. He has also received a rating of 100% from the Defenders of Wildlife Foundation. Conservative issue groups concerning the energy and the environment have given him very low ratings. He is a strong supporter of alternative energy from oil and coal, voting ‘Nay’ for the Stop the War Against Coal Act of September 2012 and he has supported measures for new wind farms in New England…

    21 Sept: Uni of Rhode Island (URI): URI ocean engineer: Sound from wind farm operations having no effect on environment
    But pile driving during construction could have affected marine life…READ ON
    https://today.uri.edu/news/uri-ocean-engineer-sound-from-wind-farm-operations-having-no-effect-on-environment/

    10

    • #
      pat

      10 Oct: Gizmodo: Anti-Wind Farm Activism Is Sweeping Europe And The U.S. Could Be Next
      by Michael Waters
      Rhode Island’s 6-turbine Block Island Wind Farm, which opened in December 2016 after angry locals likened it to “visual pollution,” now holds the title of the first U.S. offshore wind farm…
      https://www.gizmodo.com.au/2018/10/anti-wind-farm-activism-is-sweeping-europeand-the-us-could-be-next/

      8 Oct: CNBC: Orsted buys Rhode Island offshore wind business for $510 million
      •Rhode Island-based Deepwater Wind is responsible for the only operational offshore wind farm in the U.S.
      •Orsted says that Deepwater Wind’s portfolio has a total potential capacity of 3.3 gigawatts.
      by Anmar Frangoul
      Danish energy business Orsted has entered into an agreement with the U.S.-based D.E. Shaw Group to buy a 100 percent equity interest in its offshore wind developer Deepwater Wind…
      Orsted said that Deepwater Wind’s portfolio had a total potential capacity of approximately 3.3 gigawatts (GW), including offshore wind development projects in Rhode Island, Connecticut, Maryland and New York. Orsted’s offshore wind portfolio in the U.S. amounts to around 5.5 GW…

      Once the Deepwater Wind transaction – which is subject to clearance by U.S. competition authorities – is closed, the new organization will be called Orsted U.S. Offshore Wind. The deal is expected to close by the end of 2018, Orsted said.
      https://www.cnbc.com/2018/10/08/orsted-buys-rhode-island-offshore-wind-business-for-510-million.html

      10

    • #
      Robber

      How ironic – the science is settled, but we need more funding for research. So after 30 years we still don’t have a reliable model, and still can’t predict the next season with high confidence. From the BoM:
      The November to January climate outlook, issued 11 October 2018, indicates large parts of eastern Australia are likely to be drier than average.
      November shows a strong likelihood of drier conditions across most of the eastern two-thirds of the country. However, there are exceptions along parts of the east coast of Australia to the east of the Great Dividing Range, where there is no strong indication of either a wetter or drier month.
      November to January days are very likely to be warmer than average for most of Australia.
      But when you read their fine print: It’s the chance of exceeding the median maximum, with no information by how much.

      50

  • #
    Another Ian

    I wonder if the Hadley Centre is observing what happens when you go play in the big girl’s sandpit?

    30

  • #
    pat

    17 Oct: UK Telegraph: Is onshore wind back on? Fresh support could set new turbines spinning
    By Jillian Ambrose
    New onshore wind turbines could soon begin spinning as fresh support for onshore wind plans grows within the Conservative Party and wind-swept local communities.
    Government ministers have hinted that the block on wind farms could fall away in the case of communities which don’t oppose the plans.

    The softening stance was welcomed by an influential Tory backbencher earlier this week who said the energy source is due a rehabilitation from its toxic reputation…

    James Heappey, who also sits on the parliamentary group for energy studies, said “the enemy of green energy, in the mind of the public, is the link between green energy and subsidy.”
    It is time to “rehabilitate onshore wind”, for the communities which want them, now that the cost of wind power technology has plummeted, he told delegates of a London conference.
    The vote of support for onshore wind has emerged as fresh research shows almost seven in ten Scots living in rural areas support the use of onshore wind energy…
    (EX-BBC) Richard Black, director of the Energy and Climate Intelligence Unit (ECIU), said: “Given that onshore wind is now the cheapest form of new electricity generation bar none, and that we have a government committed to keeping energy bills low, you do wonder when exactly dots will be joined, especially since the government has just set the UK on the road to net zero emissions.”
    https://www.telegraph.co.uk/business/2018/10/17/onshore-wind-back-fresh-support-could-set-new-turbines-spinning/

    18 Oct: The Courier: Two-thirds of rural Scots back onshore wind farms, says Survation poll
    by Gareth McPherson
    Wind farm developments should be accelerated with the help of state subsidies after a poll revealed two-thirds of rural Scots support them, campaigners say.
    A Green MSP in Fife has called on the UK Government to re-introduce the cash support on the back of the survey, so more energy projects can go ahead…
    WWF Scotland said the poll is “another nail in the coffin for the myth that renewables are unpopular”…
    But Bill Bowman, the Scottish Conservative MSP, warned against using the survey as evidence the public wants to see large wind farms in places like Angus and Dundee…
    “I know that locally in Angus or Dundee there is not support for industrial-scale wind farms sprawling across the landscape.”…

    Mark Ruskell, the Mid Scotland and Fife MSP for the Scottish Greens, said the poll makes “uncomfortable reading” for those ignoring the scientific evidence.
    “With so many Scots favouring a renewables future and the cost of generation dropping, the Westminster government must think again and allow wind farms to compete for price support,” he said.
    “So many wind farm proposals have passed through planning, but are stuck without price support.
    “With a growing demand for carbon-free electricity, these schemes need to be up and running now.”

    Graeme Dey, the SNP MSP for Angus South, said the poll shows the decision to cut subsidies is “dangerously out of step with public opinion”.
    Paul Wheelhouse, Scotland’s Energy Minister, said: “We believe that, subject to ensuring the right sites are chosen, potential still exists for further wind development and for repowering of older, less efficient sites…
    https://www.thecourier.co.uk/fp/news/politics/scottish-politics/746793/two-thirds-of-rural-scots-back-onshore-wind-farms-says-survation-poll/

    18 Oct: The Courier: Press Assocn: Majority of Scots support onshore wind energy – (Survation) survey
    The poll, which questioned 1,036 people across the country between September 28 and October 2, also indicated high levels of support for other renewable energy technology, with almost 77% of those surveyed backing offshore wind energy, 80% supporting wave and tidal, 81% backing for solar energy, and 67% supporting biomass – energy obtained using organic material.
    Just under a third of Scots, 32%, think Scottish ministers should implement fracking, the controversial technique designed to recover gas from shale rock…

    Jenny Hogan, deputy chief executive of Scottish Renewables said: “The nature of Scotland’s renewable energy resource – our wind, tides, forestry and even our long summer evenings, among others – means many renewable energy developments take place in rural areas, providing jobs and economic opportunities which otherwise may not have existed…

    20

  • #
    pat

    17 Oct: Scotsman: Millions of electric car charge sites are needed
    Three million charge points will be needed at commercial and industrial sites to support widespread use of electric vehicles (EVs) in Britain by 2040, according to a new report…
    Workplaces, supermarket car parks and motorway service stations are among the areas which must provide EV facilities due to only 60 percent of households having access to private parking, a study by Aurora Energy Research found.
    This would represent a “huge expansion” of EV infrastructure.

    There are around 14,000 public charging points across the UK…

    Aurora based its analysis on the number of EVs on the road reaching 35 million by 2040. It found landowners could make a profit from the charge points if motorists paid for the electricity they used…
    https://www.scotsman.com/future-scotland/tech/millions-of-electric-car-charge-sites-are-needed-1-4816460

    About Aurora Energy Research: Aurora was founded in 2013 by University of Oxford Professors and economists that saw the need for a deeper focus on quality analysis. With decades of experience at the highest levels of academia and energy policy, Aurora combines unmatched experience across energy, environmental and financial markets with cutting-edge technical skills like no other energy analytics provider…
    Aurora’s intelligence and influence is vital to the global energy transformation.

    18 Oct: EnergyVoice: £6bn investment needed to make UK an EV nation
    by David McPhee and Neil Lancefield
    Ben Collie, project leader for Aurora Energy Research, said yesterday that £6bn would be required to achieve the three million charging points needed for total coverage in the UK.
    Mr Collie said, while the UK “had the ability to make it happen” it’s also “got to make economic sense”…
    “In terms of prices for drivers, if a typical driver spends £30 a week on fuel currently, they’ll spend about £10 a week with an EV. It would cost about £300 for charging equipment, but this would last for a long time.”
    Aurora’s head of flexible energy and battery storage Dr Felix Chow-Kambitsch said the roll-out of EVs over the next 20 years would “radically transform Great Britain’s energy system”…
    It was announced last week that Government grants for new electric and hybrid cars will be slashed…

    10

  • #
    pat

    17 Oct: UK Telegraph: Investors representing €21 trillion urge pension funds to address climate risks
    by Lucy Burton
    A shareholder group acting on behalf of investors with €21 trillion worth of assets has warned that pension funds are “unprepared” to meet fresh climate change rules and must take action.
    The Institutional Investors Group on Climate Change (IIGCC) is writing to 12 large UK pension funds which admitted earlier this year they had no plans to report on how they are taking climate change into account “to remind them of their duty to do so”.

    It is also publishing a rulebook for pension fund trustees and board members so they “understand the exposure of their investments to climate-related risks” and adapt to new rules which require pension funds to report on how climate change could impact their…
    https://www.telegraph.co.uk/business/2018/10/16/investors-representing-21-trillion-urges-pension-funds-address/

    10

  • #
    pat

    ABC goes begging for insults:

    19 Oct: ABC: Pacific leaders slam Melissa Price over alleged Canberra restaurant cash slur
    Pacific Beat By Catherine Graue
    Posted about 2 hours ago
    Pacific leaders have come out swinging after Environment Minister Melissa Price allegedly said the region was “always” seeking “cash” from Australia, with Cook Islands’ Acting Prime Minister Mark Brown comparing the remarks to something you would hear from the “flat Earth society”…

    Mr Brown suggested that this week’s events are a reflection of the Australian Government’s broader attitude toward the Pacific and climate change, the most pressing issue facing the island nations.
    “I do understand that science is a difficult subject for some people, but some of these comments coming from Australian ministers kind of resemble the comments you hear from the flat Earth society,” he told Pacific Beat…

    He believes that there has been a clear shift in Australia’s approach in recent months — coinciding with the Liberal leadership spill that saw Scott Morrison become Prime Minister.
    “Australia has been a significant proponent of Green Climate Fund and it supports efforts to try mitigate effects of climate change especially to the Pacific … these sorts of comments coming out, the PM saying he’s not going to put money into the GCF, are concerning.”

    Asked about the reports, Niue’s Premier Toke Talagi told Pacific Beat that “people say some stupid things sometimes”.
    Mr Talagi, who is not known for holding back, said he knew how he would have responded:
    “How much is she prepared to pay? I would have shot back and said, hang on yeah, $50 million, put it in my bag,” he said…

    China has an ‘edge’ over Australia with climate change
    Graeme Smith, from the Australian National University’s Department of Pacific Affairs, said politicians had been committing a “comedy of errors” when it came to Pacific relations in recent weeks…

    But Mr Smith said China was also ahead of Australia on climate change in the Pacific — and willing to provide the needed funding — which presented a “massive problem” for Canberra.
    “The irony is, of course, that when you talk to the Chinese officials they’re more than willing to stump up the cash for climate change because however authoritarian their Government might be, they still listen to their scientists and they still believe that climate change is a real thing,” he said.
    “That is one area that China really has an edge over Australia and the US at the moment in the Pacific.”
    https://www.abc.net.au/news/2018-10-19/pacific-leaders-slam-melissa-price/10394944

    Career highlights
    Appointment as senior lecturer at the University of Melbourne (2016) where I created the Little Red Podcast; appointment as a research fellow at the University of Sydney (2012); a postdoctoral fellow at UTS (2009); research associate at the ANU (2007), lecturer in Chinese Studies at the University of NSW (1999-2004). Visiting scholar at Sun Yat Sen University (2013), and the School of Environmental Science and Engineering, Tsinghua University (2001). Various academic awards, including the best article prize for The Journal of Pacific History (2013), the Gordon White Prize for the best article in China Quarterly (2011), WJ Liu Esq. Memorial Prize (2001), UNSW-Tsinghua Scholarship (2001), Chinese Government Scholarship (1998-1999).

    Researcher’s projects
    The Little Red Podcast. A monthly podcast on China beyond the Beijing beltway, hosted with Louisa Lim, former China correspondent for the BBC and NPR. The podcast is accompanied by monthly pieces in the Lowy Interpreter and the LA Review of Books, and is supported by the ANU’s Chinoiresie and the University of Melbourne…
    https://researchers.anu.edu.au/researchers/smith-gk

    10

    • #
      pat

      for Graeme Smith/ANU. lucky for you, theirABC will allow you to claim any rubbish you like about China taking CAGW seriously. what a joke:

      19 Oct: Reuters: China September coal output hits nine-month high as new capacity starts up
      China’s coal output hit its highest in nine months in September, government data showed on Friday, boosted as new mining capacity started up in the country’s northwest.
      China has been given the go-ahead for a number of big coal mines as it tries to ease concerns about fuel shortages amid a crackdown on small outdated mines and tightened emission controls.

      Miners churned out 306.01 million tonnes of coal last month, up 3.2 percent from 296.6 million tonnes in August and up 5.2 percent from the same time last year, according to the data from the National Bureau of Statistics (NBS).
      Output over the first nine months of 2018 in the world’s top producer of the commodity reached 2.59 billion tonnes, up 5.1 percent from a year earlier.
      Coal output from the region of Inner Mongolia last month jumped 11.3 percent from the same month in 2017, while Shaanxi province saw growth of 9.9 percent, according to NBS.

      By the end of June, China had a total of 3.49 billion tonnes of coal mining capacity, while another 976 million tonnes of capacity is still under construction, according to data from the National Energy Administration.
      “We expect to see more capacity being released in the coming months, which will further boost coal output in China,” said Cheng.

      Meanwhile, northern China will soon turn on coal- or gas-powered heating as temperatures drop sharply in the run-up to winter. The official heating season starts on Nov. 15.
      Benchmark thermal coal prices on the Zhengzhou Commodity Exchange CZCcv1 have risen around 6 percent from early September.
      https://www.reuters.com/article/us-china-economy-output-coal/china-september-coal-output-hits-nine-month-high-as-new-capacity-starts-up-idUSKCN1MT0E9

      20

  • #
    pat

    oh China does take CAGW seriously!

    ScienceDirect: Impacts of heat, cold, and temperature variability on mortality in Australia, 2000–2009
    Authors: Jian Cheng, Zhiwei Xu, Hilary Bambrick, Hong Su, Shilu Tong, Wenbiao Hu

    School of Public Health and Social Work & Institute of Health and Biomedical Innovation, Queensland University of Technology, Queensland, Australia
    Department of Epidemiology and Health Statistics, School of Public Health, Anhui Medical University, Anhui, China
    School of Public Health and Social Work, Queensland University of Technology, Queensland, Australia
    School of Public Health, Institute of Environment and Human Health, Anhui Medical University, Anhui, China
    Shanghai Children’s Medical Centre, Shanghai Jiao-Tong University, Shanghai, China

    We collected daily time-series data on all-cause deaths and weather variables for the five most populous Australian cities (Sydney, Melbourne, Brisbane, Adelaide, and Perth), from 2000 to 2009…
    Heat, cold and temperature variability together resulted in 42,414 deaths during the study period, accounting for about 6.0% of all deaths. Most of attributable deaths were due to cold (61.4%), and noticeably, contribution from temperature variability (28.0%) was greater than that from heat (10.6%)…

    Our findings highlight that, in addition to heat and cold, temperature variability needs to be considered in assessing and projecting the health impacts of climate change.
    https://www.sciencedirect.com/science/article/pii/S0048969718340774?dgcid=rss_sd_all

    20

  • #
    theRealUniverse

    Give up Hadley Centre, the real earth data doesn’t fit your hypothesis. End of story.
    Quibbling over a few odd temp data points won’t fix the reality of a cooling planet AND a wrong theory (AGW).

    102

  • #

    When I download the CRUTEM4 station data, it doesn’t seem to include APTO UTO. Does anyone know if it is actually included?

    00

    • #
      Jim Ross

      As stated on the Met Office website:
      The station records on which the gridded CRUTEM4 dataset is built are available in ascii format in this archive: CRUTEM.4.6.0.0.station_files.zip.

      File ID is 800890 (folder 80), shows temperature values up to 1987.

      00

      • #

        If you go to the bottom of this page, and look at the file listing the stations used, it doesn’t appear to be there.

        https://crudata.uea.ac.uk/cru/data/temperature/crutem4/station-data.htm

        00

        • #
          Jim Ross

          Indeed, but you are looking at an old file, just like Nick Stokes was. The clue is in the filename: as of 020611.

          Try here to see where the latest data can be found: https://www.metoffice.gov.uk/hadobs/crutem4/data/CRUTEM.4.6.0.0_release_notes.html

          Goes up to August 2017, with station data linked as per my original response (v4.6.0.0).

          00

          • #

            My understanding is that that is the raw data, not necessarily the data included in the final dataset.

            00

            • #
              John McLean

              The station data files contain data supplied by national meteorological services. It appears that the CRU calculate the standard deviations and the ‘normals’ and include them in the station metadata, which is about 20 records at the start of each station file, prior to the monthly mean temperatures.

              These files are processed to produce the CRUTEM4 and HadCRUT4 datasets.
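[A sketch of a reader for a station file of the shape McLean describes: a short block of header records, then rows of a year plus 12 monthly means. The `key= value` header convention and the exact row layout are assumptions here; verify them against a real file from the CRUTEM.4.6.0.0 station archive before relying on this.]

```python
def read_station_file(lines):
    """Sketch of a reader for a CRUTEM4-style station file: roughly 20
    metadata records at the top, then rows of 'year' plus 12 monthly
    mean temperatures. The 'key= value' header format is an assumption
    to be checked against an actual file from the archive."""
    meta, rows = {}, []
    for line in lines:
        parts = line.split()
        if len(parts) == 13 and parts[0].lstrip("-").isdigit():
            # a data row: year followed by 12 monthly mean temperatures
            rows.append((int(parts[0]), [float(v) for v in parts[1:]]))
        elif "=" in line and not rows:
            # a metadata record, e.g. "Number= 800890"
            key, _, value = line.partition("=")
            meta[key.strip()] = value.strip()
    return meta, rows

# Usage with an open file:  meta, rows = read_station_file(open("800890"))
```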

              The processing includes rejecting values that are either flagged as missing or are more than five standard deviations from the monthly mean temperature. As I show above, if errors exist in the data that’s used to calculate the standard deviations then the range of accepted values increases and sometimes other obvious errors are not rejected when they should be.
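[The failure mode is easy to demonstrate with made-up numbers — this is a hedged sketch, not Hadley’s actual code: eleven plausible April readings for a tropical station, plus the freak 81.5°C. When the contaminated data itself supplies the standard deviation, the 5-sigma acceptance window balloons and the error sails through.]

```python
import statistics

# Hypothetical April values (deg C) for a tropical station's normals
# window: eleven plausible readings plus a freak 81.5 like Apto Uto's.
clean = [24.1, 24.5, 24.3, 24.6, 24.2, 24.4, 24.5, 24.3, 24.6, 24.4, 24.2]
contaminated = clean + [81.5]

def passes_5_sigma(value, baseline):
    """True if `value` survives a reject-beyond-5-standard-deviations
    check, where the mean and SD are computed from `baseline`."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return abs(value - mean) <= 5 * sd

# Against clean data the error is hundreds of sigmas out -- rejected:
print(passes_5_sigma(81.5, clean))          # False
# Against data that already contains the error, the SD inflates to ~16,
# the acceptance window widens to roughly +/-82, and the filter fails:
print(passes_5_sigma(81.5, contaminated))   # True
```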

              00

          • #
            Jim Ross

            Gee, not even a “thank you”. The file I pointed you to contains the station data for Apto Uto, which is the correct (current) version of the file you had been looking at. The Met Office states that “This archive contains all station records used to build the CRUTEM4 dataset.”

            File 800890 contains both the monthly values and the monthly “normals” for the Apto Uto station. Both reflect the erroneous data. Easy to check. April “normal” is shown as 27.8C. Without the erroneous value of 81.5C it would be 24.4C. So the anomaly for April 1978 is invalidly calculated as 53.7C, precisely as John shows above.
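[In code, the check is a one-liner, using the figures quoted from station file 800890:]

```python
# Apto Uto, April 1978: values as quoted from station file 800890.
erroneous_value = 81.5       # deg C, the freak April 1978 reading
normal_with_error = 27.8     # April "normal", itself inflated by the 81.5
normal_without_error = 24.4  # what the normal would be without the error

# The anomaly is computed against the contaminated normal anyway:
anomaly = round(erroneous_value - normal_with_error, 1)
print(anomaly)               # 53.7 -- the invalid gridded value
```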

            I checked the files for the other relevant stations and confirmed John’s numbers for April 1978 that he provides in the table in his post as well as his average of +8.65C. (There is a typo in the station ID for San Antonio del Tach: should be 804470.) I have checked that all these stations belong in the same cell, but I have not confirmed that there are no others that should have been included. I am sure that you can manage to do that.

            The “final dataset”, as you call it, is the gridded anomalies file and it can be found here:
            CRUTEM.4.6.0.0.anomalies.txt.gz or CRUTEM.4.6.0.0.anomalies.nc.gz. All you have to do now is check to see if the relevant grid cell anomaly value for April 1978 is +8.65C. I’ll give you a clue. It is.

            00

            • #
              John McLean

              Thanks for finding the typo Jim. I’ll ask Joanne to change it. The ID given in the table for San Antonio del Tach was in fact for Apto Uto.

              If you are a programmer then can I suggest something. Write some software to strip out some of the details for each station and put them in rows in an Excel spreadsheet. I extracted ID, name, country, Lat., Long., first good year, end year, the period for normals, and the normals and standard deviations for a couple of months. Before reading each station I set default values that I’d recognise as missing-value flags. Put it all in Excel, then add columns with the latitudinal cell number (36 of them) and longitudinal cell number (72), which I call CLAT and CLONG. Somewhere along the way remove all entries with ‘missing value’ for latitude or longitude or the normals or the standard deviations.

              The table can then be sorted in several different ways. In this case sorting on CLAT and CLONG together will put all stations in the same grid cell into successive entries in the worksheet. The table is also useful for finding a station by name and then getting the ID that is its filename. Just be aware that some station names occur more than once, sometimes in the same country.
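The CLAT/CLONG calculation John describes can be sketched as follows. This assumes the standard 5-degree grid (36 latitude bands, 72 longitude bands); the exact indexing convention (north-to-south here, 1-based) is an assumption, and John's own numbering may differ:

```python
# Sketch of a CLAT/CLONG cell-number calculation on a 5-degree grid.
# Convention assumed: band 1 starts at the North Pole (CLAT) and at
# 180 degrees West (CLONG); both indices are 1-based.
def cell_indices(lat, lon):
    clat = int((90.0 - lat) // 5) + 1    # 1..36, counting southwards
    clong = int((lon + 180.0) // 5) + 1  # 1..72, counting eastwards
    # Clamp the exact-edge cases (lat = -90, lon = 180) into the last band:
    return min(clat, 36), min(clong, 72)

# Illustrative coordinates only (roughly the San Antonio area of Venezuela):
print(cell_indices(7.93, -72.51))  # (17, 22)
```

Sorting the spreadsheet on these two columns together, as John says, groups all stations sharing a grid cell into consecutive rows.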


              • #
                Jim Ross

                John, You’re welcome. I hope you saw my comment below that Mosher has finally accepted (elsewhere, not here) that he “may be wrong”!

                I did do some programming in Fortran about 50 years ago using punch cards, but nothing since then. In order to check the Apto Uto data I actually did something very close to what you suggest, except that I extracted the April 1978 data manually (copy and paste from the ASCII file into Excel). This is quite feasible for a single month, but not really practical for more extensive analysis. Anyway, I see that there is some free software available for handling NetCDF files, so I shall be looking into that for evaluating the HadSST3-NH issue.
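For anyone wanting to automate the grid-cell check rather than copy-and-paste from the ASCII file, the lookup logic is simple. A minimal sketch, using a synthetic stand-in for the gridded file (with the real data one would read CRUTEM.4.6.0.0.anomalies.nc with a NetCDF reader such as the free netCDF4 Python package; the array layout assumed here is time x latitude x longitude):

```python
def nearest_index(centres, x):
    """Index of the grid-cell centre nearest to coordinate x."""
    return min(range(len(centres)), key=lambda i: abs(centres[i] - x))

def cell_anomaly(anoms, lats, lons, lat, lon, t):
    """Grid-cell anomaly nearest (lat, lon) at time step t."""
    return anoms[t][nearest_index(lats, lat)][nearest_index(lons, lon)]

# Synthetic stand-in for the gridded anomalies file, one time step:
lats = [-87.5 + 5.0 * k for k in range(36)]   # 36 cell centres, south to north
lons = [-177.5 + 5.0 * k for k in range(72)]  # 72 cell centres, west to east
anoms = [[[0.0] * 72 for _ in range(36)]]
anoms[0][19][21] = 8.65                       # plant the April 1978 cell value
print(cell_anomaly(anoms, lats, lons, 7.5, -72.5, 0))  # 8.65
```

The same nearest-centre lookup applied to the real file is all that is needed to confirm the +8.65C figure for the relevant cell.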


  • #
    Steven Mosher

    There can be no doubt that the obviously flawed data for Apto Uto has been used in the HadCRUT4 dataset regardless of what the Hadley Centre says.

    Apto Uto is not in the stations USED. Primarily because it lacks the required 20 years of data in the 1951 to 1980 period.

    They also apply a 5 sigma filter prior to final calculations.

    All you have to do is check the actual code and run it.


    • #
      John McLean

      I’m not sure what you are trying to say, Mr Mosher. Are you admitting that the documentation from the CRU is flawed (as I think it is, judging from what you claim it says)?

      Or are you still insisting that the Apto Uto data is not used in the CRUTEM4 and HadCRUT4 datasets, despite my demonstrating that it is (and explaining why the 5 standard deviation filter often fails)?

      Your notion of a minimum of “20 years of data in the 1951 to 1980 period” is wrong in several ways.

      1. The HadCRUT4 period for the long-term averages (which it calls ‘normals’) is 1961 to 1990. The period for calculating the standard deviation is 1941 to 1990. Where you get 1951 to 1980 I do not know.

      2. The first version of HadCRUT required 25 years of data, the second 20 years with at least four years in each decade, the third 15 years and now HadCRUT4 has a minimum of 14 years. Jones et al (2012) says “Monthly averages for 1961–1990 (the latest WMO normal period) were calculated from the enhanced station data set, accepting an average if at least 14 years of data are available.”

      3. Jones et al (2012) also says “To assess outliers, we have also calculated monthly standard deviations for all stations with at least 15 years of data during the 1941–1990 period.”

      Now is that clear enough for you? I think an apology would be appropriate.
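John's point about the 5 standard deviation filter can be illustrated numerically. A single huge outlier inflates the very standard deviation that is supposed to screen it, so it can never trip a 5-sigma test over a short normals period. The series below is hypothetical (values around 24-25C plus the bogus 81.5C reading), and whether Hadley uses a population or sample standard deviation is an assumption here, but the arithmetic holds either way:

```python
import statistics

# Hypothetical April series around 24-25 C, plus the bogus 81.5 C reading:
aprils = [24.1, 24.6, 24.2, 24.8, 24.3, 24.5, 24.4, 24.7, 24.2, 24.6,
          24.3, 24.5, 24.4, 24.6, 24.2, 81.5]

mean = statistics.mean(aprils)      # dragged up towards 28 C by the outlier
sd = statistics.pstdev(aprils)      # inflated from ~0.2 to ~14 by the outlier
z = (81.5 - mean) / sd
print(z < 5)  # True: the outlier sails through a 5-sigma filter
```

In fact, if the outlier is included when the standard deviation is computed, a lone spike among n values can never sit more than sqrt(n-1) population standard deviations from the mean (about 3.87 for n = 16), so a 5-sigma screen of this form is mathematically incapable of catching it.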


    • #
      John McLean

      P.S. Morice et al (2012), which deals more with HadCRUT4 (cf. Jones et al (2012) on CRUTEM4), refers in section 3.2.1.2 to stations “with at least 14 years of data in the 1961-1990 reference period” being accepted for a calculation of “Station Climatological Normal Uncertainty”. Morice et al (2012) goes on to talk about how stations with fewer than 14 years of data are dealt with, but I’ve seen no indication that such stations are included.

      John


    • #
      Jim Ross

      I got so sick and tired of people like you, Steven Mosher, continuing to claim that the erroneous data for Apto Uto was not used in the “final dataset”, despite John’s extensive efforts to demonstrate otherwise, that I decided to make the effort to check for myself (see my comment to aTTP above). It is used. John is correct and you are wrong. He is also correct in saying that you owe him an apology.


    • #
      Jim Ross

      John,

      FYI, I just checked aTTP’s blog and here are two quotes from today that you might be interested to see:

      Steven Mosher: “Looks like I may be wrong about this one station [Apto Uto] in John’s work.”
      aTTP: “Steven, Thanks. So, that station may actually be included. It is a pity that the stations used data file is a bit old.”
