Explain this? Rutherglen homogenized with 17 stations including Hillston!

We’ve seen the remarkable change in the Rutherglen record as it was homogenized. This long-running rural record, which looks ideal, apparently had “unrecorded” station moves discovered via thermometers miles away. We have already heard from Bill Johnston, who did some work at Rutherglen and confirmed that the station did not move. The mystery grows.

Since early 2012 Ken Stewart has been asking the BOM which neighbouring stations were used.  Finally, after pressure from The Australian, the BOM has provided the 17 names, and Ken has graphed them.

Follow the chart below. Rutherglen temperatures start off in blue. The yellow line is the average of the 17 “neighbours” used to homogenize that blue line and transform it into the red one, which somehow ends up colder than its neighbours in 1952 and warmer than its neighbours throughout the last 30 years.

See if you can figure it out?

Rutherglen starts off blue. Then the yellow line is used to homogenize that blue line into the red one.

Presumably the BOM technique is a lot more complicated than what Ken has done, but clearly replicating that ACORN final trend is not going to be easy.
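As a rough illustration of what such a naive neighbour-average homogenization might look like, here is a minimal Python sketch. It assumes a single known breakpoint and simply shifts the pre-break segment so its offset from the neighbour average matches the post-break segment; the numbers are invented, and this is emphatically not the BOM’s actual ACORN method.

```python
# Naive neighbour-average homogenization sketch (hypothetical; NOT the BOM's
# actual method). Shift the target station's pre-breakpoint segment so that
# its mean difference from the neighbour average matches the post-break
# segment.

def naive_homogenize(target, neighbour_mean, breakpoint_idx):
    """Return a copy of `target` with the pre-break segment offset by a
    constant so both segments sit at the same level relative to neighbours."""
    diff = [t - n for t, n in zip(target, neighbour_mean)]
    pre, post = diff[:breakpoint_idx], diff[breakpoint_idx:]
    offset = sum(post) / len(post) - sum(pre) / len(pre)
    # Every year before the break is shifted by the same constant offset.
    return [t + offset for t in target[:breakpoint_idx]] + list(target[breakpoint_idx:])

raw = [15.0, 15.1, 14.9, 15.8, 15.9, 16.0]   # target station: step up at index 3
nbr = [15.4, 15.5, 15.3, 15.4, 15.5, 15.6]   # neighbour average: no step
adjusted = naive_homogenize(raw, nbr, 3)     # pre-break years warmed by ~0.8
```

Note how the direction of the “correction” depends entirely on which neighbours are averaged; swap in neighbours with a different trend and the adjustment changes sign.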

The 17 neighbouring places stretch from Beechworth, in the foothills of the Victorian Alps, to Hillston, on the flat plains of hot New South Wales. Hillston is about 370km by road from the wine-growing district of Rutherglen, or 300km direct according to this estimate. Hey, but maybe their climate trends are more similar than they appear…

Hillston is the top left red dot on the map below.

74034 Corowa, 82053 Wangaratta, 82002 Benalla, 72097 Albury Pumping Station, 82100 Bonegilla

74106 Tocumwal, 81049 Tatura, 81084 Lemnos, 72023 Hume Reservoir, 82001 Beechworth

72150 Wagga Wagga, 74114 Wagga Research Centre, 80015 Echuca, 74039 Deniliquin (Falkiner Memorial)

74062 Leeton, 74128 Deniliquin, and 75032 Hillston.

The stations used in adjusting the Rutherglen record (Click to enlarge)

Here are the temperature trends for Hillston at the BOM site. It’s about 100km north of Griffith, NSW. Curiously, from the BOM page you’d almost think Hillston, population 1,054, had an airport in 1912. Now that would be something…

Ken Stewart is doing an excellent job. But why is publicly checking our national database something that’s left to volunteers?


143 comments to Explain this? Rutherglen homogenized with 17 stations including Hillston!

  • #
    lemiere jacques

    everybody supposes there is bad data…
    how do you find bad data?
    First strategy: find objective reasons to conclude the data is corrupted and reject it… no need to adjust… just discard the bad data…
    Second strategy: you assume the data must be geographically homogeneous… because… well, it would be logical, I guess; especially if everything else is similar (geography, environment, rainfall, winds and so on), the trends should be similar… no?
    What I love is when they say they “adjust” the data… they don’t adjust the data, they discard it and replace it with the regional trend, then make it more sexy by adding some lovely interannual bumps from the former data…

    The idea is, for instance, that if something happened that made the data corrupted, it corrupted only the trend and nothing else…

    Don’t ask me why the second strategy is supposed to be more convincing than the first one.

    171

    • #
      James Bradley

      Meanwhile, the reasons given by the BOM for the adjustments get shakier and shakier, with more direct evidence via eye-witness accounts that the stations the BOM said ‘were moved’ were never moved…

      Hmmmmm… probably what the BOM meant was that it modelled where the stations could have been to give raw data that when homogenised would match the data from the station’s actual positions – following additional adjustments that would account for the virtual repositioning of those stations.

      That’s gotta be it – the stations were ‘virtually repositioned’ for the adjusted data to more closely resemble the modelled result.

      Boy, those guys and gals at the Bureau of Readjustment sure do work hard…

      262

    • #
      James Bradley

      So just to extrapolate my new theory – the BOM have taken virtual data from a virtual station and adjusted the virtual data to justify homogenising real data from a real station, and when questioned about the adjustments the BOM responded that their virtual station was real but was moved to its present location by means not recorded and not known – sounds like a shift in the magnetic poles to me…

      322

      • #
        ROM

        James Bradley @ # 1.2

        The correct official term for those non-existent but data-generating [ ??? ] stations is “Zombie stations”.

        If we assume that the weather-data [ ultimately to be used as climate-data ] processing algorithms are very similar around the world, then the BOM’s algorithms for homogenisation and adjustments to station data are very likely to be very similar to the Continental US [ CONUS ] system of adjustments.

        They are in fact likely to be almost identical, as the entire global climate modelling community [ plus aviation, plus shipping, etc. ], operating under the WMO standards umbrella, relies on the whole of the data coming from the same baseline and having the same adjustments applied before it is taken up by the modellers and the GHCN.

        So when we look at the likes of the adjustments that have been applied to Rutherglen as an example, we can probably see just how those adjustments were supposed to work, and, based on the USA experience, see just how utterly flawed and corrupted those adjustment algorithms actually are when in use.
        The USA skeptics are of course much further down the track of unravelling and finding the major flaws in the climate-data adjustment algorithms and their implementation than we are here in Australia. And as Australia’s data gets the magnifying glass run over it, the exact same corruption of the ideals of correcting for non-standard data and establishing a common across-the-board data baseline is showing up.

        For those who wish to understand how the Australian system has probably been corrupted: probably not deliberately, but the adjustments supply the confirmation bias that “proves” a climate warming is under way. That falls into line with the beliefs of the BOM’s die-hard warmist brigade, and so, as is the way with all of humanity, the output of the adjustment algorithms is believed to be correct and is never checked, because it gives the expected results.

        Which is a lot of words to say that confirmation bias has led to circular reasoning that the station-data adjustment algorithms must be correct.

        Anthony Watts, a meteorologist by trade, cut his blogging teeth on finding the reasons why and how individual stations’ temperatures varied due to such small items as using a white plastic or oil-based paint to repaint the Stevenson Screens instead of the historic whitewash, leading to a quite measurable shift in the temperatures recorded in the repainted screen.

        He has been at the forefront of taking on the very powerful establishment in the American weather and climate organisations, and has consequently created a great deal of grief for them as he uncovered innumerable flaws, some already known, a lot unknown. The very lax and previously unaccountable American weather and climate data organisations, in a fit of hubris, had just glossed over those flaws as not worth the trouble: nobody knew about the problems in any case, and surely no skeptics out there were smart enough to figure out that there were problems, let alone what they were.

        So the climate establishment in the USA, as is now also the case in Australia, just adopted the mushroom treatment for anybody who dared to question their expertise or system.

        That was until Anthony Watts, with his practising meteorological background [ he now owns a weather forecasting service ], started throwing a great deal of doubt, backed by hard evidence, on the veracity of the entire NOAA station-data collection system.
        Then Steve McIntyre appeared with his scathing analysis of the Hockey Stick data, plus an increasing array of other bloggers and diggers of climate-academic dirt.

        Life has been very uncomfortable ever since for the climate data collection and processing organisations such as GISS, CRU, NCDC, NOAA, NASA and etc.

        With that bit of background: Anthony Watts has a couple of past posts on the adjustments used by the US data-collection organisations and the totality of the grossly flawed adjustment-algorithm systems those organisations are using to “correct” the data in the USA.

        Station-data correction algorithms which are likely to be almost identical to those currently being used here in Australia by the BOM.

        The scientific method is at work on the USHCN temperature data set

        &
        NOAA’s temperature control knob for the past, the present, and maybe the future – July 1936 now hottest month again

        To give some idea of what homogenisation actually means, as some may not be up to speed with the terminology or its implications, we can take this a bit further by quoting some passages from Zeke Hausfather [ who is working with the BEST climate group ] in his post on Judith Curry’s “Climate etc” blog

        Understanding adjustments to temperature data

        [ 2044 comments; I’ve read the first 1000 or so comments ]

        QUALITY CONTROL>>>

        TIME OF OBSERVATION (TOBS) ADJUSTMENTS >>>>

        PAIRWISE HOMOGENIZATION ALGORITHM (PHA) ADJUSTMENTS
        [ selected quoting ]

        The Pairwise Homogenization Algorithm was designed as an automated method of detecting and correcting localized temperature biases due to station moves, instrument changes, microsite changes, and meso-scale changes like urban heat islands.

        The algorithm (whose code can be downloaded here) is conceptually simple: it assumes that climate change forced by external factors tends to happen regionally rather than locally.
        If one station is warming rapidly over a period of a decade a few kilometers from a number of stations that are cooling over the same period, the warming station is likely responding to localized effects (instrument changes, station moves, microsite changes, etc.) rather than a real climate signal.

        To detect localized biases, the PHA iteratively goes through all the stations in the network and compares each of them to their surrounding neighbors.
        It calculates difference series between each station and their neighbors (separately for min and max) and looks for breakpoints that show up in the record of one station but none of the surrounding stations.
        These breakpoints can take the form of both abrupt step-changes and gradual trend-inhomogeneities that move a station’s record further away from its neighbors…

        The PHA has a large impact on max temperatures post-1980, corresponding to the period of transition to MMTS and ASOS instruments. Max adjustments are fairly modest pre-1980s, and are presumably responding mostly to the effects of station moves. Minimum temperature adjustments are more mixed, with no real century-scale trend impact. These minimum temperature adjustments do seem to remove much of the urban-correlated warming bias in minimum temperatures, even if only rural stations are used in the homogenization process to avoid any incidental aliasing in of urban warming, as discussed in Hausfather et al. 2013.
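        The pairwise comparison Hausfather describes can be illustrated with a toy Python sketch. The real PHA (downloadable from NCDC) is far more elaborate; the stations and numbers below are invented, and the breakpoint test here is just the split with the largest mean shift:

```python
# Toy illustration of the pairwise idea behind the PHA (invented data; the
# real NCDC algorithm is far more elaborate). Compute a difference series
# between a station and each neighbour, then find the split point with the
# largest shift in the mean. A break that shows up against every neighbour
# implicates the station itself, not the neighbours.

def largest_step(diff):
    """Return (index, shift) for the split of `diff` with the biggest
    difference between the left-segment mean and right-segment mean."""
    best_i, best_shift = None, 0.0
    for i in range(1, len(diff)):
        left = sum(diff[:i]) / i
        right = sum(diff[i:]) / (len(diff) - i)
        if abs(right - left) > best_shift:
            best_i, best_shift = i, abs(right - left)
    return best_i, best_shift

station    = [14.0, 14.1, 13.9, 14.0, 15.0, 15.1, 14.9, 15.0]  # step at index 4
neighbour1 = [14.2, 14.3, 14.1, 14.2, 14.2, 14.3, 14.1, 14.2]
neighbour2 = [13.8, 13.9, 13.7, 13.8, 13.8, 13.9, 13.7, 13.8]

for nbr in (neighbour1, neighbour2):
    diff = [s - n for s, n in zip(station, nbr)]
    # The same breakpoint (index 4, shift ~1.0) appears against both
    # neighbours, so the PHA logic would blame the station.
    idx, shift = largest_step(diff)
```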

        INFILLING >>>>
        ____>>>

        I’m actually not a big fan of NCDC’s choice to do infilling, not because it makes a difference in the results, but rather because it confuses things more than it helps (witness all the sturm und drang of late over “zombie stations”). Their choice to infill was primarily driven by a desire to let people calculate a consistent record of absolute temperatures by ensuring that the station composition remained constant over time. A better (and more accurate) approach would be to create a separate absolute temperature product by adding a long-term average climatology field to an anomaly field, similar to the approach that Berkeley Earth takes.

        Changing the past

        Diligent observers of NCDC’s temperature record have noted that many of the values change by small amounts on a daily basis.
        This includes not only recent temperatures but those in the distant past as well, and has created some confusion about why, exactly, the recorded temperatures in 1917 should change day-to-day.
        The explanation is relatively straightforward. NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA-adjustments are done relative to current temperatures.
        Because breakpoints are detected through pair-wise comparisons, new data coming in may slightly change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.

        When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint.

        This means that slight modifications of recent breakpoints will impact all past temperatures at the station in question through a constant offset.

        The alternative to this would be to assume that the original data is accurate, and adjust any new data relative to the old data (e.g. adjust everything in front of breakpoints rather than behind them).
        From the perspective of calculating trends over time, these two approaches are identical, and it’s not clear that there is necessarily a preferred option.
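        Assuming a single detected breakpoint of known size, the two conventions Hausfather describes can be sketched in a few lines of Python; since the two outputs differ by a constant everywhere, any trend fitted to them is identical:

```python
# Sketch of the two equivalent conventions (single breakpoint of known size,
# invented data): shift everything BEFORE the break (NCDC's convention,
# trusting current instruments) or everything AFTER it (trusting the
# original record). The results differ only by a constant offset.

def adjust_before(series, k, step):
    """Remove a +step break at index k by raising the pre-break segment."""
    return [x + step for x in series[:k]] + list(series[k:])

def adjust_after(series, k, step):
    """Remove the same break by lowering the post-break segment instead."""
    return list(series[:k]) + [x - step for x in series[k:]]

series = [14.0, 14.2, 14.1, 15.1, 15.3, 15.2]   # spurious +1.0 step at k=3
a = adjust_before(series, 3, 1.0)
b = adjust_after(series, 3, 1.0)
# Element-wise, a and b differ by exactly 1.0, so fitted trends are identical.
assert all(abs((x - y) - 1.0) < 1e-9 for x, y in zip(a, b))
```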

        ____________
        If you have read to here: thank you 🙂

        152

        • #
          Leigh

          “When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint.

          This means that slight modifications of recent breakpoints will impact all past temperatures at the station in question through a constant offset.”
          And they wonder why people question these experts in corrupting raw data!
          An excellent post that a layman like me could get his head around.
          It still makes it difficult to understand why every adjustment ends up lifting the planet’s temperature.
          Rutherglen, among others, shows a cooling trend in its raw data over the last century.
          Who wrote those “corrupted” algorithms, and why, if they’re known to be so “corrupted”, are they still being used?
          The BOM is doing itself no favours by stonewalling those who are genuinely trying to assist it in correcting what it has altered.
          The “tools” they are using are not fit for the job at hand.

          110

          • #
            Rud Istvan

            Leigh, one problem is the assumption behind the Meele stitching of scalpeled data. FYI, scalpelling says any break (like the mythical Rutherglen station moves) starts a new record, so you have to ‘stitch’ the former record onto the later one. Meele assumes the most recent record is more correct, so you take the last point of the pre-break record and stick it onto the first point of the post-break record. So if the most recent data is rising (it was, from about 1975 generally until about 2000), this seemingly logical process automatically cools the past ‘former’ record by appending its trend to the low first datum of the more recent data. Hope this oversimplified explanation helps you understand the logical flaw in Meele stitching. Zhang’s new paper does a better and more scientific job of showing this mathematical bias using real China stations.
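            A toy Python version of that stitching, as I read the description above (not the actual published algorithm; data invented): splice the old segment onto the new one by pinning the last pre-break value to the first post-break value.

```python
# Toy version of break-stitching as described above (invented data; not the
# actual published algorithm). The most recent segment is trusted, so the
# older segment is shifted until its last point equals the newer segment's
# first point. If the newer segment starts from a low datum, the entire
# past is shifted downward, i.e. the past is cooled.

def stitch(pre, post):
    """Shift `pre` so its last value meets `post`'s first value."""
    offset = post[0] - pre[-1]
    return [x + offset for x in pre] + list(post)

pre  = [15.5, 15.6, 15.4, 15.5]         # older record
post = [15.0, 15.2, 15.4, 15.6, 15.8]   # newer record, rising from a low start
stitched = stitch(pre, post)            # older values all cooled by 0.5
```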

            50

            • #
              Leigh

              Ah! A bit like Michael Mann and his tree-ring circus,
              when he did the “stitching” trick.
              And Jo, point taken and filed.
              I will try to curb my semi-elderly exuberance when taking a stick to the global warmists.


              Thanks. Well done. – Jo

              60

          • #
            ROM

            Leigh @ #1.2.1.1
            Most break points entail a reduction in the recorded temperatures in the screens or the electronic MMTS [ Minimum-Maximum Temperature System ].
            ____________

            [ A bit of info re the MMTS and Stevenson Screen differences in temperature measurements;
            [ quoted ]
            Early comparisons of MMTS readings with temperature measurements from the traditional liquid-in-glass thermometers [ LIGs ] mounted in Cotton Region shelters showed small but significant differences. During the first decade, several studies were conducted, and the published results showed that maximum temperatures from the MMTS were typically cooler and minimum temperatures warmer compared with traditional readings. This was a very important finding affecting climate-data continuity and the monitoring of local, regional and national temperature trends.

            Subsequently the question now being asked is which is the correct temperature, that measured by the MMTS or the old, long established LIG’s?
            _____________

            Break points in the temperature records, an abrupt shift up or down in the record, can be created by a shift of the station from a steadily urbanising location out to a cooler rural location well removed from the urban heat island effect [ UHI ].
            Or consider re-painting a Stevenson Screen, bringing it back up to its original reflective standard instead of a darker, faded, heat-absorbing coloration: that re-painting could create a cooling break point, an abrupt shift downwards in the temperature trend from that station.

            Cooling break points in the recorded temperatures are far more common than warming break points.

            Repainting with a modern plastic, water-based or oil-based paint raises temperatures in a screen compared with re-painting with the original whitewash, as Anthony Watts discovered, leading to the probability of a warming break point.
            Both of those last examples of cooling and warming break points need nothing else done to the screens, but could well lead to the algorithm processing then altering all past temperatures up or down according to the direction of the break point.

            A better explanation of “break points” than I can give, and why they always seem to increase the long-term temperature trends, can be found here:

            Why Automatic Temperature Adjustments Don’t Work

            121

        • #
          Richard C (NZ)

          ROM #1.2.1

          >”the BOM’s algorithms for homogenisation and adjustments to station data are very likely to be very similar to the Continental US [ CONUS ] system of adjustments”

          The BOM’s ACORN method basis for homogenization is Menne & Williams 2009 (MW09) which is the method for USHCN homogenization. BOM adapts M&W09 for Australia (see TR049).

          The BOM’s ACORN method for adjustment is Percentile Matching (PM95), a statistical modelling technique. I don’t think that is the method for USHCN adjustment.

          For much much more on both methods see #38.2 in the ‘Hiding something’ thread here:

          http://joannenova.com.au/2014/08/hiding-something-bom-throws-out-bourkes-hot-historic-data-changes-long-cooling-trend-to-warming/#comment-1554309

          60

          • #
            ROM

            Thanks Richard
            My head spins in any case in trying to get an accurate grasp, let alone the minutiae of the details, of all of this slippery, slithering, slimy, stinking, fetid, monstrous apparition they call climate warming science.

            141

        • #
          Rud Istvan

          ROM, I did. And yes, this has been discussed but not resolved at WUWT, Climate Etc, and so forth. That the adjustments are warming-biased is clear. Why, less so.

          Two example reasons, perhaps among several others also embedded in the code (which could also just be buggy): 1. The regional expectations fallacy is exposed by BEST’s treatment of station 166900. It’s on the Curry thread, no need to repeat. It will be in the forthcoming book as footnote 23 to the chapter “When Data Isn’t”, which seems to aptly describe Rutherglen as well. Needless to say, Mosher went ballistic. An analogy here is Hillston in the Rutherglen comparison group.
          2. A problem with the Meele scalpeled-data stitching algorithm was recently demonstrated by Zhang using Chinese stations in Theor. Appl. Climatol. 115: 365-373 (2014).

          Rutherglen provides delightful exposure of all this. It is apparently a class 1 WMO-ISO compliant research station with a careful long history, so all the BOM hand-waving won’t cut it. Dr. Marohasy had it right a few posts ago: heads should figuratively roll.

          90

          • #
            ROM

            Rud,
            I always read your posts wherever I find them as I find them good and interesting.

            40

          • #
            Richard C (NZ)

            >”1. The regional expectations fallacy is exposed by BEST treatment of station 166900″

            BEST’s “regional expectation” for each site is incredible. How can there be an “expectation”?

            It’s as if there’s a predetermined profile already “out there” somewhere that each site must conform to. But how was the “expected” profile determined?

            I found a similar discrepancy for Hamilton NZ in the Waikato region. There’s a station on the edge of the city (Ruakura Research) that NIWA uses in its 11SS. But does BEST pull it in? Noooo. Hamilton “data” is derived from outside the Waikato region in different climate zones, Auckland and Bay of Plenty regions.

            Needless to say but the BEST profile for Hamilton is nothing like Ruakura but a lot like Auckland.

            40

            • #
              Rud Istvan

              As we say here in the US, BINGO! Another example of the flawed BEST regional-expectations assumption. Rutherglen would seem to be yet another, in a different BOM implementation.

              BTW, 166900 is the South Pole Amundsen-Scott research station, which may have just provided evidence of gravitational waves in the cosmic background radiation. The notion that any of its temperature readings are wrong is even less plausible than at carefully tended Rutherglen. BEST turned no trend into warming by rejecting 26 monthly lows on the grounds that the measurements did not meet the BEST regional climate expectation.
              The nearest Antarctic research station from which such an expectation could be derived is US McMurdo on the coast, 1300km away and 2700 metres lower!

              70

            • #
              Richard C (NZ)

              BEST’s temperature profile for the 3 different regions and climate zones, Auckland, Waikato, and Bay of Plenty, is effectively just the same profile moving up and down the y axis a bit.

              That’s what you get with “regional expectation” where the expectation for all 3 regions above is another bigger region …..somewhere ….out…..there…..(we’re surrounded by ocean).

              70

          • #
            Rud Istvan

              Menne, not Meele. Too many wrong papers to keep mental track of. No time to fully research each comment. Sorry.

            40

            • #
              Richard C (NZ)

              Ahhh, Menne. (heh) Thanks for the correction Rud, I thought you may have meant Meehl.

              So does the [Menne] scalpeled-data stitching [MSDS] algorithm apply to homogenization, or to adjustment, or to both somehow?

              I ask this because the basis of ACORN is Menne & Williams (2009) (M&W09), but only for homogenization. For adjustment the BOM applies the Percentile Matching method (PM95), i.e. they may have discarded MSDS in favour of PM95 if the former is for adjustment.

              I intend to read M&W09 word-for-word eventually, but I can’t answer my own question until I do. If you can’t answer without checking M&W09 yourself, don’t bother; I will myself, one day, soon, I hope.

              20

        • #
          Richard C (NZ)

          >”They are in fact likely to be almost identical as the entire global climate modeling community”

          Not really. BEST uses its own “scalpel” method, NIWA uses an ad hoc variation of Rhoades & Salinger (1993), and GISS uses something else.

          And if you compare the profiles of each for specific sites, they don’t match in many cases.

          The nearby vs remote comparator contention does appear to be a common theme though.

          40

          • #
            ROM

            Richard C @ 1.2.1.4

            Thanks Richard, but with those comments on the differences between the data-processing systems you have just added further mockery to the totality of the stupidity that is passed off onto the public: that what they are being served up is some sort of peer-reviewed science of the highest possible quality, and therefore unchallengeable by the non-climate-scientist or lay person.

            So I gather, then, that site data from across international boundaries, processed using different algorithms, can’t be used in common unless it is further corrected, i.e. processed by yet another algorithm to fall into line with the intended user’s own chosen data-processing system?

            50

            • #
              Rud Istvan

              Terrific point, ROM. Had not thought of it. Now back to another rewrite of the forthcoming book’s essay on this kerfuffle. Many thanks. Probably a new footnote crediting you now in there somewhere.

              50

            • #
              Richard C (NZ)

              >”So I gather then that site data such as across international boundaries, processed using different algorithms can’t be used in common unless they are further corrected”

              I think another institution doing that would just go back to the raw data each time and give it their own treatment. Rutherglen raw, for example, gets different treatment from BOM ACORN, BEST, and GISS. HadCRU uses the BOM’s homogenized HQ Daily, which I don’t think is necessarily identical to ACORN, even though the raw is from the same dataset.

              30

              • #
                ROM

                Not your problem Richard but

                So which one of those is correct?

                And this, for the folks out there, is unchallengeable peer-reviewed climate science of the highest possible scientific quality [ sarc ].

                They apparently can’t even derive a global baseline weather-and-climate data-processing system acceptable to all, despite the billions of dollars of OPM strewn in their paths for the last 25 years.

                91

              • #
                ROM

                Few will read this far down now, but I will note this so that it is on the record, as this post of Jo’s might be referred to by quite a number in the future.

                This comment can be found on the Ice Cap blog, which is a climate-skeptic news aggregation site.
                It is a comment note added by Ice Cap to its report of the WUWT post on this, Jo’s Rutherglen post.

                The link here is to my suggestion above that the algorithms used to “adjust” and homogenise etc. are probably all very similar, due to the benefits for exchanging data and for programming climate models.

                Richard C pointed out above that the different climate-analysis organisations, BOM, CRU, GISS, NCDC, and so on, appear to use different algorithms to process the weather-cum-climate data from their own nations as well as, probably, the GHCN.

                Well, it now seems, and is verified by the ACORN guidance document that sets out the BOM’s ACORN system, that one of the USA’s most hard-line warmists, Tom Peterson, was involved. Peterson is head of the NCDC, where the real corruption, most of the suspected deliberate alterations, and the undoubted and admitted downgrading of past temperatures first occur.
                Similar problems are now appearing across the base-station data of all the various national and global temperature records, as Nature refuses to co-operate with covering up the incompetence now appearing in the NCDC and other climate-analysis organisations, and refuses to increase global temperatures to fall into line with the ideological stance of the most rabid warmists in climate science, Tom Peterson being one of those.

                Tom Peterson was, by BOM’s own document below, one of the three eminent climate scientists advising and guiding the BOM on the ACORN review panel.
                Tom Peterson also tried to destroy Roy Spencer’s career when he accused Spencer, in front of a senior climate scientists’ committee, of what was basically fraud, over what turned out to be a very minor mistake Spencer had made in some calculations, and which he corrected in a few minutes then and there.

                Roger Pielke Sr who was present has said it was one of the most disgraceful, exhibitions he has ever seen another scientist stoop to in the manner in which Peterson attacked and accused Roy Spencer over what was a very minor matter.

                To quote the relevant part of the Ice Cap blog entry as it relates to the BOM ACORN review: the panel also included David Wratt, NZ head of climate science at NIWA, an organisation whose deliberate mangling and complete corruption of the NZ historical climate data probably needs no expanding on. [ With a combination like Peterson and Wratt advising the BOM, plus Jones and Karoly in there as well, does anyone believe that truth and integrity would ever emerge from the BOM’s changes to its data-analysis system? ]

                All of which gives some extremely good grounds to suspect that there is a good deal of underhand manipulation of data going on, with a high level of associated propaganda emanating from the BOM over the so-called ACORN-based recent “record” temperatures. They weren’t any sort of record by a very long shot, or at least weren’t until the historical data had been manipulated sufficiently, or had been surreptitiously pushed to the back of some BOM archival warehouse where, they hoped, it would never be found:

                The Blogsphere
                Tuesday, August 26, 2014
                Australian scientist calls for ‘heads to roll’ over adjusted temperature data (NCDC next?)

                Icecap Note:

                Ironically, the biggest of the changes in the United States was also in 1913. Here in Maine, temperatures in 1913 were cooled by an unbelievable 5F after the latest changes made this spring. That early record cooling ensures the annual temperatures will rank among the warmest every year.
                The only common player in both countries’ changes was Tom Peterson of NCDC, who engineered GHCN and USHCN and also was on the consulting committee advising/directing Australia on their updated data set.

                ____________

                The following is quoted from the BOM’s ACORN-SAT guidance document:

                [ quoted ]
                Three eminent climate scientists with specialised knowledge of climate data collection, management and analysis will provide technical expertise for the Review. Each is internationally recognised for his/her work and is widely published in the field.

                The technical members of the Review Panel are:

                Dr Thomas Peterson is an expert in data fidelity, international data exchange, global climate change analysis as well as the impacts of climate change. He was a lead author on the IPCC Fourth Assessment Report and served as co-chair and co-Editor-in-Chief of the 2009 report Global Climate Change Impacts in the United States. In addition to serving as the Chief Scientist at the U.S. National Climatic Data Center, Tom is currently President of the World Meteorological Organization’s Commission for Climatology.

                Dr David Wratt is Chief Scientist, Climate, at New Zealand’s National Institute of Water & Atmospheric Research. David has a PhD in Atmospheric Physics and has worked in the USA, Australia and New Zealand on climate and meteorology. He is a Companion of the Royal Society of New Zealand, and a member of the Bureau of the Intergovernmental Panel on Climate Change.

                Dr Xiaolan Wang is a Senior Research Scientist at Climate Research Division of Environment Canada. She has considerable expertise in the analysis of climate trends, extremes and variability. She is at the leading edge of development of methods for climate data homogenisation to enable more realistic assessment of climate trends. Dr Wang is a member of a Global Climate Observing System/World Climate Research Programme working group on surface pressure and a member of the U. S. National Oceanic and Atmospheric Administration’s Climate Change Detection and Data Program Panel, 2009.

                [ / ]
                ________________

                Draw your own conclusions when considering the amount of ideological commitment to an anthropogenically created climate catastrophe that lot subscribes to, and their well-known collective history of putting the achievement of their agenda above scientific integrity, honesty and openness with the public.

                10

        • #
          lemiere jacques

          You can have a problem doing homogenization because most of the terrestrial stations are corrupted by urbanization, from an objective point of view… so if the process leads to replacing rural, high-quality stations with the calculated trend of urbanized places, it means you discard the only places where the effect of CO2 on temperature can be observed.

          I do think that we have no certainty of how accurate the trend of global temperature in the last century is; it is hypothetical, and the homogenization process is a very good example of that.

          30

        • #
          aussie pete

          Thank you ROM. Bit of a heavy read for an average Joe like me, but it seems to me that what these Wallies have done is build a model that takes into account as many variables as they can think of, then add an X-factor to ensure the outcome they want. Similar technique to the climate models. In both cases the number of variables and unknowables makes the exercise futile. They may as well be trying to model next Saturday night’s Lotto draw. I certainly wouldn’t be putting my hard-earned on their prognostications.

          20

    • #

      Why make it confusing? This is the same butcher with a thumb on the scale. A tradition for 300 years.

      90

  • #
    Roland LeBel

    Fudging temperature records, right and left. We keep hearing this all the time, not only in Australia, but what about GISS, where this is an ongoing event that has been going on for too long? Is there enough evidence to drag the wrong-doers into court? I just do not get it!!!

    220

  • #

    But why is publicly checking our national database something that’s left to volunteers?

    Answer – mere volunteers make the mistake of comparing the data to the reality of the thermometers, whilst BOM is adjusting to the infallible climate models.

    210

  • #
    Mikky

    The BoM said they would provide details and reasons for adjustments in this document: http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Bureau_Response_WEB.pdf

    C1. A list of adjustments made as a result of the process
    of homogenisation should be assembled, maintained
    and made publicly available, along with the adjusted
    temperature series. Such a list will need to include the
    rationale for each adjustment.

    Agreed. The Bureau will provide information for
    all station adjustments (as transfer functions in
    tabular format), cumulative adjustments at the
    station level, the date of detected inhomogeneities
    and all supporting metadata that is practical. This
    will be provided in digital form. Summaries of the
    adjustments will be prepared and made available to
    the public.

    Don’t hold your breath though, as the adjustment algorithms are “complex” (which does not necessarily, and usually does not, mean clever or appropriate), large amounts of computer code are involved, and you have to get past multiple layers of “management” before reaching someone who actually knows whether the results are credible or garbage.

    Note also that the response dropped the last part about giving the rationale for all adjustments.

    160

    • #
      Richard C (NZ)

      Mikky #4, that report is very useful. Thanks.

      I had been wanting to know about the computer code (i.e. how does BOM actually implement M&W09 and PM95 methodology?) and had started asking about it. I’ve posted the report at Jennifer Marohasy’s ‘So much conversation’ to answer my own question there.

      The code relevant section is:

      C2, page 7 pdf:

      C2. The computer codes underpinning the ACORNSAT data-set, including the algorithms and protocols used by the Bureau for data quality control, homogeneity testing and calculating adjustments to homogenise the ACORN-SAT data, should be made publicly available. An important preparatory step could be for key personnel to conduct code walkthroughs to members of the ACORN-SAT team.

      Agreed. The computer codes underpinning the ACORN-SAT data-sets will be made publicly available once they are adequately documented. The Bureau will invest effort in improving documentation on the code so that others can more readily understand it.

      40

      • #
        Richard C (NZ)

        This from Jennifer Marohasy’s ‘So much conversation’:

        The software does have a title and if you ask the authors nicely, you will probably be able to obtain a copy. The software is RHtestsV4. Its manual is available here:
        http://etccdi.pacificclimate.org/RHtest/RHtestsV4_UserManual_20July2013.pdf

        Cheers,

        Bill Johnston

        30

        • #
          Richard C (NZ)

          I’ve sought clarification at Jennifer’s ‘Rutherglen, still looking for answers’ post:

          Bill. I don’t think RHtestsV4 (as you pointed to previously) is the ACORN-SAT software. It’s a different implementation for North America I think, Quartile Matching, not Percentile Matching for a start. It’s probably a similar example but I can find no reference to RHtestsV4 at BOM or TR049 or searching ACORN-SAT RHtestsV4 on the web.

          The manual you linked to was this:

          http://etccdi.pacificclimate.org/RHtest/RHtestsV4_UserManual_20July2013.pdf

          Could you please confirm, or otherwise, that RHtestsV4 is actually the ACORN-SAT software as used by BOM with some reference, link, or quote?

          00

  • #
    Peter Azlac

    What we have operating here, and elsewhere within the other temperature series, is what we can call the ‘Homogenization Trick’, in which long-term stations with good data showing little to no warming, or even cooling, are swamped with short-run stations starting in the cool period of 1940 to 1970, which impart a substantial warming trend. We can see this in the Rutherglen situation where, of the 17 stations referenced by BOM as being used in homogenization, ten are short-term and in some cases do not even reach the present, e.g. Bonegilla 1968-1986 and Deniliquin (Falkiner Memorial) 1957-1977. This ‘Trick’ has reached its zenith in the BEST series, where many of the short-term stations are created by ‘scalping’: misinterpreting climate shifts linked to ocean and other cycles as errors, as shown in papers by Occult
    wattsupwiththat.com/2013/02/19/boulder-escapes-global-warming/#moe-80060
    and by Courtillot
    dx.doi.org/10.4236/acs.2013.33038 published in http://www.scirp.org/journal/acs
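    A minimal sketch, with invented stations and values, of the compositing artifact this kind of ‘Trick’ relies on: naively averaging absolute temperatures across stations with different record lengths can manufacture a trend even when every station is individually flat. Nothing below is BOM or BEST code; it only illustrates the arithmetic.

    ```python
    def network_mean_by_year(stations):
        """Naive network mean: average whichever stations report in each year.
        stations is a list of {year: temperature} dicts."""
        years = sorted({year for station in stations for year in station})
        return {
            year: sum(s[year] for s in stations if year in s)
            / sum(1 for s in stations if year in s)
            for year in years
        }

    # Two zero-trend stations (all values invented): a long record at a warm
    # site, and a short record at a cooler site that stops reporting in 1975.
    long_station = {year: 16.0 for year in range(1950, 2001)}   # flat, warm site
    short_station = {year: 14.0 for year in range(1950, 1976)}  # flat, cool site

    means = network_mean_by_year([long_station, short_station])
    # While both stations report, the network mean sits at 15.0; once the cool
    # station drops out it jumps to 16.0, a spurious 1.0 degree "warming"
    # although neither station warmed at all.
    ```

    Working in anomalies, or compositing only over common periods, is the standard way to avoid exactly this artifact, which is why the details of how short records are blended in matter so much.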

    140

  • #
    CheshireRed

    All this data fra… adjustment: it’s like some sort of Hollywood-esque drug-induced dream.

    Please, someone, explain this: where are the government agencies here? Where is the Aussie PM? Where is your energy minister? Police, anyone? If a pharma company rigged the results of its latest drug tests to show better-than-expected results, they’d be in court charged with [snip “something”], and God knows what else, in no time.

    Blatant, outright manipulation, with little or no obvious justification, and everything falling ‘just so’ for agw.

    Where are your ‘leaders’?!

    70

  • #
    Roy Hogue

    Data. Data. Who has the data?

    Remember that kid’s game with a button? “Button. Button. Who has the button?” as someone went around the circle pretending to put the button into each person’s closed hands, “secretly” putting it in one pair of hands. Then everyone had to guess who had actually received the button.

    And who does have the data? I think it must have run off with the redheaded young lady on the left (any resemblance to a certain prime minister is purely coincidental), leaving a wad of excessive bills in the hands of the players, the taxpayers.

    150

    • #
      handjive

      Hey Roy.
      Your comment, all the way from the U.S of A, is closer to the bone than maybe you realise.

      “I think it must have run off with the redheaded young lady on the left (any resemblance to a certain prime minister is purely coincidental), leaving a wad of excessive bills in the hands of the players, the taxpayers.”

      Firstly, she hasn’t exactly run off, though she is still free:

      “Former prime minister Julia Gillard will appear before the Royal Commission into Trade Union Governance and Corruption in September.
      Ms Gillard will be questioned in relation to a slush fund operated by her former boyfriend, Bruce Wilson, when Mr Wilson was an official of the Australian Workers’ Union in the early 1990s.”
      https://au.news.yahoo.com/a/24835252/gillard-called-before-royal-commission/

      And as for “bills”, we have Bill the Greek as now part of the lexicon.
      Not to mention the handful of “bills” he was paid with.

      Julia’s big fat Greek reno
      http://www.smh.com.au/federal-politics/political-news/julias-big-fat-greek-reno-20120822-24mwf.html

      50

      • #
        Roy Hogue

        Handjive,

        I very carefully thought that one through. When I realized that I could make something out of, “Who has the data?” Gillard was the first thing that came to mind.

        I didn’t know about the Royal Commission but it sounds deliciously like the beginning of justice.

        80

  • #
    EternalOptimist

    I do hate to disagree with our kind hostess. But Jo, this data belongs to us, the taxpayers, not to the BOM.
    BOM comes up with a methodology, tests it and signs it off. But it’s not until we, the owners, sign it off that it becomes valid.
    If we don’t have the enthusiastic amateurs, we get apathy instead, so more power to them.
    That’s the way it should work, anyway.

    150

    • #

      We do not need enthusiastic amateurs up against professionals (paid well). Disaster! The “paid well” will disappear against overwhelming idiots with pitchforks, else they are not professional. We can only hope that the resulting riot can be contained. Please give your alternative of what is, or may be?

      31

  • #
    James Bradley

    The BOM using virtual data from virtual stations.

    Could be a great video game.

    Could call it D00M 4.

    Greens’ll love it.

    Probably play it from their Bunker.

    100

  • #
    Bruce

    Presumably for a particular site there is a measured temperature record but this record cannot be used because it has been subject to exogenous influences.

    Then by some magical procedure these data are adjusted to give the putative real numbers.

    Why am I skeptical?

    100

  • #
    Eliza

    Actually I think the BOM will slowly extricate themselves from the AGW mantra. Karoly etc. will be moved aside or retired. There’s too much at stake for their future here. They might be thinking that being the first official bureau to do this may bring more professional and financial advantages than sticking to the AGW mantra.

    50

    • #
      pattoh

      Perhaps Karoly & his mates can have a much-needed long break reading instruments in Antarctica or on Macquarie Island. Might be cathartic.

      70

      • #
        shortie of greenbank

        I wouldn’t want any of them anywhere near any instruments they could ‘adjust’.

        They lack the professionalism and integrity to do the job; no need to keep them. Though I would first investigate to see the level of [scientific competence] [snip] involved, rather than let them walk free to muddy up another area of science, turning it into a religion.

        70

    • #
      pattoh

      That should definitely include David Jones.

      60

    • #
      Winston

      Perhaps this is a defining moment in the Climate Wars; and as in all wars, pivotal battles, especially turning points and routs, are given names: Thermopylae, Guadalcanal, Hastings, Long Tan, Culloden.

      Henceforth, this shall be known far and wide as the Battle of Rutherglen!

      91

  • #
    Leonard Lane

    Why does the time series graph end in 1981? Did I miss something in the threads?

    30

    • #
      Ken Stewart

      Read full details at kenskingdom.wordpress.com/2014/09/02/rutherglen-spot-the-outlier/

      I calculated monthly anomalies from 1951-1980 means, as the claimed 1966 and 1974 breakpoints are in this period. The Bureau claims their adjustments are justified by comparing Rutherglen to its neighbours, showing breakpoints in 1966 and 1974. We don’t need earlier or later data to test this.
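      For anyone wanting to replicate this, here is a sketch of the anomaly step Ken describes, under stated assumptions: for each calendar month, subtract that month’s 1951-1980 mean from every observation. The `records` structure and all values are hypothetical illustrations, not BOM data.

      ```python
      def monthly_anomalies(records, base_start=1951, base_end=1980):
          """records: {(year, month): temperature}. Returns anomalies relative
          to the base-period mean for each calendar month."""
          # Collect base-period observations by calendar month.
          base = {}
          for (year, month), temp in records.items():
              if base_start <= year <= base_end:
                  base.setdefault(month, []).append(temp)
          base_means = {m: sum(v) / len(v) for m, v in base.items()}
          # Subtract the matching monthly mean from every observation.
          return {(y, m): t - base_means[m] for (y, m), t in records.items()}

      # Invented January maxima: the base-period January mean is
      # (30.0 + 32.0) / 2 = 31.0, so the anomalies are -1.0, +1.0 and +2.0.
      records = {(1951, 1): 30.0, (1952, 1): 32.0, (1981, 1): 33.0}
      anoms = monthly_anomalies(records)
      ```

      Using the 1951-1980 base straddles the claimed 1966 and 1974 breakpoints, which is the point of Ken’s choice of period.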

      100

      • #
        Scott L

        Hi Ken

        I don’t disagree, I just like to see the high temperature point around the 1930s/40s in the raw temperatures and compare it with 1998.

        Every time you have it hotter, or equally as hot, in the raw data as 1998, it blows any global warming argument out of the water, and the more it is seen the better.

        As a side point: one of my tasks as a research chemist back in the late 80s was to develop a method to determine what the Relative Vapour Pressure (RVP), by month, for all of Australia should be, for the recent introduction of unleaded petrol, which was causing vapour locking at the pump and in the car.

        I obtained a microfiche of all of the 90th-percentile temperature data for Australia and worked out a model that would take this temperature data and determine RVP limits for any site anywhere in Australia for any given month.

        As refineries were state specific we had to work out the distribution locations and hence set quarterly limits based on the temperature of the distribution locations.

        The really interesting thing, which is relevant to this discussion, is that some locations in Australia, such as Adelaide and Port Pirie, only 228 km north of Adelaide, had significantly different temperature profiles. The same when you crossed the Great Dividing Range in Victoria, which from one side to the other can be less than 50 km.

        We created a temperature profile of all of Australia that looked very similar to a barometric profile, which showed some profiles very close together and others further apart, i.e. no one size fits all.

        Unfortunately I do not have the map or the original data any more, as I never thought this data would ever be adjusted away. One of the greatest crimes in science is to change the data to suit your needs.

        91

        • #

          “One of the greatest crimes in science is to change the data to suit your needs.”

          I agree. If I measure, I am doing the best I can to measure “whatever” is being measured.
          If you change the numbers, you are desecrating the bestest and only measurement of “whatever” at this time and place. Adjust the “whatever”, do not adjust the numbers, else you die horribly!

          30

      • #
        Leonard Lane

        Thank you Ken. That explains it.

        20

  • #
    Robert O

    It seems that good information from a particular site has been invalidated by averaging large tracts of the countryside to give an artificial figure. If I were looking at temperature trends, I would list all the stations and make some stratified random selection of them to give a value. Computers and models are only tools; we still need to think about what they are telling us.

    40

    • #
      Ken Stewart

      Worse than that. Homogenising Rutherglen results in data that is even more different from the surrounding countryside.

      100

      • #
        bobl

        Yes Ken.
        Are you sure of your calcs, checked them thrice etc.? Because at first glance there is an obvious mistake somewhere; it is almost as if someone implemented an algorithm with a mistake, ran it, and just accepted the output without question. Looking at what you have posted as a cross-check, I’d have to conclude there is a serious mistake somewhere in the BOM’s code: one would expect the homogenised output to lie between the average of the neighbours and the original readings, but it doesn’t!

        Can you try this boundary test (I like boundary tests because they show up errors)? Plot the maximum value of the 17 neighbours against the site’s homogenised output. If the homogenised output of the site is greater than the maximum of the neighbours where that site’s raw temperature was not, then there IS an error in the BOM’s code.

        Also, I’m wondering whether there is a correlation between the maximum of the 17 neighbours and the amount each was adjusted. For example, what you posted looks like a rotation of the mean; are the means of sites being rotated to match the mean of the maximums of the neighbours?
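        The boundary test described above can be sketched in a few lines. The data structures and every value here are invented for illustration; this is not BOM’s method, only a way to flag envelope breaches.

        ```python
        def boundary_violations(raw, homogenised, neighbours):
            """raw, homogenised: {year: temp}; neighbours: {year: [temps]}.
            Flag years where the homogenised value breaches the upper
            envelope of the neighbour network although the raw value did not."""
            flagged = []
            for year, temps in sorted(neighbours.items()):
                upper = max(temps)  # warmest neighbour that year
                if homogenised[year] > upper and raw[year] <= upper:
                    flagged.append(year)
            return flagged

        # Invented example: in 1970 the homogenised value (27.5) exceeds the
        # warmest neighbour (27.0) although the raw value (25.0) did not, so
        # 1970 is flagged; 1971 passes.
        raw = {1970: 25.0, 1971: 24.0}
        homogenised = {1970: 27.5, 1971: 24.5}
        neighbours = {1970: [26.0, 27.0], 1971: [25.0, 26.0]}
        ```

        Any year flagged by a test like this would need an explanation, since homogenisation is supposed to pull a station toward its neighbours, not past them.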

        80

        • #
          Ken Stewart

          Yes, checked. If someone else could cross-check my work, that would be ideal (I have been asking for this to be done for the past 4+ years). I just cannot understand this either.
          I only use raw vs raw comparisons (except ACORN, of course). If the BOM compares with neighbours’ homogenised data then we are in new territory. According to CAWCR 49 they compare with raw and homogenise with raw.

          60

          • #
            bobl

            Be happy to cross-check it for you, assuming it’s not too complex. I’d also like to look at some “detectors”, that is, put hooks in to look at certain aspects (for example the envelope, mode vs mean, that sort of thing) to try to derive a correlation with the BOM’s method. It might be possible to infer a relationship.

            I think it’s very odd. Unlike a lot of people, I don’t think the BOM is necessarily deliberately warming the trend; I think it’s incidental to a method that is wrong, and this is feeding confirmation bias. Your analysis supports this hypothesis. On the other hand, there is Karoly, who is a raving loony warmist.

            30

      • #
        Bulldust

        I’m starting to get the feeling they homogenised ACORN using a certain tree in the Yamal Peninsula.

        60

  • #
    Radical Rodent

    So, reading the official blurb, all readings from the past are not to be trusted, and so have to be revised downwards, which strikes me as a desperate attempt to get the required rising slope to feed the agenda.

    On what basis is the assumption that the “Data may not have completed quality control”? Okay, “Observations made before 1910 may have used non-standard equipment” (no surprise, there, really – what might have been considered “standard” then may not be considered so, today), but, yet again, why do the readings have to be revised downwards? There is most certainly the strong odour of fish about this whole deal.

    90

    • #
      OriginalSteve

      The old propaganda principle – he who controls the past, controls the future.

      All that is happening is that, in good agitprop style, “annoying” old history is being re-written to suit the coming workers’ paradise of glorious Fearless Leader!!

      http://en.wikipedia.org/wiki/Ministry_of_Truth

      “Minitrue plays a role as the news media by changing history, and changing the words in articles about events current and past, so that Big Brother and his government are always seen in a good light and can never do any wrong. The content is more propaganda than actual news.”

      60

  • #
    Peter Miller

    As the digging continues on BOM’s historic temperature homogenisations/manipulations, the smell will get worse before it gets better.

    81

  • #
    Diogenes

    As a real layman I am following this debate with fascination and am trying to work it into a Year 12 Software Design class (in the context of invalid data structures, e.g. Y2K). One of the kids asked me, and I hope a fellow Novan can help…

    1. Have any of the stations that were used to homogenise data been homogenised themselves?
    2. Is there any overlap in homogenisation (i.e. were stations that homogenised, say, Amberley used to homogenise other sites)?
    3. If the answer to 2 is yes, which was done first, the more coastal or the more inland? (Being on the Central Coast, the kids are well aware of the 1 degree temperature drop in winter on the west side of the Pacific Highway vs the eastern side, and +1-2 in summer.)

    tia
    Diogenes

    90

    • #
      Ken Stewart

      1. Wagga Wagga and Deniliquin. I don’t know whether the homogenised data were used in the comparison though. I used raw data because that is what the BOM explanation implies.
      2. Yes.
      3. Don’t know the answer to the million dollar question.

      60

  • #
    Richard C (NZ)

    >”This long running rural record that looks ideal apparently had “unrecorded” station moves found by thermometers miles away. Already we have found Bill Johnston who worked at Rutherglen who confirmed that the station did not move.”

    OK, and BEST appears to confirm the 1980 Rutherglen Research breakpoint was NOT NECESSARILY a station move – it is a “Record Gap” only:

    http://berkeleyearth.lbl.gov/stations/151882

    BEST also makes a very large adjustment for that 1980 gap to the mean (the ACORN adjustment is to Max and Min separately, but they should coincide; ’nuther story). BOM makes a very large adjustment but hasn’t come up with the exact reason why yet (doesn’t know yet?). It’s a record gap with some data in it, but that is all that is known.

    I cannot see why, unless something known actually happened at the site during the 1980 gap, an adjustment should be made. There is some data in the gap; leave it as is.

    Data can be infilled, but this is contentious because it is making up “data”. But that is a minor contention compared to the large adjustment. The gap seems to me to be an infilling exercise, not an adjustment exercise.

    Data infilling is covered in C6, page 8 of the pdf of this report (H/t Mikky #4):

    http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Bureau_Response_WEB.pdf

    C6. The Panel notes the intention of the Bureau to consider ‘infilling’ data gaps in a small number of stations’ data records. The Panel strongly recommends that, if the Bureau proceeds with this work, the processes should be carefully documented, and the infilled data should be flagged and maintained separately from the original.

    Agreed. There appears to be a little confusion about this issue. At this stage, the Bureau only intends to publish and disseminate the ACORNSAT data-set without infilling (i.e. as a composite and homogenised data-set). Should an infilled data product be published, the Bureau will clearly identify it as such and keep it separate from the core ACORNSAT data-set

    # # #

    BEST’s RUTHERGLEN RESEARCH series starts much later than in ACORN but BEST also use the earlier RUTHERGLEN POST OFFICE:

    http://berkeleyearth.lbl.gov/station-list/station/151882

    RUTHERGLEN POST OFFICE: 225 [mths], Jan 1903, Nov 1921

    RUTHERGLEN RESEARCH: 525 [mths], Jan 1965, Oct 2013

    Seems odd that BEST doesn’t access the long-running RESEARCH data.

    [Cross posted at Jennifer Marohasy’s ‘So much conversation’]

    60

    • #
      Richard C (NZ)

      Forgot to subscribe to comments – this time ticked.

      20

    • #
      Richard C (NZ)

      >”BEST appears to confirm the 1980 Rutherglen Research breakpoint was…..”

      I’m wrong here, confusing with Amberley where there was a big adj to 1980 in Max.

      Still, begs the question: why does BEST make such a large “Record Gap” adj for 1980 (turning cooling into warming) but BOM doesn’t?

      00

  • #
    Ursus Augustus

    Methinks there might just be a teeny bit of extra weighting applied to a certain subset of the stations, the “prescient subset”, that show the “correct” trend behaviour, rather than a simple averaging. This is absolutely as it should be, because where would we be if we did not pay more attention to such prescient data? The other stations should of course be referred to as the “denier subset”.

    Nothing more to see now, it has all been fully explained, logically and with better than 97% CI. Move along Jo, move along Dr Marohasy. This is the Church of BOM we are talking about here.

    /sarc off

    100

    • #
      Ken Stewart

      Checking this today.

      60

      • #
        Rud Istvan

        Ken, if there is something like that, it could be hidden in ‘gridding’. That is the process where geographic subdivisions are assigned singular values. It is an implicit rather than explicit geographic weighting. Bet that Hillston was overweighted, since it represents sparser areal station coverage. Bet Rutherglen was underweighted because of a number of very near stations. Seems logical until one realizes the regional climate expectations fallacy: Hillston’s climate envelope is different from Rutherglen’s. No wine grapes at Hillston is itself sufficient evidence.

        Changed gridding is at least partly how the US portion of GHCN was changed to nearly double CONUS warming from the 2013 version, Drd964x, to the 2014 version, nClimDiv. The announcement about the change also said homogenization procedures would be upgraded, but there is no public record (that I can find after months of searching, for an essay in my forthcoming book) of what might have been done. The gridding changes were explicitly announced, but of course not the details. 40 of 48 US states were given a new significant warming trend compared to the 2013 version of the same database. (CONUS excludes Alaska and Hawaii.)
        Regards to your and Jennifer’s fine sleuthing, and to Jo’s fine reporting.
        In Texas, they say remember the Alamo. OZ can say, remember Rutherglen.
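        The implicit weighting described above is easy to see in a toy gridding sketch (cells and values are invented, not any agency’s actual grid): averaging within grid cells first, then across cells, gives a lone station in a sparse cell the same weight as a whole cluster sharing one cell.

        ```python
        def grid_mean(stations):
            """stations: list of (cell_id, temperature) pairs. Average within
            each grid cell, then average the cell means, as a naive gridded
            estimate of the regional temperature."""
            cells = {}
            for cell, temp in stations:
                cells.setdefault(cell, []).append(temp)
            cell_means = [sum(v) / len(v) for v in cells.values()]
            return sum(cell_means) / len(cell_means)

        # Three clustered stations share cell "A"; one remote station sits
        # alone in cell "B". The lone station carries a full half of the
        # gridded mean: grid_mean is 25.0, while the plain station mean of
        # the four readings is 22.5.
        stations = [("A", 20.0), ("A", 20.0), ("A", 20.0), ("B", 30.0)]
        ```

        So a sparse-area station like Hillston can legitimately carry far more weight per station than clustered stations near Rutherglen; whether that weight is appropriate depends on whether its climate really represents its cell.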

        50

      • #
        Ken Stewart

        Different stations at different times. Wagga Wagga, Deniliquin, Echuca and Hume Dam look closer, but vary from year to year.

        30

        • #
          ROM

          Rud ;

          Cotton is grown at Hillston, and on quite a scale, under irrigation.

          Cotton at Rutherglen?
          Anybody who thinks that Rutherglen is comparable with Hillston and therefore they can grow cotton at Rutherglen would have to be bloody joking or out of his cotton picking mind!
          Errr!

          Cotton likes it hot so fortunately the right hand doesn’t always know what the left hand is doing in outfits like the BOM.

          [Linked charts: Twelve-monthly mean maximum temperature for Australia; and Jan to Mar 31st]

          30

          • #
            Rud Istvan

            Yes. Cotton and grapes show that the ‘regional climatic expectation’ is wrong. I got the grapes. Your cotton is a beautiful extreme other from the comparison stations. Great natural spot. QED.
            You folks in OZ are rocking. And I mean OZ in the most positive sense. Have only been to Sydney and Melbourne. Both magical in different ways. Cannot wait to get back and do some outback.

            40

    • #
      OriginalSteve

      I can’t help but think someone was looking at all the data from sites and came across Rutherglen and decided they had a problem: a perfect location that was providing good raw data, and was therefore more problematic than an honest politician.

      50

  • #
    Ross

    Sorry this is OT but it should be headline news in the MSM. From Bishop Hill

    Emergency supplies of electricity are being sought by the National Grid for this winter because of the threat of shortages of output from the UK’s coal, gas and nuclear power stations.

    National Grid said on Tuesday it was extending its search for additional sources of temporary supplies, blaming emergency shutdowns at two nuclear power stations operated by EDF of France and unexpected fires at two key coal-fired stations during recent months – Ironbridge in Shropshire and Ferrybridge in Yorkshire.

    http://www.bishop-hill.net/blog/2014/9/2/climate-emergency.html

    70

    • #
      ianl8888

      Yes, this situation has been developing throughout the NH summer. Now coal/nuclear is blamed for probable UK winter shortfalls; the “meeja” has not yet worked out a spin on this, is all, but it will. Until it does, though, reporting will be quite muted.

      This is actually quite interesting, in a macabre way. The UK is conducting a real-time experiment in frigging around with the national power supply. We are watching actual results within a relatively short time frame. Ghastly, but good scientific procedure.

      90

      • #

        There’s actually something really scary about looking at the power generation totals for the UK (shown at this link) and being aware that this current page is for around 4AM; and in fact, that’s the whole point of why it is looking so scary.

        I couldn’t care less about wind power. It’s useless to add to any grid, because of its variability.

        However, look at the other dials there, the three main ones: Coal, Nuclear and CCGT (Combined Cycle Gas Turbines). Note the percentages there; they add up to 85%.

        That’s the point: 4AM, the time of lowest daily consumption, is the Absolute Base Load Requirement, that term so roundly laughed at by friends of the dirt greenies.

        Everywhere else, in the US, Australia, and most other already-developed countries, that absolute base load is supplied by full-time-run plants, coal and nuclear; here in the UK, you have pretend baseload, CCGT, adding to that. However, it still only adds up to 85%.

        The bulk of the rest comes from those tiny dials across at the right, the Interchanges for France and Holland, almost 12% and both dials are at their maximum delivery.

        So, the UK is using France and Holland for its Base Load, not for the daily peaks (7AM until 10PM) when these interchanges are traditionally used for topping up purposes.

        With both France and Holland delivering their maximum, then if anything happens to them, like what will be happening for France’s Nukes, then the UK does not even have enough to cover its Base Load.

        Summer and early autumn are fairly benign for power consumption, traditionally lower, and it ramps up in mid-autumn and then into winter, traditionally 10GW higher. To check that, look at the Yearly Demand chart at the bottom left of that page.

        If even a couple of coal fired plants go down, and France has problems, then you tell me what’s going to happen.

        It won’t be pretty.

        The Government will be doing all it can to NOT allow that to happen, at what can only be a huge monetary cost. Do anything to not allow the power grid to go down, anywhere, because if that happens it’s curtains politically, not just for the Government but for all politics, no matter who they are, because they are all to blame here.

        Meanwhile here in Oz, we’re lucky because we have so much existing coal fired power, all of it now getting older and older, with no replacement in sight.

        Replace coal fired power with renewables.

        Huh! That’ll work. (/sarc)

        Tony.

        80

      • #
        Greg Cavanagh

        We here at Jo Nova, and Bishop Hill, have been expecting this sort of thing to happen eventually.

        The details of what would cause the energy deficit were unknown, but it was obvious the UK would eventually get to a point where they didn’t have sufficient power.

        40

      • #
        pattoh

        Just read Archibald’s “Twilight of Abundance”
        p186:-

        “In March of 2013, the United Kingdom came within six hours of running out of natural gas.”

        The future of the old folks does not look good in coming winters.

        40

  • #
    redress

    This is so wrong on many fronts….

    “Since early 2012 Ken Stewart has been asking the BOM which neighbouring stations were used. Finally, after pressure from The Australian, the BOM has provided the 17 names, and Ken has graphed them.”

    073128 is Wilkinson St, 1858…still recording rainfall, but has been closed for temperature recording since 2003.

    074039 is Deniliquin, Falkiner Memorial Field Station..1957 to 1977…..has not been active since 1977.

    So the BOM used two stations 20 km apart, and not the airport official station 074258, which commenced recording in 1997, and which is only about 3km from Wilkinson St.

    Wilkinson St and the official airport station ran in tandem for 6 years, both recording temperature for Deniliquin.
    Wilkinson St and Falkiner Memorial Field Station ran in tandem for 20 years…

    One would have thought that a meaningful direct comparison could be made to see whether homogenization was necessary….

    120

  • #
    Neville

    Good post Jo and please keep up the pressure.
    BTW here’s the latest from Robyn Williams and Oreskes.

    Robyn Williams and Oreskes try to better his 100 metres SLR by 2100 claim. This would now require about 3 feet 8 inches of SLR per year by 2100 to fulfil his stupid forecast.

    http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/naomi_orekes_tells_robyn_100_metres_williams_an_even_taller_tale/

    You have to read this concocted garbage and understand just how idiotic and clueless Williams, Oreskes and their ABC really are. You wouldn’t expect this level of stupidity from a retarded 5 year old child. And this fool is the compere of their ABC’s science show?????? Unbelievable

    131

    • #
      Lord Jim

      Under pressure the claims get more and more extreme. We’ve seen it in the past.

      Ironically, science, a discipline that was once exemplified as dispassionate and objective (‘give me facts, nothing but facts’), is now the playground of partisan advocates and scaremongers.

      60

    • #
      FarmerDoug2

      Moderator
      Please snip the middle sentence, last paragraph, from Neville’s post
      He has ruined an otherwise good (though OT) post.
      Doug

      60

    • #
      Stan

      Well, one of the problems is that the likes of Williams, Oreskes, Karoly etc are not stupid or clueless. They know exactly what they are doing and they are doing it very well. Witness these complex homogenisation algorithms to produce warming. How else could they enrich themselves massively at OUR EXPENSE, given their lack of any productive skills or ability to earn a living in the real world.

      40

    • #
      LevelGaze

      “You have to read this concocted garbage and understand just how idiotic and clueless Williams, Oreskes and their ABC really are.”

      Neville, I don’t think “idiotic” and “clueless” are quite the right words here.
      “Malevolent”, “self-seeking”, “subversive”, “human-hating” and “pathological” do however seem to fit.
      There are lots more adjectives in the same vein.

      30

  • #
    Yonniestone

    This story today indicates our Chief Scientist is concerned with the quality of science taught in primary schools http://www.skynews.com.au/news/top-stories/2014/09/03/science-teacher-in-every-primary-school.html?cid=BP_RSS_SN-TOPSTORIES_5_Scienceteacherineveryprimaryschool_030914

    A noble idea but why isn’t Professor Ian Chubb concerned with defending the correct scientific methods of collecting and keeping our data in our own Bureau of Meteorology?

    According to this description http://www.chiefscientist.gov.au/about/the-chief-scientist/ Australia’s Chief Scientist should be very concerned in upholding standards he undoubtedly knows through over 50 years of higher education.

    40

  • #
    Graeme No.3

    Oh for the talent of a Tom Lehrer

    First you get down on your knees,
    Fiddle with your anomalies,
    And homogenise..homogenise,

    90

    • #
      Graeme No.3

      Obviously not me!

      I leave it to someone else

      First you get down on your knees,
      Fiddle with the anomalies,
      Treat the data without respect,
      And homogenise, homogenise, homogenise.

      Take whatever steps you want, for
      They’ll be cleared by Al Gore.
      Every station must show the same
      Just use a different station name
      When you’re doing the warming game.

      Get ahead in the profession,
      Never make any concession,
      If the trend transitional
      Still looks like the original,
      Then it’s time to play it safer,
      Introduce a dislocation
      Two, four, six, eight,
      Time for a relocation.

      110

      • #
        Annie

        Hahaha! Love it.

        I’m a great fan of Tom Lehrer; still play him from time to time. His brilliance occupies some of the time on a long haul Emirates flight. I hope it’s still there in their playlists in future.

        Anyone else to take the well-deserved mickey out of the AGW scam?

        20

  • #
    TdeF

    This is straight culling of cool data and data substitution to cover the hole!

    Why bother using an average of the adjoining sites to create a mythical set of values for the site, when the next step is to add this average back into the set? It will make no difference. In plain terms, you are just eliminating the site data completely and over all time. Why? Is it a total coincidence that this site was the coolest of the set? If this data culling was widespread and unjustifiable as it appears, it was a careful and deliberate shifting upwards of the National average by eliminating all cooling minima. That is lying, not science. This scandal just gets worse.

    So why not publish the original graphs pre homogenization, those mentioned in The Australian yesterday, the ones which showed the need for urgent data adjustment? Then perhaps we can all understand the motivation, the miracle of hidden trends and the incredibly complex mathematics of calculating averages.

    70

    • #
      Lord Jim

      With apologies to Shakespeare:

      A few years from now…

      Bishop:
      Hear me, my most reverenced scientist.
      What need you two hundred and twenty, one hundred, ten, or five weather stations,
      to follow in an organisation where twice so many may be got by the homogenisation of one?

      Pope Scientist:
      What need we one?

      50

  • #

    Be nice to see all the raw data on that graph – I realise that it will be a mess, but it would give an idea of the range of values; the average-of-17 graph is next to meaningless without an indication of range/error. One or 2 SD would be nice too.

    This is important as we can’t tell from the first graph, for example, whether in 1952 a substantial number of stations showed temperatures below Rutherglen’s. If the algorithm uses a weighted mean (weighting by distance would seem one logical input), then the comparison with the unweighted mean of 17 stations is even more meaningless. Following this logic, I’d predict that those 3 or 4 more distant stations, whose data would be down-weighted, dragged the unweighted mean up as graphed.

    Please note that the paragraph above is a thought experiment to explain why the mean is a poor comparison – it is not meant to be an attempt to explain the actual process of homogenisation.
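The commenter’s point about means and spread can be roughed out in a few lines. This is a minimal sketch only: the anomaly/distance numbers are invented for illustration (they are not BOM or station data), and inverse-distance weighting is just one plausible scheme.

```python
import math

# Hypothetical (anomaly in deg C, distance from target in km) pairs -- invented
neighbours = [(0.4, 20), (0.1, 45), (-0.3, 60), (0.8, 300), (0.2, 80)]

anomalies = [a for a, _ in neighbours]
n = len(anomalies)

# simple (unweighted) mean, plus sample SD to indicate the spread around it
simple_mean = sum(anomalies) / n
sd = math.sqrt(sum((a - simple_mean) ** 2 for a in anomalies) / (n - 1))

# inverse-distance weights down-weight remote stations (e.g. the 300 km one)
weights = [1.0 / d for _, d in neighbours]
weighted_mean = sum(a * w for (a, _), w in zip(neighbours, weights)) / sum(weights)

print(f"simple mean   {simple_mean:+.2f} deg C  (sample SD {sd:.2f})")
print(f"weighted mean {weighted_mean:+.2f} deg C")
```

In this toy case the distant warm station pulls the unweighted mean up; once it is down-weighted by distance, the mean drops – which is exactly why a bare average of 17 stations tells you little without the range and the weighting scheme.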

    40

    • #
      Ken Stewart

      Distance weighting is not mentioned in CAWCR 49 for detecting discontinuities or homogenising.
      See the spaghetti graph at kenskingdom.wordpress.com.

      30

      • #

        Thanks Ken – your site’s Fig 2. is much better than the one chosen above.

        10

      • #

        So the zero value on this graph is each station’s anomaly plotted against its own zero?

        00

      • #

        Final question since you mentioned CAWCR49. What have you gleaned from technical note 50? Do you see anything wrong with the approaches outlined in section 2? The WNDC for instance?

        and what about this statement?

        There is no a priori reason to accept or reject homogeneity adjustments that result in an amplification or diminution of the diagnosed warming. There are, however, very well established reasons to reject the use of raw or unadjusted data to characterise climate variability and change.

        20

        • #
          Ken Stewart

          The quoted statement contradicts the comparison of Acorn with AWAP and WNDC in CAWCR 50.

          AWAP and WNDC are sort of ‘raw data’, but they cover all 800 or so stations. Acorn has NOT been compared with raw data at the same 104 stations.

          “So the zero value on this graph is each station’s anomaly plotted against its own zero?” Each station’s zero anomaly.
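Ken’s answer (“each station’s zero anomaly”) can be shown with a minimal sketch: subtract each station’s own base-period mean, so every series is centred on its own zero regardless of its absolute temperature level. The base period, stations and temperatures below are invented for illustration.

```python
BASE_YEARS = range(1961, 1991)  # a common 30-year base period (assumption)

def anomalies(series):
    """series: {year: mean temp}. Return {year: temp minus own base-period mean}."""
    base = [t for y, t in series.items() if y in BASE_YEARS]
    own_zero = sum(base) / len(base)
    return {y: round(t - own_zero, 3) for y, t in series.items()}

# a warm site and a cool site with identical year-to-year wiggles -- invented
warm = {1960: 18.2, 1970: 18.5, 1980: 18.3, 1990: 18.6}
cool = {1960: 12.2, 1970: 12.5, 1980: 12.3, 1990: 12.6}

print(anomalies(warm))
print(anomalies(cool))
```

The two stations differ by 6 degrees in absolute terms but produce identical anomaly series – that is what “plotted against its own zero” means.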

          10

  • #
    TdeF

    Weighting by area perhaps? I would suggest half the distance in any direction to the next station – a polygon. Remember there is no such thing as a national temperature anyway, or a global temperature, except as a hot-body radiator. We only seek a representative measure of temperature, and all you need is a consistent, simple method, not incredible rigour that shifts temperatures by as much as 2 degrees. Given the data and positions, this calculation should be a very simple and fast thing to do, and it has probably been done for years and years.
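As a rough stand-in for the polygon idea above (true Thiessen/Voronoi polygons take more machinery), one can approximate each station’s area of influence by the square of half the distance to its nearest neighbour. The coordinates below are invented; this is a sketch of the concept, not anyone’s actual method.

```python
import math

stations = {  # name: (x, y) in km on an arbitrary local grid -- invented
    "A": (0, 0), "B": (30, 0), "C": (0, 40), "D": (200, 150),
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def area_weight(name):
    """Square of half the nearest-neighbour distance: a crude polygon-area proxy."""
    here = stations[name]
    nearest = min(dist(here, p) for n, p in stations.items() if n != name)
    return (nearest / 2.0) ** 2

weights = {n: area_weight(n) for n in stations}
total = sum(weights.values())
for name, w in sorted(weights.items()):
    print(f"{name}: share {w / total:.1%}")
```

The flip side shows up immediately: the isolated station D ends up with by far the largest share, because area weighting makes remote stations stand in for more territory – arguably the opposite of what you want when the remote station’s climate is unrepresentative.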

    So BOM, just show us the results. Why argue? You have the graphs already; they were mentioned yesterday. What’s the bet they show cooling? What’s the bet we were asked to bring our data into line with the rest of the world – the Northern Hemisphere – as our 25% was producing a cooling planet? A nip here, a tuck there, a bit of homogenization, some muttering about variance, and then discarding the first half of the twentieth century because they used obsolete Fahrenheit thermometers and eyesight. Australians are generally very obliging when requests come from overseas, and it all seemed harmless enough, except the BOM works for the people of Australia. We want the graph of temperature, unabridged and unmodified. Show us the problem, not the solution.

    50

    • #
      Rud Istvan

      Clearly also done in CONUS GHCN 2013 compared to 2014 (above in thread). Good intuition, but since the details were never published, it is harder to nail down. And I have been trying.

      20

  • #
    Mark

    If I didn’t know better I would say the graph has had a graphic rotation pivoting around the year 1966/7. That certainly looks to be where the mean of the seventeen “local” sites equals the Rutherglen site… a particularly hot few years from my memory of Wangaratta and Myrtleford at that time… I would guess the adjustment is linear rather than a step, by the look of the graph.

    20

  • #
    Liberator

    Seriously, people like John Cook cannot be supporting the BOM on this fiddling? Probably not quite the same, but similar – they tried this with our rainfall: 100 mm in private gauges over 24 hours, but the local AWS was not working, so they wanted to use another station’s data – 30 km away – which had 60 mm of rain over the same period. They could have used the private gauges’ data and just noted it wasn’t official. Suddenly our rainfall records are short 40 mm. If they smooth the temperatures across the three sites that are close to me – depending on the way they go… we’ll either get hotter or get cooler….

    50

  • #
    Anto

    clearly replicating that ACORN final trend is not going to be easy.

    Yes, but it shouldn’t be. The publicly funded BOM should just release its code, rather than create a Phil Jones-style Easter egg hunt.

    http://blogs.nature.com/climatefeedback/2009/08/mcintyre_versus_jones_climate_1.html

    50

  • #
    thingadonta

    When the pigs in Animal Farm had to make changes and corrections to figures that didn’t show things the way they expected or wanted, they were referred to as ‘adjustments’.

    30

  • #
    JohnM

    Please read BoM document CTR_049. The averaging is weighted according to the correlation (coefficient?) and the inverse of the distance to the site.

    Yes, Hillston might be included, but its weighting might make it negligible.

    (I’m not trying to stick up for the BoM, but it does a decent job of describing the method.)
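The weighting JohnM describes – correlation with the target series multiplied by the inverse of distance – can be sketched like this. It is an illustration of the general idea only, not the BoM’s actual algorithm; the anomaly series and distances are invented, and negative correlations are simply floored at zero here.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

target = [0.1, 0.3, -0.2, 0.4, 0.0]             # target-station anomalies -- invented
neighbours = {                                   # name: (anomaly series, distance km)
    "near_similar": ([0.2, 0.4, -0.1, 0.5, 0.1], 25),
    "far_different": ([0.5, -0.3, 0.4, -0.2, 0.3], 300),
}

weighted, wsum = [0.0] * len(target), 0.0
for series, km in neighbours.values():
    w = max(pearson(target, series), 0.0) / km   # correlation x inverse distance
    weighted = [acc + w * v for acc, v in zip(weighted, series)]
    wsum += w

comparison = [v / wsum for v in weighted]
print([round(v, 2) for v in comparison])
```

In this toy case the distant, negatively correlated station gets zero weight and drops out entirely, so the comparison series is just the well-correlated near neighbour – the point being that which stations dominate depends on the weighting, not on a bare list of 17 names.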

    10

    • #
      Ken Stewart

      Not for finding discontinuities, and only if insufficient suitable neighbours:-

      “An adjustment method was also defined using monthly data, for use in method evaluation (see below), and for cases where insufficient neighbours existed with available daily data for the PM algorithm to be used.” (page 53)

      This does not apply to Rutherglen.

      30

  • #
    sillyfilly

    I have problems with this sort of nonsense:
    “The 17 neighbouring places stretch from Beechworth in the foothills of the Victorian Alps, to Hillston on the flat plains of hot New South Wales. Hillston is about 370km by road from the wine growing district of Rutherglen. It’s 300km direct according to this estimate. Hey but maybe their climate trends are more similar than they appear…”
    Why did you not plot Cabramurra and Sale on this map (stations that Ken utilised)? Oh sorry, that wouldn’t suit the purpose perhaps.

    212

    • #
      Ken Stewart

      Sorry sillyfilly, you should read more carefully. These 17 sites are the ones the bureau uses, so they’re the ones I compared. I did NOT use Cabramurra and Sale in this analysis.

      111

      • #
        sillyfilly

        You did use it in your previous analysis however, true? So why does Jo complain about Hillston?

        112

        • #

          Silly, this post is about the 17 stations the BOM use. BOM wouldn’t release those names for 2.5 years. Ken had to guess what the BOM might have done and Ken didn’t homogenize Rutherglen records did he?

          Is this the best you can do?

          101

      • #
        sillyfilly

        Sorry Ken, does your ACORN data still include Sale and Cabramurra?

        Who cares SF? When Ken gets paid by the taxpayer and homogenizes Rutherglen with Sale, come back to us. – Jo

        113

    • #
      Heywood

      Wow, three days in a row for the stupid horse.

      The collective must be paying overtime.

      121

    • #
      Rud Istvan

      Sillyfilly, a personal suggestion. if you do not have any data to add to the excellent discourse here, stay away.
      You debase yourself ever more in the times that you do not.
      Which is unfortunate and Silly. Whether you are also a filly remains scientifically indeterminate.

      40

      • #

        When I read comments from sillyfilly, I’m reminded of a joke I first read in the mid 60’s and never forgot, now probably meaning less than it did in those days, as it mentioned the word kibitzer, a specifically Yiddish term.

        For current younger readers, a kibitzer is someone who looks on from the sidelines and keeps interrupting needlessly. The joke goes like this:

        Link, and look for Morris the card player joke.

        All evening long four cardplayers had been pestered by Morris, a self-proclaimed genius who commented on everyone’s poker hand and style of play. When Morris went out of the room for a moment, they hit on a plan to silence him.

        “Let’s make up a game no one ever heard of,” one of them said. “Then he’ll have to shut up.”

        The busybody Morris returned. The dealer tore two cards in half and gave them to the man on his left. He tore the corners off three cards and spread them out in front of the man opposite him. Then he tore five cards in quarters, gave 15 pieces to the man on his right and kept five himself.

        “I have a mingle,” he said. “I’ll bet a dollar.”

        “I have a snazzle,” the next man announced. “I’ll raise you two dollars.”

        The third man folded without betting, and the fourth, after much deliberation, said, “I’ve got a farfle. I’ll raise you five dollars.”

        Morris shook his head vehemently. “You’re crazy,” he said. ” You’re never going to beat a mingle and a snazzle with a lousy farfle!”

        Sorta sums sillyfilly right up, doesn’t it?

        Tony.

        90

    • #
      the Griss

      Ah… the dulcet tones of a braying donkey!

      Sort of like a very dopey version of Eeyore.

      30

  • #
    Rohan

    I just hope that the BoM are never allowed to homogenize Rutherglen’s excellent wines. They wrecked the temperature records with homogenization and devious phantom station relocation, so if they ever stick their fingers into the wine making process, it’ll be a complete disaster.

    70

  • #
    A C of Adelaide

    This is just the science version of Rotherham.

    I dont know why anyone is surprised.

    40

  • #
    john robertson

    I expect the next idiocy from BOM will be this: weather is what the man at the weather station recorded, in degrees C or F, with an instrument and eyeball error of approximately ±1 degree.
    Climate is what we corrected the weather data to reflect.
    Sarcasm I hope, but never underestimate a cornered bureaucrat.

    50

  • #
    Ray Derrick

    It seems fairly obvious to anyone with a functioning brain that we will never be able to know with any degree of precision what global temperatures were at any time prior to satellite measurements, so I don’t understand why we even bother exploring this line of argument. No disrespect to Ken who has done an excellent job exposing these rather curious “adjustments” by the BOM.

    Whilst we do know that CO2 has the properties of what has been termed a “greenhouse gas”, surely all that really matters is whether CO2 (at least the human produced proportion) actually behaves like a greenhouse gas in the atmosphere and therefore if it has any effect whatsoever on global temperatures. So far I have seen no documentation of any experiments undertaken to prove that CO2 does indeed behave in this fashion in the atmosphere, nor any experiments that prove that any such action is amplified or diminished by the existence of feedbacks (if they in fact exist). The whole theory of AGW seems to be purely based on conjecture and the arguments in its favour purely based on temperature trends. That makes no scientific sense whatsoever.

    If the IPCC had spent most of its existence trying to prove by experiment whether CO2 has any effect on temperature, rather than trying to prove that global temperatures have risen “dramatically” over the past 150 years, they may have come to a very different conclusion. But that’s not why they were established in the first place, is it.

    30

    • #
      Graeme No.3

      Ray :
      The IPCC started with the assumption that CO2 caused warming and that it would be amplified by 3-3.5 times. It is the rationale for their existence.

      They will never check that assumption by experiment because if the result isn’t positive then their existence would be unnecessary. Too many snouts in the trough to risk finding a fact.

      40

  • #
    pat

    apologies if this has already been posted, but i haven’t seen it:

    3 Sept: Australian: Graham Lloyd: Heat off Bourke after Bureau of Meteorology revision
    THE removal of a longstanding temperature record at Bourke of 125 degrees Fahrenheit (51.7C) set in 1909 was the result of a critical 1997 paper that revised a string of records and brought Australia’s hottest recorded temperature into the second half of the 20th century.
    Until the paper by Blair Trewin, who is now a leading climate scientist at the Bureau of Meteorology, Australia’s hottest recorded temperature was 53.1C at Cloncurry on January 16, 1889.
    But after revision, the record has been accepted as the 50.7C recorded at Oodnadatta, South Australia, on January 2, 1960.
    The Cloncurry record was erased because the temperature was taken with old technology and “the thermometers were probably overexposed to direct sunlight or radiant energy”.
    At Bourke, however, the temperature record was taken with a near-new Stevenson screen and clearly documented in the official record and monthly audit.
    Nonetheless the Bourke temperature was discarded from the record as an “observational error” because it was logged on a Sunday, a day that temperature records were generally not taken…
    http://www.theaustralian.com.au/news/heat-off-bourke-after-bureau-of-meteorology-revision/story-fn7x79y7-1227045560295

    40

  • #
    PeterS

    I wish I could use a similar technique as BOM’s to “homogenize” my salary downwards as I don’t live in a rich suburb so that I can get a huge tax return.

    40

  • #
    pat

    explain this, BOM. nice to see anthony get a mention:

    3 Sept: UK Daily Mail: Victoria Woollaston: ‘Global warming has been on pause for 19 years’: Study reveals Earth’s temperature has remained almost CONSTANT since 1995
    •Professor Ross McKitrick studied land and ocean temperatures since 1850
    •He also compared this to satellite data from 1979 to 2014
    •Trends in this data revealed global warming has been on pause for 19 years
    •And it has been on hiatus for between 16 and 26 years in the lower troposphere – the lowest portion of the Earth’s atmosphere
    •This is longer than the 15 years previously predicted by the IPCC
    Climate change expert Anthony Watt recently illustrated these findings by using Hadcrut4 data available online…
    http://www.dailymail.co.uk/sciencetech/article-2740788/Global-warming-pause-19-years-Data-reveals-Earth-s-temperature-remained-CONSTANT-1995.html

    30

  • #
    pat

    should have added it’s a pity they got anthony’s name wrong, but love how he is called a “climate change expert”.

    50

  • #
    pat

    way O/T, but also so way over the top, i must post it:

    2 Sept: Bloomberg: Lyubov Pronina: Islamic Debt Seen Funding U.K. Wind Farms to Rail Tracks
    Islamic debt could become a source of funding for U.K. infrastructure projects from wind turbines to high-speed trains and airports as Britain cements its position as the first sukuk market in a non-Muslim nation…
    Prime Minister David Cameron has pushed for sukuk sales to help establish London as a global capital for Islamic finance alongside Dubai and Kuala Lumpur, and tap into an industry that PricewaterhouseCoopers LLP estimates will increase to $2.6 trillion by 2017. There is investor appetite for more sales that could help fund almost 400 billion pounds of planned infrastructure projects, said Mansur Mannan, executive director of DAR Capital in London…
    Supporters include Roger Gifford, a former Lord Mayor of London, who said last year that Islamic finance should be as British as “fish and chips.”…
    The U.K. government envisages 377 billion pounds of infrastructure projects in the coming years, with most of it financed privately or part-privately. Major projects include a high-speed railway link between London and Birmingham, airport developments and the construction of wind turbines…
    http://www.bloomberg.com/news/2014-09-01/islamic-debt-seen-funding-wind-farms-to-rail-tracks-u-k-credit.html

    20

  • #
    pat

    reality at last by Edward P. Lazear, chairman of the President’s Council of Economic Advisers (2006-09) and head of the White House committee on the economics of climate change (2007-08), & professor at Stanford University’s Graduate School of Business and a Hoover Institution fellow.

    2 Sept: WSJ: Edward P. Lazear: The Climate Change Agenda Needs to Adapt to Reality
    Limiting carbon emissions won’t work. Better to begin adjusting to a warmer world.
    Were China to continue at this pace for 27 years until it reaches today’s U.S. GDP per capita, it would emit 99 gigatons of carbon in 2041 alone, or three times the world’s current emissions…
    Unless an economical low-carbon source of power generation becomes available, it is unrealistic to expect that countries, especially developing ones, will accede to any demand to produce power in a higher-cost manner merely to emit less carbon…
    Proponents of strong anti-carbon measures seem to believe that even considering an alternative to mitigation will weaken the public’s willingness to bear the costs of mitigation.
    Carbon math makes clear that without major effort and a good bit of luck, we are unlikely to control the growth of emissions enough to meet the standards that many climate scientists suggest are necessary. It is time to end the delusions and start thinking realistically about what can and will be done.
    http://online.wsj.com/articles/edward-p-lazear-the-climate-change-agenda-needs-to-adapt-to-reality-1409700618

    30

  • #
    Krudd Gillard of the Commondebt of Australia

    How much of BOM’s funding is provided for activities related to “the cause?”

    10

  • #
    Andrew Smith

    I’m a 53 yo Rutherglen winemaker with a great interest in history.
    Real history, not rewritten stuff or opinions. Which is why I am at a loss to explain any logic for not using the raw data. RRS (Rutherglen Research Station) in the late 1880s and 1890s was at the heart of the Victorian colonies’ viticulture industry, battling and defeating the scourge of the vine killer phylloxera.
    It would have been equipped with the best of everything, including staff… who recorded way more than just 9 am temp and 3 pm or max temp, as some of those outlying stations would have done.
    I’m guessing that some of those weather stations are not raw data used for homogenizing but computer generated… from, you guessed it, other stations.
    Now that would be a massive double dip!
    Seriously though, Beechworth is on top of the big hills nearby. Corowa was taken at the post office, which had concrete and buildings way back in the late 1880s (it was the major NSW port linking to Victorian railways across the Murray River at Wahgunyah). Federation was signed at Corowa in 1901.
    Hillston? Absolutely hilarious. Why not Darwin?
    Homogenization is [snip “a corruptible process”].
    The choice of weather stations used just makes this sloppy [snip].
    They knew water boiled at 100 deg C at 101.325 kPa and froze at 0 deg C. They even knew the freezing point of various salt solutions, i.e. multipoint calibration of thermometers.

    These people insult the intelligence and hard work of those who collected the data.
    Then again, for these people, maybe computer modeling is so much better…better to manipulate to get the desired result!
    Battle of Rutherglen, eh? Well I’m nailing my colours to the mast, running up the Jolly Roger and saying Count me IN!

    40

  • #
    Andrew Smith

    Jo, pretty sure there is a great book on the Rutherglen Research Station (aka Rutherglen State Farm). Nearly guarantee the Rutherglen or Wahgunyah historical society will put you onto a copy.
    Bet it talks about the data recorded and the date of the installation of the Stevenson screen etc. Real history!
    Not computer generated propaganda.

    30

  • #
    Andrew Smith

    Sorry, just noticed the graph. Data starts in 1951? Wow. Just wow.
    It’s an old, well established area going back to the paddle steamers. As a scientist of sorts (B. App. Sci., Oenology) I smell a really big rat. There was a string of cool vintages (summers) in the 50s, culminating in the big wet of ’56, when draft horses got bogged.
    Hmmm… wonder how that coincided with solar activity?

    40

  • #
    Andrew Smith

    Wow, snipped for saying homogenization is, errr, inappropriate in looking at Rutherglen climate data.
    OK… first principles. Rutherglen is subject to 2 major weather systems:
    the general west-to-east weather patterns, and the tropical weather patterns that bring heavy storms and major changes from the north.
    Rutherglen is regularly around the southern extremity of those tropical weather patterns, which break up and go up the valleys and river systems, giving rise to major temp variations and rainfall over very short distances… way shorter than the map you gave us.
    Where are some of those homogenized sites? Err, up the very areas where we expect to see massive variations from Rutherglen Research Station.
    It’s not all bad: those summer storms allowed vines to be grown on 550 mm rainfall without supplementary water.
    Hmmm… what happens when the “change comes through”?
    Temp cools off.
    Gee… I get back to my original point.
    Homogenisation in this case is inappropriate science.
    PS… love your work

    40

  • #

    Love this comment at The Conversation just now…
    Gary Luke
    thoroughly disgusted

    In reply to David Menere

    It’s all been inflated so much. The small problem is that the necessary homogenisation for a handful of sites doesn’t seem to comply with BOM’s published methods. For instance, anyone who has spent time at Rutherglen and Hillston would question why the temperature record at one of those sites is used to amend the record of the other. It’s probably just the sort of error we all make when handling masses of data. It shouldn’t need all this carrying on for BOM to just double-check it and issue a correction.

    Go and leave your own comment here… https://theconversation.com/how-to-become-a-citizen-climate-sleuth-31100

    20