BOM's new data set, ACORN, so bad it should be withdrawn (954 min temps larger than the max!)

When independent auditors found errors, gaps and deep questions in the HQ (High Quality) dataset, the official record of Australian temperatures, the BOM responded in March 2012 by producing a completely new set called ACORN. But this set is also plagued with errors. One of the independent auditors, Ed Thurstan, writes to me to explain that although the BOM says it aimed for the "best possible data set", and specified checks on the internal consistency of the data (one such check is to make sure that the maximum on any given day is not lower than the minimum), when he double-checked ACORN he found nearly 1,000 instances where the maximum temperature was lower than the minimum recorded on the same day.

This raises questions about the quality control of the Australian data so serious that Thurstan asks whether the whole set should be withdrawn.

Why are basic checks like these left to unpaid volunteers, while Australian citizens pay $10 billion a year to reduce a warming trend recorded in a data set so poor that it's not possible to draw any conclusions about the real current trend we are supposedly so concerned about?  — Jo

 

The BOM goes to great lengths to assure us its data is high quality, peer reviewed, and rigorously checked, but with a day's work, independent auditors find major flaws.

Anomalies and Errors in ACORN-SAT Data

Ed Thurstan

July 14, 2012

Ever since the documentation for ACORN-SAT was released, I have had doubts about the ability of the Australian Bureau of Meteorology to honour their published intention to release all software that generated the ACORN-SAT data. (I might amplify that thought later.)

In March 2012 the BOM released the report:

"Techniques involved in developing the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) dataset", CAWCR Technical Report No. 049, by Blair Trewin.

This specifies in great detail both the background to the development of the database, and the checks applied to the data. As Blair Trewin writes in the Abstract of this report:

“The purpose of this data set is to provide the best possible data set to underlie analyses of variability and change of temperature in Australia, including both analyses of annual and seasonal mean temperatures, and of extremes of temperature and other information derived from daily temperatures.”

I decided to take that document as a Program Specification, and write code to perform those data checks.

The very first check specified in section 6.1 of the above report is:

“1.   Internal consistency of daily maximum and minimum temperature

"Since the temperature recorded at the time of observation (09:00 under current practice) is an upper bound for minimum temperature on both the day of observation and the following day (i.e. Tn,d ≤ T0900,d and Tn,d+1 ≤ T0900,d), and a lower bound for maximum temperature on both the day of observation and the preceding day (i.e. Tx,d ≥ T0900,d and Tx,d-1 ≥ T0900,d), daily maximum and minimum temperatures must satisfy the relationships:

Tx,d ≥ Tn,d

Tx,d ≥ Tn,d+1

If one or both of these relationships was violated, both maximum and minimum temperatures were flagged as suspect unless there was strong evidence that any error was confined to one of the two observations.”

In testing my code for the first of the two conditions specified above (which says simply that the maximum temperature recorded on any day must be greater than the minimum temperature recorded for that day), I found violations of this condition in the BOM data.
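The check itself takes only a few lines of code. Here is a minimal sketch (my own illustration, not Thurstan's actual program, and it assumes a simple CSV layout of date, min and max columns, which is not the real ACORN-SAT file format):

```python
import csv

def find_violations(path):
    """Return the days on which the recorded daily maximum is below
    the recorded daily minimum, i.e. violations of Tx >= Tn.
    Assumes a CSV with 'date', 'min' and 'max' columns (a hypothetical
    layout; the real ACORN-SAT files are formatted differently)."""
    violations = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                tmin = float(row["min"])
                tmax = float(row["max"])
            except (KeyError, TypeError, ValueError):
                continue  # skip missing or malformed readings
            if tmax < tmin:
                violations.append((row["date"], tmin, tmax))
    return violations
```

Running a scan like this over all the station files takes seconds on any modern machine, which makes the survival of these violations in a published dataset all the more surprising.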

The following are extracts from the full violation log. The errors occur at many different sites and are spread across many decades:

In total, the ACORN-SAT database released in March displays about 1,000 (one thousand) violations of the simple rule that for any day

The Maximum Temperature must be greater than the Minimum Temperature.

This is a blindingly obvious type of error which should not have escaped quality control. It throws serious doubt on the whole ACORN-SAT project. In my opinion, these violations indicate that the entire ACORN-SAT database is suspect, and should be withdrawn for further testing.

Ed Thurstan

thurstan AT bigpond.net.au

July 16, 2012

 

BACKGROUND:

Threat of ANAO Audit means Australia’s BOM throws out temperature set, starts again, gets same results


420 comments to BOM's new data set, ACORN, so bad it should be withdrawn (954 min temps larger than the max!)

  • #
    Annie A

    Dear Jo,

    This is not directly related to this post – was not sure how else to contact you.

    I have come across this article. Have you read it or heard about the study?

    If the above data is erroneous, I wonder where this study acquired its data from?

    Just wondering what you think of it?

    Cheers

    Annie A

    Link

    00

  • #
    Annie A

    Oops,
    This is the link:
    http://www.sciencealert.com.au/news/20120907-23544.html

    I hope this works!

    Annie A

    00

    • #
    • #
      memoryvault

      .
      A report on REAL state of current “climate” in the Arctic, by somebody who lives there, not some Melbourne based latte-sipping inner city dwelling professor with his mouth firmly clamped on the public teat, looking for his next “research grant”.

      http://www.alaskadispatch.com/article/warming-making-alaska-more-extreme-dont-count-it?

      10

    • #
      Jaymez

      Hi Annie A. If you follow Arctic sea ice you would note that there can be a great deal of variability from year to year, and even within a season. Sea ice can be at record highs on one side of the Arctic and at record lows on the other. Factors such as deep ocean oscillations, prevailing jet streams and recent volcanic activity can all play a part in the short term. In the long term I don't think anyone should be surprised that the trend in ice coverage is declining following the Little Ice Age, even if it appears that in 2007 we reached a particular low.

      The mainstream media love reporting 'dramatic declines', but I've never seen a report of dramatic increases, yet they happen too. This year the winter ice coverage was at modern historical norms, but melting has been quicker than usual. That happens. This is a good site which gives a more balanced viewpoint. http://weatherdem.wordpress.com/tag/antarctic-sea-ice-extent/

      I read the article you referred to but not the paper. I have no doubt the long term trend for Arctic ice should be declining since the Little Ice Age through natural climate variability, until we enter our next glacial phase. What the paper you indicated, and so many like it, do not do is:

      a) Provide any proof, (only assumptions), that a decline in ice coverage is due to anthropogenic causes.

      b) Delineate how much of any decline is natural variability versus how much is calculated to be anthropogenically caused.

      c) Explain why their theories of anthropogenic causes are melting Arctic sea ice meanwhile total ice volumes in Antarctica are increasing. If the cause was human greenhouse gases, then the effect should be relatively uniform around the globe. See: http://en.mercopress.com/2012/03/31/height-of-antarctica-ice-sheet-increasing

      00

    • #

      Hi Annie A. In strict geological terms, we’re actually living in an ice age, the Quaternary, but fortunately an interglacial period of it. Some gradual warming is to be expected but if we’re heading back into a glacial period, and some people think we might be, then it’s time to buy some woolies.

      http://thepointman.wordpress.com/2012/07/06/worried-about-climate-change-meh/

      Pointman

      00

    • #
      Annie A

      Dear cohenite, memoryvault, Jaymez and Pointman,

      Thank-you for all of your responses regarding the article. Most informative.

      Greatly appreciated!

      Many Thanks,

      Annie A

      00

  • #
    Dave N

    Seriously beyond belief.

    It exemplifies the notion that if you’re spectacularly bad at something, you can always apply for a job in government.

    In the corporate world, one would last less than a minute making these kinds of blunders, and investors basing decisions on climate data would dispatch ACORN (and probably HQ) to the shredder.

    00

    • #
      Adam Smith

      In the corporate world, one would last less than a minute making these kinds of blunders, and investors basing decisions on climate data would dispatch ACORN (and probably HQ) to the shredder.

      Hang on a second. Are you seriously saying the solution to finding suspect data is to just delete all the data?

      How does that make any scientific sense?

      It seems the country is fortunate you don’t work for the BOM.

      00

      • #
        Mark D.

        So you are saying we should keep the flawed data? How does this make any “scientific sense”?

        Too bad that BOM is chock full of Adumb Smith types.

        It seems the country is fortunate that YOU don’t work in private enterprise.

        00

        • #
          crakar24

          Too bad that BOM is chock full of Adumb Smith types.

          In light of recent events you seem to have tapped into some kind of mind reading/projectionism abilities there Mark……………do you do climate predictions by chance?

          00

          • #
            Mark D.

            do you do climate predictions by chance?

            I say with a high confidence (>95%) that we are going to have another ice age.

            As for mind reading, I tried that on ASmith. Guess what I found?

            00

      • #
        Graeme No.3

        The errors in the ACORN data are so elementary that it is quite legitimate to condemn the BOM.

        No one is suggesting that the original data should be disposed of; indeed many feel it would be a better guide to temperature history in Australia than an incompetent, error-riddled set of "adjusted" but politically correct figures.

        Unfortunately just relying on raw data makes predictions about runaway warming claims seem as authoritative as waffle about Julia (as PM) addressing the next G20 meeting.

        00

      • #
        Manfred

        Thought your Collective didn’t ‘do’ science – not a scientist and all that?

        00

  • #

    Never mind the quality; feel the width!

    I suspect that homogenization has independently elevated minima so that the averages fit the model.

    00

    • #
      justjoshin

      This looks more like they are lowering historic maximum temperatures, which would have the effect of introducing (or at least exaggerating) a warming trend.

      What would be most telling would be to plot the frequency of these occurrences over the complete time period. If the incidence of these "anomalies" increases further into the past, we can pretty much prove that they have introduced or exaggerated a warming trend.

      Where can we find this complete data set? Is there a public DB we can write custom queries on? I’ve got some queries I’d like to run against this data.
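      Tallying the flagged days by decade, as suggested above, is trivial once the violations are extracted. A hypothetical sketch (assuming each violation is a (date, min, max) tuple whose date string begins with a four-digit year; the real ACORN-SAT records differ):

```python
from collections import Counter

def violations_per_decade(violations):
    """Count flagged days per decade, given (date, tmin, tmax) tuples
    whose date strings start with a four-digit year (an assumed
    format; the real ACORN-SAT records are laid out differently)."""
    counts = Counter()
    for date, _tmin, _tmax in violations:
        decade = (int(date[:4]) // 10) * 10  # e.g. 1914 -> 1910
        counts[decade] += 1
    return dict(counts)
```

A count that rises toward the past would support the suspicion that the adjustments concentrated these errors in the early record.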

      00

      • #

        Agreed. I was suffering from too much blood in my caffeine circulation system.

        00

      • #
        cohenite

        The code and data have been requested; the reply was:

        > Thank you for your interest in our ACORN-SAT work and the Bureau of Meteorology’s homogenised climate datasets more generally.
        >
        > The Bureau has a strong commitment to the scientific process and to providing useful information to the public. Acknowledging community interest in our climate data sets, the Bureau committed to the provision of a full package of information and data, as a main component of the ACORN-SAT development project.
        >
        > Almost all of this material was made available on the Bureau’s website on the 23rd March 2012, i.e. on:
        > http://www.bom.gov.au/climate/change/acorn-sat/. This link includes overview descriptions of the project, methodologies and all of the homogenised data, as well as technical reports and material associated with the international peer review of the ACORN-SAT development.
        >
        > The Bureau has also committed to releasing analysis code used in the development of ACORN-SAT. We are currently in the process of preparing that code for public release. This includes improving documentation on the code so that others can more readily understand it. The operational code is currently not user friendly, being heavily interactive with Bureau of Meteorology computers, data bases and file systems. We will release the code to the ACORN-SAT website in the next three months as this process is completed. At the time of release we will arrange for a copy to be sent to you.
        >
        > It should be noted that the methodologies used for the development of ACORN-SAT, including the algorithms for detecting and adjusting for inhomogeneities in the raw data (i.e. those that were coded up for the analysis), are fully described in the CAWCR Technical Report, which is available in the methods section of the webpage linked to above, http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

        That was 2 months ago.

        00

        • #

          The Bureau has also committed to releasing analysis code used in the development of ACORN-SAT. We are currently in the process of preparing that code for public release. This includes improving documentation on the code so that others can more readily understand it.

          Code used by professionals for professional purposes should always be documented as though somebody else had to read it. Even the “self-documenting” computer languages need rigorous documentation to explain why things are being done.

          The best reason for it isn’t that others can read it, but that it re-states clearly the rationale of why and how things are being done in the analysis for those writing the code; which is much quicker than reverse-engineering code to continue programming after a long weekend spent at a party, pickling neurons.

          00

  • #
    Ross

    The timing of this could not be better. The Australian BOM audited the NZ NIWA temperature data set. The High Court case against the accuracy of the NIWA data started today in Auckland!!

    00

    • #
      cohenite

      Hi Ross; any links or details?

      00

      • #
        mikemUK

        If you visit the “Global Warming Policy Foundation” site, they’ve currently got a headline piece on this NZ case.

        (As at 10am. – UK time, 16/7/12)

        00

    • #
      Rereke Whakaaro

      Ross,

      New Zealand has several “High Courts”, and I presume that you meant the Wellington High Court (NIWA is based in Wellington), but I can find no reference to a proceeding being Gazetted.

      00

      • #
        Ross

        Cohenite and Rereke

        I tried to post the link on the previous thread but it did not appear (must be in the spam folder).

        It was a small item on Yahoo News saying:

        The accuracy of NIWA's official temperature records is being challenged in the High Court in Auckland.

        The New Zealand Climate Change Education Trust is disputing the method NIWA uses to measure the temperature.

        At the heart of the case is whether temperatures have risen to the degree NIWA data claims and whether that warming is predominantly caused by human activities.

        The trust wants New Zealand’s official temperature records to be declared invalid and prevent NIWA using that data to advise the Government and public.

        It’s also seeking an order that NIWA produces full and accurate temperature data.

        I stand to be corrected but I think this is the first serious legal challenge to a national temperature record, so it is significant.

        The fact that the BOM can make such fundamental errors in its own records, as outlined in this thread, does not help its credibility as a "witness" in the case, having previously audited NIWA's systems and data.

        Rereke – I think the NIWA Head Office may now be in Auckland
        (gotta spend our tax dollars wisely!?!)

        00

  • #
    Tristan

    Haha. GJ BOM

    I’d suggest that most of us who have had to cross-reference annual reports (APS or otherwise) would have similar blunders to relate.

    I suspect that homogenization has independently elevated minima so that the averages fit the model.

    Ever have the feeling that they’re all against you?

    00

  • #
    cohenite

    Extraordinary.

    00

  • #
    elva

    I wrote this on a previous topic, but this is on topic. The Brisbane airport data is puzzling.

    It only starts at 1949. Before this, readings were taken from the top of a hill in the CBD, up to 1988. In 1988 that site was taken over by a developer, so new readings for Brisbane were taken from the newly opened airport.

    After a few years another new site was started closer to the CBD, so the temps went up a little from the lower temps near the sea at the airport.

    You could say that the 1949-2010 readings are fairly consistent because from about 1949 the Brisbane airport (Eagle Farm) has been the main airport. The Americans had concrete runways there and the old main grass airport remains at Archerfield.

    What puzzles me somewhat is a 'slight' error in the latitude given. The data set has the Brisbane airport at 27°39′ south. But on the radar it shows up as 27°23′S which, to me, seems most correct, because the radar shows 27°39′S as being Logan City, well to the south of Brisbane.

    I know BOM says not to regard radar as being pinpoint, but I know the CBD is close to 27°30′S. Since the airport lies NE of the CBD, it has to be north of 27°30′S.

    00

    • #
      Truthseeker

      Having lived for a time in Brisbane, near the new airport, I can vouch that it is a fair distance from anything. They have used an area that borders the sea and has no significant buildings in the vicinity. Of course, the thermometer is probably on a building next to an air-conditioning motor rather than in the significant open areas away from anything else …

      00

  • #
    Jaymez

    Didn’t BOM get ACORN signed off as “World’s Best Practice”? I can’t argue with that if the IPCC, CRU, NIWA, NASA etc are all “World’s Best Practice”.

    00

    • #
      Rereke Whakaaro

      That depends on which “world” they are talking about. It is probably orders of magnitude better than any other system on Mars.

      00

      • #
        NigeW

        No, it’s just an extraordinarily low bar to reach “Worlds Best Practice”, and yet the BOM still manages to trip over it and fall flat on its face…

        00

  • #
    Neville

    Don't forget we Aussies have just introduced the planet's most expensive CO2 tax, partly because of the BOM temperature record.
    The EU ETS is only a third the size of our tax, yet they have at least 12 times our population.
    Our tax won't change the temperature or climate by a jot, but will waste billions of dollars for decades into the future for a guaranteed zero return.

    00

  • #
    inedible hyperbowl

    In summary, the collectors and guardians of OUR temperature data are unable either to collect OUR data accurately or to keep the records without the records suffering data corruption.

    The question that must be answered (not by the BoM) is whether this is simple incompetence or a deliberate attempt to corrupt the data (or both)?

    Your MP allocates YOUR money to the BoM so that they can provide corrupted data. Let them know that this is intolerable.

    00

  • #
    Siliggy

    Jaw dropping! After this find, how could anyone take the ACORN data seriously at all?
    How much data would fail a less dramatic test, like requiring 50% of the raw min-max difference to be exceeded?
    This test may have shown just the tip of the iceberg, enough to suspect the whole thing is hopeless!

    00

  • #
    Dave

    .
    O/T but some interesting facts for Tony – he probably is aware of it already!
    “Largest Hydro generator ready” from Shanghai Times!

    00

    • #

      Thanks Dave,

      I’m trying to make an effort not to distract from the Posts here by going off topic, but I will answer things addressed to me.

      This indeed is interesting.

      This new Generator will produce 800MW.

      At the moment, China is leading the World in installing new technology power generation, in virtually every area.

      Hydro Power in China is the second largest method of power generation and makes up almost 25%, and is increasing all the time as huge new schemes come on line.

      Consider this.

      The whole of the Snowy Mountains scheme’s total power from its seven power stations can be generated by just 5 of these new generators, with 200MW to spare.

      Now consider the Three Gorges Hydro Scheme, on just one dam. It has 32 X 700MW generators. The actual power it delivers to surrounding grids, some as far afield as 1,000 Miles, is 44% of the total power consumed in all of Australia from every source.

      For some technical information on the Three Gorges Dam, I have a link to one part of a 4-part series I did back in August of 2008.

      The Three Gorges Dam (Part 3)

      Incidentally, the average age of those power stations that make up the Snowy Hydro Scheme is 48 years.

      Tony.

      00

  • #
    ad

    Extraordinary. Extraordinary that people don’t seem to understand why the max can be less than the min. You people make me ashamed to be a skeptic sometimes.

    00

  • #
    Adam Smith

    I’m not sure what this post is about. The BOM points out that the data was analysed to test for higher Min than Max temps in order to evaluate the quality of the data.

    It seems that’s what they did. The very first one in the excerpt has the values reversed, the second one is suspect.

    What more should the BOM do?

    And just look at a lot of the data. It is numbers that people were writing down in log books, some as early as 1910; is it surprising that some of it will be erroneous? To find only 1000 instances of 'suspect data' out of hundreds of thousands of values is actually quite astonishing.

    00

    • #
      Dave

      .

      This is actually quite an astonishing analysis.

      I’m not sure what this post is about.

      It seems that’s what they did.

      What more should the BOM do?

      To only find 1000 instances of ‘suspect data’

      One Gold Star for the new Adam Smith!

      00

    • #

      Numbers are hard for you aren’t they Adam?

      If it only took unpaid volunteers a day to write a piece of code showing the supposedly double-checked, hot diggedy, high quality, peer reviewed data set has gaping flaws, imagine what a proper audit would find. Obviously the BOM has not done even the most cursory quality control. All their PR about "expertise" and standards is a crock, and anyone (apart from the innumerate fans of feel-good green policies) knows that a max is supposed to be bigger than a min.
      The most basic point the BOM should start at is honesty. They can't accurately calculate the "trends." Their data sets are a mess. The uncertainties are huge. They can't justify their adjustments, and they have not been acting as high-standard scientists ought.

      00

      • #
        Adam Smith

        Numbers are hard for you aren’t they Adam?

        Well no, but [SNIP quit starting your replies with admonishments.] ED

        All their PR about “expertise” and standards is crock, and anyone (apart from the innumerate fans of FeelGood green policies) knows that a max is supposed to be bigger than a min.

        Well explain this to Ad at post 14 because he seems to appreciate how a min thermometer can sometimes show higher temps than the max thermometer.

        I would also point out that potentially human error can be involved. The person in 1914 who wrote down the figures could’ve got them reversed. Or the person who looked through that log book sometime in the 1980s in order to put the data in a computer could’ve reversed them too.

        The very first example in your sample seems to have been reversed because if you search the database now, you find the figures are reversed.

        You also claim that the errors are spread across numerous decades, but in the excerpt you presented the vast majority of the 'errors' are from before 1950, when all temperature data was collated manually with lower-quality thermometers, which creates greater opportunity for mistakes such as a person mixing up the min and max in the log book.

        Finally, I reiterate my earlier point that if there's only about 1000 'suspect data' points out of the millions in the database, then that's not very many.

        Suggesting that the entire data set is useless because of some suspect data doesn’t strike me as particularly scientific. But if you think that 1000 suspect data points means it is impossible for us to make any judgements about the temperature in Australia over the last 100 years, then we will just have to agree to disagree.

        00

        • #
          bobl

          Hang on a minute there. Yes, these things could happen, but the BOM said that this dataset is cleaned of such instances, and it isn't. Whether it can happen isn't the issue; it's the fact that it is supposed to be dealt with, and it's clearly NOT.

          00

          • #
            Adam Smith

            Well this makes no sense because 952 ‘suspect data’ points out of perhaps 5 million is hardly anything.

            If you actually care about this issue you should read chapter 6 of this document which goes into extensive detail on how BOM tries to ensure the quality of its data.
            http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

            A further issue with data quality control is that there are two possible types of error – that of
            accepting an observation that is in error, and that of incorrectly rejecting an observation that is
            correct. The second type of error is particularly important in climate extremes as some of the
            most extreme events will trigger some checks typically used for detection of errors in data
            quality control systems (e.g. a check against the highest or lowest values on record), and
            incorrectly rejecting valid extremes will create a bias in the analysis of extremes.

            I would point out that this passage notes that as big an issue as bad data is the threat of mistakenly rejecting valid data just because it doesn't seem right.

            00

          • #
            BobC

            Adam Smith
            July 16, 2012 at 9:47 pm

            “A further issue with data quality control is that there are two possible types of error – that of accepting an observation that is in error, and that of incorrectly rejecting an observation that is correct. The second type of error is particularly important in climate extremes as some of the
            most extreme events will trigger some checks typically used for detection of errors in data quality control systems (e.g. a check against the highest or lowest values on record), and incorrectly rejecting valid extremes will create a bias in the analysis of extremes.”

            I would point out that this passage notes that as big an issue as bad data is the threat of mistakenly rejecting valid data just because it doesn't seem right.
            [My emphasis]

            I don't know how BOM defines "wrong-seeming" data, but NASA is OK with rejecting ocean temperature readings (ARGO) simply because they are colder than the models say they should be. Of course, this has the effect of ensuring that the "data" doesn't deviate too far from the model.

            This is the kind of data fudging that gets you a failing grade in Sophomore physics lab, yet the “experts” in AGW “research” actually brag about doing it.

            Of course, to know if BOM is actually doing something as stupid as this, you would have to look at their “correction” code — something they haven’t yet allowed.

            00

          • #

            Adam, it’s not about whether these errors themselves affect the trend, it’s about whether the BOM have done what they said they did, and described their own work accurately.
            There is no point in quoting their “theoretical” standards, it’s obvious that unless any given BOM standard is audited that we cannot assume that the BOM meet their own standards.

            We now know that the BOM make false claims.

            00

    • #
      Ian Hill

      The post is about the BOM being caught out with data which is nonsense. It’s almost as if two committees were assigned to “adjust” the raw data, one each for maximum and minimum, without talking to each other. Clearly the 1000 days were those in which the diurnal range was small to start with, which eliminates most of the “hundreds of thousands of values” Mr Smith is referring to.

      In 1910, temperatures were written down in degrees Fahrenheit. Ed's log is the computer output from his program, not the observer's notebook.

      00

      • #
        Adam Smith

        Clearly the 1000 days were those in which the diurnal range was small to start with, which eliminates most of the “hundreds of thousands of values” Mr Smith is referring to.

        Sorry? The data is from 113 weather stations going back as far as 1910. That implies millions of bits of data.

        In 1910 temperature was written down as degrees fahrenheit. Ed’s log is the computer output from his program, not the observer’s note book.

        Sure, but it ultimately came out of a log book!

        00

        • #
          Adam Smith

          [Sorry? The data is from 113 weather stations going back as far as 1910. That implies millions of bits of data.]
          My bad, it is 112!

          00

          • #
            The Black Adder

            Adam Smith.

            You really are a piece of work.

            I thought Tim Flim Flannery was Australia's biggest twat, but you take the cake!

            How did we manage to build the Pyramids thousands of years ago if we could not record data correctly?

            How did we manage to navigate the world's oceans hundreds of years ago if we could not record data correctly?

            How did we invent the modern steam engine if we did not record data correctly?

            You are implying that anyone born more than 100 years ago was inept and incapable of recording data correctly!

            I am implying that anyone associated with this ACORN-SAT is inept and corrupt!!

            Are you corrupt Mr Smith?

            Why would you support this?

            Get a life…

            00

          • #
            KinkyKeith

            Mr Black

            yeah!!!!

            00

          • #
            BobC

            Adam Smith
            July 16, 2012 at 7:52 pm · Reply
            [Sorry? The data is from 113 weather stations going back as far as 1910. That implies millions of bits of data.]
            My bad, it is 112!

            You seem to be missing the point, Adam. It is a trivial thing to run a test program on even trillions (10^12) of data bits. (The $700, 8-core processor, 12GB RAM, 2-year-old computer I'm typing this on could run a max/min check on a million records in about 0.05 seconds, or a trillion records in less than a day.) The volunteer auditors had no problem at all finding the bad data. The obvious conclusion is that BOM "data checking" is junk.

        • #
          cohenite

          Adam; they got 1000 days wrong; that’s nearly 3 years; but wait; we don’t know if it was the minimum or the maximum which was wrong, or both; so that 1000 could be 2000 readings which are wrong.

          And that is only one aspect of the temperature record which may be wrong. BoM has admitted that metrification ‘possibly’ added 0.1C to the trend.

          Then ACORN has a warming trend which is greater than the HQ, which had a 40% warming bias after homogenisation which Della-Marta noted could not be replicated.

          You are joking, right?

          • #
            Adam Smith

            Adam; they got 1000 days wrong; that’s nearly 3 years;

            How do you know that both figures are wrong for all days?

            And gee it sounds serious when you say it is 3 years of data that is wrong, but that’s out of something like 12000 ‘years’ worth of data!!!

            but wait; we don’t know if it was the minimum or the maximum which was wrong, or both; so that 1000 could be 2000 readings which are wrong.

            Wow! 2000 suspect readings out of – at a conservative estimate – 5 million data points!!! That's a suspect data rate of 0.04%, and that is based on my CONSERVATIVE estimate of the data points!

            And that is only one aspect of the temperature record which may be wrong. BoM has admitted that metrification ‘possibly’ added 0.1C to the trend.

            That’s a TINY error when you consider that the types of thermometers commonly used from the 1910s to 1950s had a margin of error of +/- 0.5 degrees.

            But again, just because there are margins of error DOESN'T MEAN THE DATA IS USELESS. It is the best data we have, and if you are careful with how you analyse it then you can get good information out of it.

          • #
            cohenite

            It is the best data we have

            No, the satellites do a better job. But, that aside, the point you are ignoring is that it is not the data, it is the adjustments to the data which are the problem; these adjustments add warming; do you know why or how that result occurred?

          • #
            memoryvault

            .
            Team Smith are, of course, obfuscating around the actual issue of the obvious glaring errors in the recorded temperatures as presented above.

            I have lived in places like Marble Bar, Alice Springs, Longreach and Bourke. If ever there had been a time when the difference between daily maximum and minimum temperatures – regardless of which way around – had been less than one degree – and that in the balmy range of 18 to 24 degrees C, it would be the stuff of legend.

            Ballads and poems would have been written about it, and the local pubs would have copies of framed newspaper cuttings on their walls.

            I’m with Ian Hill @ 15.3 on this one. Two teams “homogenising” data to fit a predetermined result; one on maximums, one on minimums, with insufficient communication between the two groups.

            .
            But then, I’ve worked in the public service too.

        • #
          Rereke Whakaaro

          Wrong. The data was checked … you can read the procedures here

          The procedures say what should be done. They don't say what was done. And there can be a lot of difference between the two.

        • #
          Rereke Whakaaro

          Sure, but it ultimately came out of a log book!

          Have you ever seen a log book from the days before computers?

          I have, I have done research that required us to go through thousands of such books. And the thing that always strikes you is the careful handwriting, and the consistent writing style. There was pride in the way the data was presented. The measurements were treated with reverence.

          And your defence is that the BOM stuffed up because the logbooks were wrong? Sorry mate, it is you that is wrong, the raw data will be right, and right to one tenth of one degree Fahrenheit.

          • #
            Adam Smith

            And your defence is that the BOM stuffed up because the logbooks were wrong? Sorry mate, it is you that is wrong, the raw data will be right, and right to one tenth of one degree Fahrenheit.

            What?

            Are you seriously proposing that there was NEVER a SINGLE error when writing down results into the log books?

            Are you seriously proposing that there was NEVER a SINGLE error when entering that data into a computer sometimes 60 or 70 years after it was written?

            And your assertion that the data was accurate to 0.1 of a degree Fahrenheit was funny because the thermometers only have a tolerance of +/- 0.5 degrees. This is on the first page of this document:
            http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf

            You also don’t realise that the earliest automatic thermometers used in Australia could only send data in whole degrees “due to limitations in the data transmission software”. See page 25 of this document:
            http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

            So again you seem to be arguing for perfect accuracy of the data (most of which was simply written down, which creates opportunities for mistakes, perhaps including mixing up temps) and perfect accuracy of the data entry, yet you seem to be insinuating that once the data gets there the BOM screws it all up so we can't trust any of it!

            That is a completely unscientific view that you can’t back up with a single piece of evidence.

        • #

          The problem is the original raw data passes the min-lower-than-max test; it is the new “homogenized” data set that is fu***d up. What was originally recorded in the log books years ago passed the smell test at the time the readings were entered into the digital data base; they have since been bugg**ed up.

      • #
        AndyG55

        yep, a simple blogger can write a simple program to check for data consistency,

        but BOM, CAN’T !!!!

        The data SHOULD have been checked, but obviously wasn’t !!

        go figure. !!!

        • #
          Adam Smith

          Wrong. The data was checked, but it seems you are too lazy to read the document explaining that fact. If you have a day when you aren’t feeling so lazy you can read the procedures here:
          http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

          • #
            AndyG55

            So, you admit BOM are incompetent, then. Thanks.

          • #
            AndyG55

            Or lying..

            take your pick

          • #
            BobC

            Adam Smith
            July 16, 2012 at 9:38 pm
            Wrong. The data was checked, but it seems you are too lazy to read the document explaining that fact. If you have a day when you aren’t feeling so lazy you can read the procedures here:
            http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

            Logic again, Adam — the fact that BOM claims the data was checked is not the same thing as the data being actually checked.

            1) The referenced document says that data was checked for the min temperature being greater than the max temperature.

            2) The processed data, when independently checked for the same thing, showed nearly a thousand instances of the error.

            This is a very simple check — one of the simplest ones done to the data. If BOM can’t get that right, how can one have any confidence about all the other “adjustments”, many requiring (according to the document) “well-informed decisions” (judgement calls) made by unnamed individuals? Yeah, no chance for any bias to slip in there!

            *******************
            Given the fuzziness of the processing descriptions in this document, I suspect that they don't want to release the code because it is chock full of ad-hoc adjustments and unjustified parameters.

          • #
            J.H.

            Adam Smith said:

            Wow! 2000 suspect readings out of – at a conservative estimate – 5 million data points!!! That's a suspect data rate of 0.04%, and that is based on my CONSERVATIVE estimate of the data points!

            No Adam….. All the data is wrong and these 1000 readings are just the ones that stand out like sore thumbs…. Whatever they did in “adjusting” has ruined the database…. That is the only conclusion that a scientist could draw from this. I would get the raw data, and just start again…..

    • #
      pouncer

      In theater criticism, the three opening questions are: what was the production attempting?; did they succeed?; and, was it worth the effort?

      In this instance of climate science, the attempt was specified to include an effort to FIND errors where Tmin > Tmax.

      Apparently, the process broke down at stage two. Either the professionals, repeatedly, failed to find the specified errors; or, having found them, they had no specified protocol for dealing with them. If they DID have a method to correct, omit, or adjust the errors found, they failed to specify that method.

      The actual arithmetic or statistical significance of the number of instances found has about as much to do with the success of the science as the number of notes in a Mozart symphony (“too many notes”, as the Emperor says) or the cost of an Andrew Lloyd Webber West End musical production. Not the numbers — the intent. Did they accomplish what they set out to do?

      It appears not.

      What other academic or government discipline allows itself to operate this way? Understand, for instance, a taxing authority cannot possibly audit every single taxpayer every year. Necessarily, the vast amounts of data collected must be audited by sampling. Suppose the auditors look for a defined condition where, for instance, capital gains income in year N exceeds the average of the five years prior. (Or something — I don't care, it's a hypothetical.) But having defined a particular test to be sampled, and found numerous data from payers that fit the profile of those who might be intending deception — should the tax collectors then take no other action? Then, taking no action, declare how amazingly honest the investors of the nation are?

      If Tmin > Tmax is a trivial error, not worth correcting, why was it promoted as a valid test to demonstrate the quality of the data? Is 1000 errors, out of 1,000,000, better than the same measure in a comparably long and vast data set in Central Europe, say? If so, why is that comparison not drawn explicitly?

      What were they trying to accomplish? Did they do it? It seems to me they were TRYING to BUILD PUBLIC TRUST. Not actually do science, but boost their P.R. And it seems to me they failed.

      Now, was it worth doing?

      That is, I believe, a question I’d like you to address, Mr Smith.

      • #
        Adam Smith

        The actual arithmetic or statistical significance of the number of instances found has about as much to do with the success of the science as the number of notes in a Mozart symphony

        WTF? Saying that millions of data points are all wrong because you find 952 that are suspect is just laughable!

        Especially when many of those data points are from the period with less accurate equipment and less standardisation of procedures.

        Your assertion that we should just throw up our hands and say that the data is unusable is completely unscientific. That’s the data we’ve got we should use it the best we can to get the most information out of it we can.

        ———————-

        Note Ads avoided answering the question or acknowledging the point. He just keeps repeating the same theme. — Jo

        • #
          Dave

          .
          Adam – YOU TALKIM LONGLONG:

          That’s the data we’ve got we should use it the best we can to get the most information out of it we can.

          HIMBAD! Could you rewrite this sentence so I can understand it please!
          Shocking english you have there – WONTOK!

          • #
            KinkyKeith

            Dis pella smit, em i gat lik lik brain bilong im – em i bugerup tumas.

          • #
            Dave

            .
            Brilliant KK!

            Dis pella Smith, MattyB, JB –
            3 numba lik lik rat no got eye to CAGW – Him hot lon time!

          • #
            KinkyKeith

            Thanks Dave

            Should translate that: This smith bloke he’s got a small brain, his thinking is totally stuffed.

            But

            Wun pel smith OK

            tupela smith nogut.

            Tuplea smith em i go pus pus.

            Dis gettim planti lik lik smithpela.

            smithpela tumas.

            🙂

        • #
          AndyG55

          Idiot.. your lack of scientific knowledge is really showing through

          You know some of the data is most probably incorrect.

          You DO NOT know how much of the other data is also incorrect.

          You CANNOT just accept that the rest is correct.

          Using information that cannot be guaranteed to be accurate is very close to scientific fraud, particularly if you pretend it is correct.

          Understand ?????

          • #
            Adam Smith

            Idiot.. your lack of scientific knowledge is really showing through

            Your reversion to [SNIP quit starting your replies with admonishments.] ED

            You know some of the data is most probably incorrect.

            Oh dear, at a CONSERVATIVE estimate there’s 5 million numbers written down, you really think there wasn’t a mistake somewhere along the line, either back in 1910 when a number was written down? Or when the data was copied from a book into a computer? Or when it was taken from one computer format and added to another?

            The question isn’t about 100% accuracy of each individual number, it is about overall integrity. For you to say that ONE bad number means ALL of the data is useless is just stupid and completely unscientific.

            You DO NOT know how much of the other data is also incorrect.

            Neither do you, mate, but you’re making baseless assertions about the usefulness of the data anyway simply because you don’t want it to be used!

            You CANNOT just accept that the rest is correct.

            Oh right, but you think you can just call me an idiot and that will convince me to accept your belief that all the data is wrong?

            With that sort of argument I have no problems accepting the BOM’s approach to handling this data.

            Using information that cannot be guaranteed to be accurate is very close to scientific fraud, particularly if you pretend it is correct.

            Hang on a second, so your entire argument is that I’m an idiot therefore you are correct? That’s not exactly scientific mate.

            ———-

            People are getting annoyed that the most dominant commentator refuses to get a point raised many times (that the BOM said they checked, but they didn’t). Having provoked them into being annoyed at his refusal to acknowledge the obvious – Ad-s calls it a victory that he has provoked rational people into expressing their annoyance. This is not contributing – Jo

          • #
            Dave

            .
            Adam,

            You ask?

            so your entire argument is that I’m an idiot therefore you are correct?

            No one would ever do this to you Adam Smith unless they were certain that all the data was 100% correct – even 1000 errors in a couple of million in your rules could deem you an IDIOT – which would be an injustice!

            Don’t you agree?

          • #
            Mark D.

            Adumb, He calls you an idiot because there is a lack of evidence to the contrary. The more you type, the worse it gets.

            Go ahead and loudly defend bad data. See how much that helps your cause.

          • #
            AndyG55

            “your belief that all the data is wrong?”

            not only scientifically illiterate, but can’t read either.

            NOWHERE did I say the data is wrong.

            READ AGAIN, and PLEASEEEEEEE try to understand !!!

          • #
            KinkyKeith

            Spacer Statistics

            Team Smith has now managed to SPACE out 3,238 lines in the last two weeks.

            Interestingly, when you add in the other members of Dr Smith’s group who use pseudonyms to hide their smithness, the total comes to 3,687 lines spaced.

            The only use of this statistic, of course, is to show how tolerant this blog is of people who are just here to put space between the genuine comments.

            This blog should be proud of approaching 4,000 lines spaced in the latest effort by TS and his Pseudo-Smiths, as it shows that along with the honest inquiry that occurs here, there is also tolerance for the less able scientifically or politically inspired mischief makers.

            🙂

        • #
          pouncer

          I infer that I have not made myself clear. I apologize.

          I am saying that the effort, as described, was to clean the data. The claim was that the data was reviewed for a particular kind of error, and that error eliminated. The effort and the claim were NOT to, for instance, sample a subset of several thousand measurements out of many million and estimate the percentage incidence of the flaw.

          If the effort was to accomplish one specific requirement (write a symphony, cleanse the data) but the result was otherwise (bunch of random notes, bunch of unnoticed errors) then the effort, I say, was wasted.

          Even if the mistake was in describing the effort, the situation is now worse than before. If the actual effort was to sample and estimate the error rate but the claim was exaggerated, it is apparent that with minimal effort the entire dataset (not a mere sample) could have been reviewed, and all the errors could have been found and removed. Finding these errors is an existence proof of the technique that could, and should, have been applied. The failure to apply such a technique is the failure. The actual incidence of the detected flaws is, I argue, not relevant to how many such flaws are identified — once it is shown that detection is possible and even trivial. Again, it’s as if a concert were to be performed without the orchestra bothering to tune up. How many actual discordant chords result is not the point. The point is that without valid preliminary effort to do it right, the resulting effort put into making the music will be less than satisfactory.

          All clear now?

      • #
        Professor Rupert Holmes Esq

        Adam Smith,

        I won’t name-call, but you really do ask for some of it.
        Their stated aim was to refine out mistakes, recording mistakes or input mistakes. There are two ways they can do this in stats.

        1. Find outliers (junk) and then bridge the data using the previous few days and the following few days, averaging if it’s clerical; but they have to state that they did it, the events where it occurred, and how it impacts the end results. Sounds tedious, but hey, no one promised science wasn’t going to be boring.

        2. For input outliers, easy peasy: just check and re-input.
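        As a rough illustration of point 1, the bridging step could look like the Python sketch below; the window size and the idea of passing in pre-flagged indices are assumptions for illustration, not the BOM’s actual procedure:

```python
# Replace flagged outliers with the mean of nearby valid readings.
# Window size and the flagging of indices are illustrative assumptions.

def bridge_outliers(series, flagged, window=2):
    """Return a copy of series with each flagged index replaced by the
    mean of up to `window` unflagged neighbours on each side."""
    out = list(series)
    flagged = set(flagged)
    for i in flagged:
        neighbours = [
            series[j]
            for j in range(max(0, i - window), min(len(series), i + window + 1))
            if j != i and j not in flagged
        ]
        if neighbours:  # leave the value alone if nothing valid is nearby
            out[i] = sum(neighbours) / len(neighbours)
    return out

temps = [18.2, 18.5, 99.9, 19.0, 18.8]  # 99.9 is a clerical outlier
print(bridge_outliers(temps, [2]))
```

        And, as the comment says, every such repair should be logged: which station, which day, and what replaced the original reading.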

        A simple max/min check, compared to the other techniques they use to homogenise and smooth and every other torture they put the data to, is child’s play with modern computers; something novices would do, undergrads, forget postgrads.

        It’s not hard, it’s the most basic thing in stats. Pre-computer there would be an excuse, but now it’s inexcusable.

        A thing you may not know, that we all know: team Hansen and others, BOM included, have bridged data and adjusted by grouping stations in actually different climates, where there were gaps in data periods or stations coming on or offline. I will leave it to others to point out some beauties.

        You may not know much about manual record keeping of temperature and rainfall in Australia; these are family activities, and the men and women who kept the temperature, and still keep the rainfall, take pride in their duty.

        Anyway, as usual when they cock up, your team, team Uranus, goes to the old reject-FOI strategy of delay and denials; it’s politics 101, not Science.

        They have credibility issues.

  • #
    Sonny

    There once was a scientist named Tom,
    Who got a nice job at the BOM.
    … Please continue … 😉

    • #
      Dave

      .
      They were always going to miss,
      Because the captain was Smith,
      .
      They got 1,000 days wrong,
      but it was approved by Penny Wong,
      .
      They didn’t think they could fool YA,
      But it was OK’d by Julia,
      .
      ……..

    • #
      Wendy

      @ Sonny 😀

      There once was a scientist named Tom,
      Who got a nice job at the BOM,
      When asked about data, to think,
      Why their analyses surely did stink,
      He hemmed and hawed and blamed it all on Joe Romm

  • #
    ChrisM

    The NIWA court case in Auckland will probably be reported in the local rag daily. Here is day 1: http://www.nzherald.co.nz/nz/news/article.cfm?c_id=1&objectid=10819971

  • #
    spangled drongo

    …His task was to raise
    The temps of later days
    And hope no one asked where they came from.

  • #
    Adam Smith

    cohenite
    July 16, 2012 at 8:35 pm

    No. the satellites do a better job.

    OMG!!!!! How many weather satellites were there in 1910 mate?

    We have a lot of very useful historical data, for you to say we should just ignore it or not use it because it isn’t as accurate as what we can get off a satellite today is simply laughable!

    But, that aside the point you are ignoring is that it is not the data

    Oh OK, so now you are moving on to other issues because I have demonstrated that 1000 suspect data points out of several million is hardly any.

    it is the adjustments to the data which are the problem; these adjustments add warming; do you know why or how that result occurred?

    There are many reasons for the adjustment of data. I am surprised you are asking me this because you seem to present yourself as someone who knows a lot about this issue. But here is just ONE reason why data needs to be adjusted. The types of enclosures used at weather stations have CHANGED over time. Different types of enclosure have different biases based on their structures.

    Here is one example:

    A wide variety of instrument exposures existed prior to the introduction of the Stevenson screen. Probably the most common alternative exposure was the Glaisher stand, which was not fully enclosed like the Stevenson screen, but had no floor and an open front that faced southward. However, there was little standardisation either within or between states, with many instruments in locations such as improvised wall-mounted screens, underneath verandahs, or in unheated rooms indoors. In general these alternative exposures (except the indoor ones) were substantially warmer than a Stevenson screen for summer maximum temperature, with smaller differences for winter maximum temperatures and for minimum temperatures throughout the year (Parker, 1994). A 60-year set of parallel observations at Adelaide (Fig. 3) showed a warm bias in maximum temperatures measured using the Glaisher stand relative to those measured in the Stevenson screen (Nicholls et al., 1996), that ranged from 0.2 to 0.6°C in annual means, and reached up to 1.0°C in mean summer maximum temperatures and 2-3°C on some individual hot days, most likely due to heat re-radiated from the ground, from which the floorless Glaisher stand provides no protection. Minimum Glaisher stand temperatures tended to have a cool bias of 0.2-0.3°C all year, and the diurnal temperature range thus has a positive bias.

    This quote is from the top of page 7 of this document:
    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    So there you go, no conspiracy, just a careful consideration of the ways that different weather station enclosures can affect the temperature results.

    It seems if you were in charge of the BOM you’d just throw up your hands and say we should just forget about all our old data rather than trying to come up with logical ways to use it.

    Just asserting that since 1000 data points out of millions are suspect we should just ignore all the data is much easier than actually examining the complexity of this issue in order to get the most information possible out of the historical data.
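    For what it’s worth, the kind of exposure adjustment that quoted passage describes can be sketched in a few lines of Python; the bias values below are illustrative numbers loosely based on the quoted ranges, not the actual ACORN-SAT adjustments:

```python
# Remove an assumed Glaisher-stand warm bias from pre-Stevenson-screen
# maximum readings. The bias table is an illustrative assumption, not
# the BOM's published adjustment values.

GLAISHER_MAX_BIAS_C = {"summer": 1.0, "winter": 0.2}  # assumed warm bias, degrees C

def adjust_glaisher_max(tmax_c, season):
    """Subtract the assumed warm bias from a Glaisher-stand max reading."""
    return tmax_c - GLAISHER_MAX_BIAS_C[season]

print(adjust_glaisher_max(41.5, "summer"))  # → 40.5
```

    The point of such an adjustment, as the quoted passage says, is to put pre- and post-Stevenson-screen readings on a comparable footing, and whether the adjustment is right depends entirely on whether the assumed bias is right.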

    ——–

    Now interestingly Ads, who said at the start of this thread that he did not “get” this topic, has dominated discussion and presumably “emailed” a friend (someone at the BOM?) and come back with new info, more excuses, still fails to acknowledge the BOM said they checked this, and failed, and still keeps trying to bring up the irrelevant point that it’s only a small percent of errors — if that’s the case, why did the BOM pretend that they had checked it, and that it was worth checking? – Jo

    • #
      cohenite

      Is the data the raw or adjusted data Adam?

      I did not ask why the raw data needed adjusting, I have read Torok table 2.2; what I asked is why the adjustments to the data produced a warming trend!?

      • #
        Adam Smith

        It is the homogenised data.

        That link doesn’t work for me. It says the page has timed out.

        One reason the data shows a warming trend could be the fact the average temp in Australia has increased.

        • #
          ExWarmist

          One reason the data shows a warming trend could be the fact the average temp in Australia has increased.

          Is that warming trend present in the raw data?

          • #
            Adam Smith

            Is that warming trend present in the raw data?

            What are you really saying?

            Are you seriously suggesting that we should draw conclusions about long term climate trends in Australia WITHOUT first accounting for the different thermometers, weather stations, location of weather stations, changes to the surroundings of weather stations, data collection methodologies, data conversion procedures and anomalies (e.g. malfunctioning equipment) in the data?

            Are you seriously suggesting that all these variables should simply be ignored and we should simply shove the raw numbers in a spreadsheet and make a graph out of it?

            Because if that’s what you are suggesting, you should be aware that you are NOT being scientific!

          • #
            Mark D.

            Are you seriously suggesting that all these variables should simply be ignored

            Are you seriously that dense? You have read the blindingly obvious evidence that suggests that BoM didn’t follow their OWN rules on how data is to be handled, and they have not supplied their methods. This is what is unscientific.

        • #
          cohenite

          Try this link, click onto digital file and then part2 page 59.

          Then this:

          One reason the data shows a warming trend could be the fact the average temp in Australia has increased.

          The homogenised data shows a warming increase of 40% over the original data; adjustments should be neutral; the adjustments are not neutral. This is the point: the adjustments, themselves have created a 40% part of the warming trend; that combined with the admitted warming effect of metrification creates a trend which is 50% due to artifacts.

          That makes any policy based on AGW being proved by the BOM temperature record scientifically doubtful.

          The further problems with max and min discovered by Ed as outlined by Jo in this post are in addition to the above problems.

          • #
            Adam Smith

            The homogenised data shows a warming increase of 40% over the original data; adjustments should be neutral; the adjustments are not neutral.

            Excuse me!? Why should the homogenisation of the data not reveal a particular trend?

            The whole point of homogenising the data is to account for all the variables in measurement equipment and procedures and changes to location over time. After you do that as carefully as possible, if there is a trend then that’s that.

            For you to assert that the homogenised data should simply reflect exactly the trend (or lack of a trend) present in the raw data implies that you are starting off with a particular conclusion in mind and are just going to massage the homogenisation procedure until you get your expected “neutral” result!!!

            That is NOT SCIENTIFIC! That is by definition COMPLETELY BIASED! That is starting with a conclusion and then forcing the data to fit!

            —Here Ads completely misinterprets the sentence he quotes. The quote is about adjustments being neutral. Ads comments on the homogenisation being neutral. Is it deliberate or is it just that his email-adviser was not available? In any case, this material dilutes the thread. The false bravado, THE ALL CAPS, is misplaced. Timewasting – Jo.

          • #

            Adam, it’s not scientific to find that data sets need adjustments which all increase the “trend” of the data.

            Your “misinterpretation” of this point is becoming a bore. The raw trend could be anything, but the adjustments should have a random effect (both up and down) on the trend, and the BOM has even told us they are “neutral” when clearly they are not.

          • #
            crakar24

            If the adjustments were neutral then what would be the point, Adumb Smith (Reg TMK to Mark D) has once again seized on a non topic and is attempting to create a debating point where there is none.

            The fact is the only “adjustments” that can be made are:

            #1 When the time of day changes for the observation, i.e. instead of reading the temp at 9 am you read it at 9:30 am (this would tend to create a higher average; if you read it at 8:30 am, a lower average)

            #2 When the site has been moved, this will render all previous data useless from a trend perspective unless you come up with a fudge factor (either up or down) to compensate but how exactly do you do this with any degree of accuracy?

            #3 When the station becomes enveloped into the UHI umbrella (as a rural town gets bigger). Once again this would tend to create a higher average.

            Changing of equipment (old for new) should not change a thing as both old and new equipment are calibrated to read zero when it is zero or 23.4 when it is 23.4.

            No Adumb, you are wrong, and to prove it to yourself pick a station and look at the “raw data”; if the adjustments are neutral as the BOM says, then the processed data should have the same trend. I think you will find that a majority of sites will have raw data that shows zero or very little trend, but the processed data will show a very large positive trend, which can only mean the trend is created via adjustments…………..or are you now going to claim the raw data cannot measure the AGW signal?
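            That check is simple enough to sketch: fit a least-squares trend to a station’s raw and adjusted series and compare the slopes. The numbers below are made up for illustration, not real station data:

```python
# Compare the linear trend of a raw series with its adjusted version.
# If the adjustments were trend-neutral, the two slopes should be close.
# The series are invented illustrative numbers, not real station data.

def slope(ys):
    """Ordinary least-squares slope of ys against 0, 1, ..., n-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

raw      = [18.0, 18.1, 17.9, 18.2, 18.0, 18.1]
adjusted = [17.6, 17.8, 17.8, 18.1, 18.2, 18.4]  # adjustments add warming

print(slope(raw) < slope(adjusted))  # → True
```

            A gap between the two slopes does not by itself say which series is right, but it does quantify exactly how much of the trend was introduced by the adjustments.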

          • #
            Adam Smith

            The homogenised data shows a warming increase of 40% over the original data; adjustments should be neutral

            This statement makes no sense at all!

            You can’t do a statistical analysis on the raw data that proves anything! The reason the data needs to be homogenised is to account for all the variabilities that result in temp observations changing over time!

            So to say that there is a trend in the raw data that is different in the homogenised data is meaningless.

  • #
    Adam Smith

    Team Smith are, of course, obfuscating around the actual issue of the obvious glaring errors in the recorded temperatures as presented above.

    The numbers presented above aren’t the recorded numbers.

    I have lived in places like Marble Bar, Alice Springs, Longreach and Bourke. If ever there had been a time when the difference between daily maximum and minimum temperatures – regardless of which way around – had been less than one degree – and that in the balmy range of 18 to 24 degrees C, it would be the stuff of legend.

    The Min and Max temperatures don’t measure “temperate difference”.

    Ballads and poems would have been written about it, and the local pubs would have copies of framed newspaper cuttings on their walls.

    So anecdotes are better than actual data? Whatever.

    I’m with Ian Hill @ 15.3 on this one. Two teams “homogenising” data to fit a predetermined result; one on maximums, one on minimums, with insufficient communication between the two groups.

    You have no evidence to backup this claim.

    .

    But then, I’ve worked in the public service too.

    Irrelevant.

    • #
      memoryvault

      The numbers presented above aren’t the recorded numbers.

      Irrelevance. A reader of even average intelligence would know what was inferred. But then, we are talking about Team Smith.

      The Min and Max temperatures don’t measure “temperate difference”.

      I have no idea what “temperate difference” is supposed to mean. (Whatever Team Smith suppose it to mean I guess). I’m pretty sure it also would have little meaning to the people of Marble Bar, Alice Springs, Longreach and Bourke.

      However, I am pretty sure that the people of Marble Bar, Alice Springs, Longreach and Bourke would be aware of any period they had ever been through where the temperature throughout an entire day varied by less than one degree, and remained in the comfort zone of 18 to 24 degrees.

      So anecdotes are better than actual data? Whatever.

      Well the article itself establishes that the “actual data” (as supplied by the BoM) is total crap. So it is reasonable to presume that even fairy stories are better than the “actual data”. As an aside, I suggest sometime in your up to now worthless existence, you get yourself out to some of the pubs in these outback towns and learn what is actually recorded in pictures on their walls. You never know, you might just learn something.

      You have no evidence to backup this claim.

      So now I need “evidence” to support the suggestion that I support an opinion?
      What next, my comments need to be peer reviewed?

      Irrelevant.

      Why? Are you suggesting people in the public service learn nothing from the experience?

    • #
      Ian Hill

      MV said:

      I’m with Ian Hill @ 15.3 on this one. Two teams “homogenising” data to fit a predetermined result; one on maximums, one on minimums, with insufficient communication between the two groups.

      and then Adam Smith said:

      You have no evidence to backup this claim.

      Of course it didn’t happen like that. I was just putting forward a plausible explanation for the nonsense data. One team would have worked on it all, which makes their incompetence even worse.

      Above siliggy quoted the raw data for Kalumburu for Dec 3, 1975 in Deg C:

      Min 25.0
      Max 25.5

      which were adjusted to 24.5 and 23.8 respectively.

      The same numbers for the day before that:

      Dec 2, 1975, raw min 25.5, max 29.3 which were adjusted to 25.1 and 27.6 respectively.

      For both days the maximum was adjusted down by 1.7 degrees C, while the minimum was adjusted down by 0.4C (Dec 2) and 0.5C (Dec 3). Trouble was, on Dec 3rd 1975 there was very little difference between the min and max… oops!

      Note that the 1975 observations are from Kalumburu Mission (Station 1021) whereas the ACORN Station is Kalumburu (#1019).

  • #
    Dave

    .
    ?????

    Are you in a rainforest Adam?

    “temperate difference”

    Is that a noun or a verb? or both? or either? or YES!

    Or maybe just calmly indifferent????

    I think maybe you could have lost the plot to this whole discussion on BOM’s new data set?

  • #
    Adam Smith

    I’ll just quote this in full because it explains why sometimes the minimum can be higher than the maximum:

    4.3 Observation time standards and changes over time
    The current Bureau of Meteorology standard for daily maximum and minimum temperatures is for both to be measured for the 24 hours ending at 0900 local time, with the maximum temperature being attributed to the previous day. This standard has been in place since 1 January 1964 (except at some automatic weather stations – see below), although the data suggest that it was not until late 1964 or early 1965 that the standard was adopted across the bulk of the network, with a small number of sites not changing until the late 1960s or early 1970s. The situation in the period from 1932 to 1963 was considerably more complex, both through the existence of multiple standards and the varying ways in which they were implemented. As a general rule, sites adopted the following procedures:
    • Bureau-staffed sites, and a few others (mostly lighthouses and similar) that made observations around the clock, used a nominal midnight-midnight day. In practice, in most cases the thermometers were still reset at 09:00 or 15:00, then read at midnight, with the maximum or minimum read at midnight substituted for the value read earlier in the day if it surpassed it. In practice this meant that the ‘midnight-midnight’ minimum would actually be for the 33 hours ending at midnight, although the impact of this longer observation period is minimal as the number of occasions when the 33-hour minimum differs from the 24-hour value (i.e. where the temperature falls lower between 15:00 and 00:00 than it does in the succeeding 24 hours) is negligible.
    • Sites that made observations at 09:00 and 15:00, as most co-operative sites did, reset their maximum thermometers at 09:00 and their minimum thermometers at 15:00. In effect this resulted in minimum temperatures being for the 24 hours ending at 15:00. Maximum temperatures were measured from 09:00 to 09:00, but observer instructions (e.g., Bureau of Meteorology 1925, 1954) were to revert to the maximum measured for the 6 hours from 09:00 to 15:00 if the 24-hour maximum measured at 09:00 the following day was ‘close to’ the current temperature.

    Page 24 of this document:
    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    So when you actually look into the subject, as Ad says, it is possible to see how sometimes the minimum can be higher than the maximum, but it is also true that these events are very rare.
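
The mechanism in the quoted first paragraph (both extremes read at 09:00, with the max attributed to the previous day) means a given day's recorded min covers the 24 hours ending 09:00 that day, while its recorded max covers the 24 hours starting 09:00 that day: two non-overlapping windows. A minimal sketch, with invented hourly figures, assuming a sharp cool change arriving at 09:00:

```python
# Sketch of the attribution rule described in the quote above. The hourly
# temperatures are INVENTED to simulate a cold front passing at 09:00:
# warm for the 24 h ending 09:00, cold for the 24 h starting 09:00.

warm_24h = [24.0 + 0.5 * (h % 3) for h in range(24)]  # 24.0-25.0 C before the change
cold_24h = [19.0 + 0.5 * (h % 3) for h in range(24)]  # 19.0-20.0 C after the change

recorded_min = min(warm_24h)   # min window: 24 h ending 09:00 (still warm)
recorded_max = max(cold_24h)   # max window: 24 h starting 09:00 (already cold)

print(f"recorded min {recorded_min}, recorded max {recorded_max}")
print("min > max?", recorded_min > recorded_max)
```

Under this toy scenario the recorded min (24.0) exceeds the recorded max (20.0), which is the counter-intuitive result being argued about; how often real data produce it is a separate question.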

    • #
      Adam Smith

      So I point out a methodological reason for why some minimums very occasionally exceed maximums but no one will reply [snip you don’t have patience] ED

      • #
        The Black Adder

        I am very sceptical of you Mr. Smith.

        Why do you have an excuse for every single mistake that is continually made by Green Ideologues around the World.

        Why do they make such mistakes??

        Can you explain the mistake that brought us the Hockey Stick Illusion?

        • #
          Adam Smith

          Why do you have an excuse for every single mistake that is continually made by Green Ideologues around the World.

          If you spend 15 minutes reading a document you can LEARN why sometimes mins exceed max temps, because the readings are taken over different periods.

          In this instance I did all the hard work for you and presented you with a link and even copied the relevant quote!

          Now you may consider this an “excuse”, but it is actually a rational explanation that explains how these counter intuitive results can occur.

          But I’m still curious why 1000 ‘suspect’ data points, many of which may not actually be wrong, somehow invalidates all 5 million data points.

          That doesn’t make any sense to me and strikes me as extraordinarily unscientific.

          ———————————
          Ads has gone to great lengths to show that sometimes mins might exceed maxes, but if that is the case why did the BOM insist that their maxes were larger than the mins, and say they’d checked it when they hadn’t? And then, having missed the point that matters (yet again), he uses the term “extraordinarily unscientific.” This from a man who cannot name evidence to back up his belief but refers to consensus. But he is an interesting study in false bravado and methods for distracting people from the point of the thread. – Jo

          • #
            Siliggy

            Adam Smith says:

            In this instance I did all the hard work

            Oh did you? Did you check to see if the BOM raw data ever backs your theory?
            I did for the first three above and found you to be wrong with them. So now do some of the piss easy work and show us the instances in which your theory is supported by the real data!
            Kalumburu 19751203 MAX 25.5 Min 25.0
            Halls Creek 19140517 Max 21.2 Min 19.4
            Marble Bar 19420610 MAX 20.1 Min 17.9

      • #
        memoryvault

        .
        You haven’t come anywhere near to explaining why daily maximum and minimum temperatures at places like Marble Bar, Alice Springs, Longreach and Bourke, would be within a one degree range, AND within the maximum comfort range of 18 to 24 degrees.

        But then, since you have never lived in these places, or in fact I suspect, anywhere more than 20 kilometres from a McDonald’s, your ignorance is to be understood.

        Not accepted, but understood.

        • #
          Adam Smith

          You haven’t come anywhere near to explaining why daily maximum and minimum temperatures at places like Marble Bar, Alice Springs, Longreach and Bourke, would be within a one degree range, AND within the maximum comfort range of 18 to 24 degrees.

          Well if you bothered to read post #22 you would see that I HAVE explained why mins can exceed maxes and why mins and maxes can be very close or even identical.

          It relates to the PROCEDURE of how the temp data was collated and the fact the MIN could be recorded over a longer duration.

          But again, it seems you are feeling self-content with a simplistic lie rather than reading and understanding the complexity of how temps were actually measured and how this affects the way the data is collated.

      • #
        Mark D.

        Why does BOM make it a practice to eliminate these readings Adumb? You are arguing against the venerable BOM. Are you turning skeptic?

        Where is your specific evidence that these suspect records ARE correct?

        • #
          Adam Smith

          Why does BOM make it a practice to eliminate these readings Adumb?

          Excuse me, where does the BOM say it just omits these data?

          Try reading Chapter 6 of this document before embarrassing yourself again by asserting something that isn’t true:
          http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

          • #
            Mark D.

            Eliminates, Adumbass, as in “excluded” them:

            “As a result of these follow-up investigations, 18,400 individual observations and 515 blocks of observations of three or more days were flagged as suspect and excluded from further analysis or amended,…….

            Maybe you should read your own links?

          • #
            Brian of Moorabbin

            Ooo, nice burn MarkD

      • #
        cohenite

        So I point out a methodological reason for why some minimums very occasionally exceed maximums but no one will reply

        This is serious, isn’t it, Adam?

        The issue of max and min temperatures was looked at by Lowe in 2009.

        Lowe concludes that the minimum temperature often occurred during the day, within the period up to an hour after sunrise. Would a 33 hour rotation as described by BOM catch this and explain the min>max conundrum? Would the asymmetrical resetting, 9.00 for max and 15.00 for min, also contribute to a legitimate reason for min>max?

        If we accept that the coldest part of the day is within the sunrise period then the answer would be no, because neither of the methods covers less than 24 hours and both would therefore always catch the minimum.

        I must say this is strange:

        Maximum temperatures were measured from 09:00 to 09:00, but observer instructions (e.g., Bureau of Meteorology 1925, 1954) were to revert to the maximum measured for the 6 hours from 09:00 to 15:00 if the 24-hour maximum measured at 09:00 the following day was ‘close to’ the current temperature.

        So, if on day 1, say Tuesday, a max temperature of say 70F is achieved between 9.00 and 15.00, and at 9.00 the next day, Wednesday, a temperature of 69F is reached, then the 70F is reverted to. On the face of it why would anyone do otherwise? Or are we saying the 70F is for Tuesday but not Wednesday, or vice-versa?

    • #

      From the document’s disclaimer:

      CSIRO and the Bureau of Meteorology advise that the information contained in this publication comprises general statements based on scientific research. The reader is advised and needs to be aware that such information may be incomplete or unable to be used in any specific situation. No reliance or actions must therefore be made on that information without seeking prior expert professional, scientific and technical advice.

      (emphasis mine)

      Messrs Smith: Is your assertion that the records where (Min > Max) are valid based on them being collected by co-operative sites (with manual readings)?

      The table above shows records for Canberra. Two of them from times when the data collection was automated (digital – measured every minute) and probably done directly by BoM.

      Table 1 of the document has errors e.g. it says that Perth has been collecting data since 1910 but gives the 2011 location. The Perth weather station was in a very different place in 1910.

      The 2011 coordinates put the station at Perth Airport – the actual site name for ID 009021. Which BoM documents as starting in 1944.

      PERTH REGIONAL OFFICE (009034) commenced in 1876 and operated until 1992 which would have documented the growing UHI quite nicely, being in the Perth CBD near Langley Park.

      The present PERTH METRO station is on the gravel in Mt Lawley.

      Yes; I do understand that procedures mean that we won’t know today’s maximum temperature even by midnight; we have to wait until 9 a.m., by which time a chilly day could be reported to have a maximum several degrees warmer than recorded at any time on the day on which it purports to report. Such surreal figures are one reason why the data set doesn’t account for much.

      Taking just the arithmetic mean of two extremes of temperature doesn’t give a valid indication of the average thermodynamic state of the climate system at that location on that day.

      Homogenization of data sets is equally surreal.

      The only way to detect “global warming” is by a measurement of total system enthalpy. Temperature is only an “intensity”; it’s not a quantity of anything. Any min/max temperature records, no matter the “quality”, are therefore useless for the nominal purpose.

      Persisting in their misuse indicates either a substantial lack of competence at BoM/CSIRO or a deliberate act of deception.

  • #
    Adam Smith

    The Black Adder
    July 16, 2012 at 8:42 pm
    Adam Smith.

    You really are a piece of work.

    I thought Tim Flim Flannery was Australia`s biggest Twat, but you take the cake!

    How did we manage to build the Pyramids thousands of years ago if we could not record Data correctly.

    How did we manage to navigate the world`s oceans hundreds of years ago if we could not record Data correctly.

    How did we invent the modern steam engine if we did not record Data correctly.

    You are implying that anyone born more than 100 years ago was inept and incapable of recording data correctly!!

    I am implying that anyone associated with this ACORN-SAT is inept and corrupt!!

    Are you corrupt Mr Smith?

    Why would you support this?

    Get a life…

    WOW! Your reversion [SNIP quit starting your replies with admonishments.] ED

    Now if you don’t think people make mistakes when reading, copying, and entering data, please do the following experiment.

    1 Get some pens and a lot of paper.
    2 Write down all of the ACORN data by reading it off a computer screen onto pieces of paper.
    3 When you have finished this step, cross check your work from your paper ‘copy’ against the data in the computer text files.
    4 Report back concerning how accurate it is, expressed as a percentage.
    5 Once this is finished, take your paper copy of the data and enter it into a spreadsheet; Excel format should be fine.
    6 Once you have finished this, cross check your spreadsheet against the text files on the ACORN website.
    7 Report back on the accuracy of your data, expressed as a percentage.

    After you’ve done all that you can then tell me if people don’t occasionally make mistakes when handling data on paper and on computers.
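
The cross-check in those steps boils down to comparing two copies of the same series and reporting the mismatch rate. A minimal sketch with invented values (a real run would compare the ACORN text files against the hand transcription):

```python
# Sketch of the cross-check in the steps above: compare a hand-made
# 'copy' of a data series against the original and report the mismatch
# rate as a percentage. Both short lists are INVENTED examples.

original = [23.4, 25.0, 25.5, 29.3, 21.2, 19.4, 20.1, 17.9]
copied   = [23.4, 25.0, 25.5, 23.9, 21.2, 19.4, 20.1, 17.9]  # one digit slip

mismatches = sum(1 for a, b in zip(original, copied) if a != b)
error_rate = 100.0 * mismatches / len(original)
print(f"{mismatches} of {len(original)} entries differ "
      f"({error_rate:.1f}% error rate)")
```

Even a small per-entry error rate, scaled up to millions of entries, implies thousands of slips, which is exactly why automated post-entry checks matter.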

    • #
      AndyG55

      The Acorn data set exists on computer.

      They say it has been checked.

      A blogger found simple data inconsistencies, easily.

      So, someone at BOM goofed: either they didn’t run that specific test, or they were incompetent at running that test.

      • #
        Adam Smith

        The Acorn data set exists on computer.

        WOW REALLY!???

        And explain to me how most of the data gets into a computer? (HINT: the answer isn’t “magic”)

        Oh, and while you’re at it, explain to me how it gets from a regular thermometer into a computer?

        They say it has been checked.

        Sure, but did they say it would be 100% free of any errors whatsoever? Do you think it is possible to make that claim with such a huge data set?

        A blogger found simple data inconsistencies, easily.

        Actually these are not necessarily inconsistencies as I explained in post #21, but of course that post has been ignored because it is inconvenient to the prominent theme of this thread.

        So, someone at BOM goofed: either they didn’t run that specific test, or they were incompetent at running that test.

        Actually this has not been demonstrated in this thread. There are reasons why occasionally minimums can exceed maximums, because at some locations for some periods the min and max were taken at different times covering different time periods.

        • #
          Jesus saves

          Oh dear, it’s happened again, the bloggers here have once again had their pants thoroughly pulled down by Team Smith. Nary a coherent reply, this is most entertaining!

          [Useless spacer post. Do better or get snipped] ED

        • #
          Heywood

          Actually Adam I think you missed Andy’s point..

          I think he was referring to the point that although it had been transcribed from paper records, the BOM could have run a simple algorithm, which an unpaid volunteer has done, and found these errors before making the data set public. The fact that they did not do even this simple check could mean they didn’t do more complex error checking, and it places a level of doubt on the validity of the remaining data.
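
The "simple algorithm" in question is essentially a one-pass scan for min > max. A minimal sketch, assuming the data are available as (site, date, min, max) records; the first three rows use figures quoted elsewhere in this thread, the Canberra row is a made-up valid day:

```python
# Minimal sketch of the internal-consistency check discussed above:
# flag any record where the recorded minimum exceeds the recorded
# maximum. Three rows are figures quoted in this thread; the Canberra
# row is INVENTED as a valid control.

records = [
    ("Kalumburu",            "1975-12-03", 25.0, 25.5),
    ("Halls Creek",          "1914-05-17", 19.4, 21.2),
    ("Canberra",             "2012-01-01", 12.0, 28.0),
    ("Kalumburu (adjusted)", "1975-12-03", 24.5, 23.8),
]

suspect = [(site, date) for site, date, tmin, tmax in records if tmin > tmax]
for site, date in suspect:
    print(f"SUSPECT: {site} {date}: min exceeds max")
print(f"{len(suspect)} suspect record(s) out of {len(records)}")
```

Run over the full ACORN files, a scan like this is what turned up the ~1000 suspect days; the point of contention is that a published quality-control step should have caught them first.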

          At the very least, there should be a review of the error checking procedures, if not a complete audit of the data.

          It’s not like it needs to be accurate, they only formulate (usually crap) government policy using it…

          • #
            Adam Smith

            I think he was referring to the point that although it had been transcribed from paper records, the BOM could have run a simple algorithm, which an unpaid volunteer has done, and found these errors before making the data set public.

            Sorry mate, but you are ignoring my post #22 where I copy / pasted the explanation for how the BOM’s procedures created the potential, in extraordinary circumstances, for the min temp to exceed the max temp because the min temp was only measured once every 33 hours compared to once every 24 hours for the max temp.

            I also pointed out that the very first value in Jo’s excerpt for this post has actually been changed. If you look it up in the database now you find out that the values are reversed, which suggests that this is an example of a data entry error that has been fixed.

            But I appreciate it is just easier to point and laugh rather than actually read the BOM’s documents in order to learn about all the complex factors that went into recording the observations, getting them digitised and then analysing them.

          • #
            BobC

            Adam Smith
            July 16, 2012 at 11:53 pm

            [From Heywood:]
            “I think he was referring to the point that although it had been transcribed from paper records, the BOM could have run a simple algorithm, which an unpaid volunteer has done, and found these errors before making the data set public.”

            Sorry mate, but you are ignoring my post #22 where I copy / pasted the explanation for how the BOM’s procedures created the potential, in extraordinary circumstances, for the min temp to exceed the max temp because the min temp was only measured once every 33 hours compared to once every 24 hours for the max temp.

            Heywood’s point seems to just have whizzed right over your head. The fact (of which you seem to be so proud to have brought up) that bad procedure — conjoining Tmax and Tmin measurements, when made in non-overlapping time periods — might sometimes result in Tmin > Tmax is completely irrelevant to BOM’s claim to have checked the data and corrected such occurrences, yet their released data contains hundreds of still-uncorrected instances.

            You seem to be the only one here who can’t grasp this simple fact (as AndyG55 put it):

            So, someone at BOM goofed: either they didn’t run that specific test, or they were incompetent at running that test.

            Exactly HOW Tmin>Tmax errors arose is irrelevant to the fact that the BOM claims to have corrected them, but didn’t.

          • #
            Heywood

            “Sorry mate”

            Adam, please don’t assume for a minute that I am your mate.

            I find your carpet bombing of this blog extremely rude and annoying and would rather have a beer with Tristan, MattB or even Johnny Brookes before calling you a mate.

            you are ignoring my post #22 where I copy / pasted the explanation for how the BOM’s procedures created the potential, in extraordinary circumstances, for the min temp to exceed the max temp because the min temp was only measured once every 33 hours compared to once every 24 hours for the max temp.

            No, I am not ignoring it. I am not arguing the validity of your “copy/paste” at #22, I am merely saying that there is a possibility that the data checking was insufficient, and that looking into it further may be appropriate.

            Are you against re-auditing the data? Do you think the BOM has something to hide?

        • #
          AndyG55

          It’s called “post checking”. It’s done AFTER the numbers are in the computer and after all their manipulations.

          And they can’t even do that right ! DOH !

          and you wonder why I used the term … IDIOT !!!!!!!

          someone please find me a plank of wood to explain to.

    • #
      The Black Adder

      Poor Mr Smith.

      No, your arguments are not correct.

      How about you check Accuracy with Charles Sturt, from the previous post.

      He was able to record Data accurately from 1828 that showed higher temps than now!!

      Obviously ole Sturty Boy did not have to report to BOM or Team Smith… !!

      • #
        Adam Smith

        Poor Mr Smith.

        No, your arguments are not correct.

        Well thank you for asserting this, but generally backing claims up with evidence is more persuasive.

        How about you check Accuracy with Charles Sturt, from the previous post.

        He was able to record Data accurately from 1828 that showed higher temps than now!!

          Um hello! One really hot day years ago proves what exactly? There have been other 50 degree days in parts of S.A. over the last 100 years too. They are very rare, but they do occasionally happen.

          You seriously want me to believe that you’re right because of one extraordinary temp measurement, but apparently 5 million conducted under more rigorous procedures are all wrong?

        Obviously ole Sturty Boy did not have to report to BOM or Team Smith… !!

        Nice rhetorical flourish, but clearly it is meaningless tosh.

        • #
          Dave

          .
          Team

          You seriously want me to believe that you’re right because of one extraordinary temp measurement, but apparently 5 million conducted under more rigourous procedures are all wrong?

          Well, what ratio will you accept? 1:5,000,000 or 1,000:5,000,000?

          That’s a big margin you operate in Adam Team Adam!

          What’s BOM’s? ANSWER: Maximum is less than minimum will do – 1:1 – negative?

          Also what’s your middle name?

          • #
            Adam Smith

            What’s BOM’s? ANSWER: Maximum is less than minimum will do – 1:1 – negative?

            Hi Dave!

            It seems you didn’t read my post #21 that explains why some mins can exceed maxes, so I’ll copy paste it for you here:

            I’ll just quote this in full because it explains why sometimes the minimum can be higher than the maximum:

            4.3 Observation time standards and changes over time
            The current Bureau of Meteorology standard for daily maximum and minimum temperatures is for both to be measured for the 24 hours ending at 0900 local time, with the maximum temperature being attributed to the previous day. This standard has been in place since 1 January 1964 (except at some automatic weather stations – see below), although the data suggest that it was not until late 1964 or early 1965 that the standard was adopted across the bulk of the network, with a small number of sites not changing until the late 1960s or early 1970s. The situation in the period from 1932 to 1963 was considerably more complex, both through the existence of multiple standards and the varying ways in which they were implemented. As a general rule, sites adopted the following procedures:
            • Bureau-staffed sites, and a few others (mostly lighthouses and similar) that made observations around the clock, used a nominal midnight-midnight day. In practice, in most cases the thermometers were still reset at 09:00 or 15:00, then read at midnight, with the maximum or minimum read at midnight substituted for the value read earlier in the day if it surpassed it. In practice this meant that the ‘midnight-midnight’ minimum would actually be for the 33 hours ending at midnight, although the impact of this longer observation period is minimal as the number of occasions when the 33-hour minimum differs from the 24-hour value (i.e. where the temperature falls lower between 15:00 and 00:00 than it does in the succeeding 24 hours) is negligible.
            • Sites that made observations at 09:00 and 15:00, as most co-operative sites did, reset their maximum thermometers at 09:00 and their minimum thermometers at 15:00. In effect this resulted in minimum temperatures being for the 24 hours ending at 15:00. Maximum temperatures were measured from 09:00 to 09:00, but observer instructions (e.g., Bureau of Meteorology 1925, 1954) were to revert to the maximum measured for the 6 hours from 09:00 to 15:00 if the 24-hour maximum measured at 09:00 the following day was ‘close to’ the current temperature.

            Page 24 of this document:
            http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

            So when you actually look into the subject, as Ad says, it is possible to see how sometimes the minimum can be higher than the maximum, but it is also true that these events are very rare.

            Also what’s your middle name?

            I don’t have one.

          • #
            cohenite

            We saw your explanation, Adam, of the min>max issue; there were a number of replies; I repeat mine:

            The issue of max and min temperatures was looked at by Lowe in 2009.

            Lowe concludes that the minimum temperature often occurred during the day, within the period up to an hour after sunrise. Would a 33 hour rotation as described by BOM catch this and explain the min>max conundrum? Would the asymmetrical resetting, 9.00 for max and 15.00 for min, also contribute to a legitimate reason for min>max?

            If we accept that the coldest part of the day is within the sunrise period then the answer would be no, because neither of the methods covers less than 24 hours and both would therefore always catch the minimum.

            I must say this is strange:

            Maximum temperatures were measured from 09:00 to 09:00, but observer instructions (e.g., Bureau of Meteorology 1925, 1954) were to revert to the maximum measured for the 6 hours from 09:00 to 15:00 if the 24-hour maximum measured at 09:00 the following day was ‘close to’ the current temperature.

            So, if on day 1, say Tuesday, a max temperature of say 70F is achieved between 9.00 and 15.00, and at 9.00 the next day, Wednesday, a temperature of 69F is reached, then the 70F is reverted to. On the face of it why would anyone do otherwise? Or are we saying the 70F is for Tuesday but not Wednesday, or vice-versa?

          • #
            BobC

            cohenite
            July 17, 2012 at 9:42 pm
            We saw your explanation Adam to the min>max issue; there were a number of replies; I repeat mine:

            Really now, cohenite! Adam Smith didn’t present an argument, he just copied and pasted some text, as he admits:

            Hi Dave!

            It seems you didn’t read my post #21 that explains why some mins can exceed maxes, so I’ll copy paste it for you here:

            I’ll just quote this in full because it explains why sometimes the minimum can be higher than the maximum:

            Now you are expecting him to be able to engage in a logical discussion about it? Nothing in our experience with Adam so far suggests he is capable of any such thing.

        • #
          Sonny

          Can you imagine the alarmist frenzy whipped up by climastrologists if we got temperatures over 50 degrees in the summer of 2013?

          They would be chomping at the bit. Headlines would read
          “Hottest day on record and it’s all your fault.”
          Thankfully, one hundred years ago the world was not infected with the same pseudoscientific stupidity as we see exemplified by “Adam Smith”.

          • #
            Adam Smith

            Well rather than [SNIP quit starting your replies with admonishments.] ED
            you may have liked to ask me about my opinion on this matter?

            If you did I would’ve said that individual days don’t prove or disprove climate change or global warming.

            It is the long term trend that counts.

            So the previous thread about a 52 degree day 150 years ago doesn’t prove anything and if it happened to get to 52 degrees in the same place tomorrow that wouldn’t prove anything either.

            But a data set of 5 million or so (that’s my rough guess) numbers that shows a trend is more persuasive evidence about climate change in Australia, but of course doesn’t mean the whole globe is warming.

          • #
            Sonny

            Yep “it’s the trend that’s important”. That’s why climastrologists do everything in their power to ensure the trend is alarming. (omg the world has warmed between like 0.5 and 1 degree in 100 years!)

            How about the trend of no global warming for over ten years? Oops, I forgot that’s not important.

  • #
    DougS

    The terrifying thing about this and similar shockingly egregious errors committed by climastrologers is that there will be no comeback. It will simply be changed, hidden or ignored.

    Will anyone be sacked? – No

    Will anyone apologise? – No

    Demotion(s)? – No

    Back of hand slap? – No

    Nothing!

    • #
      The Black Adder

      Doug, doug, doug….

      they get promoted to be Chief Climate Commissioners and the like … 🙂

  • #
    inedible hyperbowl

    The thing we know about authoritarian personalities is that they are inclined to accept whatever “authorities” say as the unquestionable truth. Jo has clearly upset the troll by questioning an authority (how dare she?).
    I will leave it to others to ponder why the left (in general) so readily accepts the “word” of the authority, be it the BoM, CSIRO (economists and lawyers), UN dept. xxx etc. Their word you will note is the unassailable truth.

    • #
      Adam Smith

      I will leave it to others to ponder why the left (in general) so readily accepts the “word” of the authority, be it the BoM, CSIRO (economists and lawyers), UN dept. xxx etc. Their word you will note is the unassailable truth.

      Woah! I’m calling straw man on this argument too. There’s lots of authorities that I don’t trust, such as News Ltd., various big banks, the Liberal Party of Australia, and the leaders of several mining companies, notably Twiggy Forrest and Gina Rinehart.

      There’s lots of authorities I don’t trust at all.

      • #
        Tristan

        To be fair, I’m not sure that the ALP is that much less obfuscatory, deceptive or likely to abandon commitments than our lovely Libs.

    • #
      Rereke Whakaaro

      I will leave it to others to ponder why the left (in general) so readily accepts the “word” of the authority …

      … when it comes from left of centre entities.

      As demonstrated by Adam in his response:

      There’s lots of authorities that I don’t trust, such as News Ltd., various big banks, the Liberal Party of Australia, the leaders of several mining companies

      … which are right of centre entities.

      And yet he calls the original comment a straw man – weird logic.

  • #
    Sonny

    Well done Adam Smith.
    I am convinced you have demonstrated that 1000 data points with min greater than max does not in and of itself nullify the theory of anthropogenic global warming.

    I’m waiting for Gergis’ new hockey stick to come out for that.

    • #
      Tristan

      On one hand I wouldn’t have expected Aus to generate a hockey stick, given that WA temps have been pretty stable for the past century.

      On the other, word on the street is that the corrected Gergis paper won’t be much different from the original.

      We’ll see I guess. Being too smarmy about any pronouncements risks a hat-eating scenario. 😉

      • #
        Sonny

        Phew what a relief,
        The original showed “unprecedented” warming that was within the error band, i.e. statistically no warming at all. But that didn’t stop the lame stream media from hyping it up…
        Gergis is a scientific fraud.

  • #
    Adam Smith

    Dave
    July 16, 2012 at 10:17 pm
    No one would ever do this to you Adam Smith unless they were certain that all the data was 100% correct – even 1000 errors in a couple of million in your rules could deem you an IDIOT – which would be an injustice!

    Where has the BOM argued that their data is 100% accurate? I think you just made that up.

    Well actually the BOM doesn’t say the data is 100% correct. What it does claim is this:

    The error rate in temperature observations is low – experience with operational quality control procedures at the Bureau of Meteorology in recent years suggests that it is in the order of a few tenths of one per cent…

    Taken from page 30 of this document
    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    But hey, I appreciate that it is easier arguing against a straw man than engaging with what the BOM actually says and does.

    But I’m sure you’ll reply telling me that you could collate better data using a Commodore 64 that would of course be 100% free of errors.

    • #
      Rereke Whakaaro

      Taken from page 30 of this document …

      … which is just a report. They can actually put anything they like in a report.

      What is important are the technical details surrounding accuracy of reporting of official records as defined in their governing legislation, operating charter (or equivalent), or their agreement with the Minister, depending on which sources take precedence in law.

      I agree, that it will not be 100%, but it will be something a lot more objective, and considerably more precise, than the word “low”.

      If you want to argue that the quality is within specifications, then find the appropriate metric in the governing document.

      When you have done that, you will have a solid mathematical basis for your point of view.

  • #
    Heywood

    I’m confused….

    I thought this was the Jo Nova blog…

    But what I have come across is the Adam Smith blog… and it’s boring the sh*t out of me…

    The point I think he/they are missing is that if nearly 1000 of the records show min > max temperature, and this is a pretty big error, what confidence do you have that the remaining data doesn’t contain less obvious errors?

    I’m not saying we just dump the data and start again, but maybe it could do with a thorough audit..

    “but generally backing claims up with evidence is more persuasive.”

    I am also looking forward to a warmist, any warmist, backing up the claim, with empirical evidence, that CO2 drives CAGW. I wont hold my breath for that either.

    • #
      Adam Smith

      The point I think he/them is missing, is if that nearly 1000 of the records show the min > max temperature, and this is a pretty big error, what confidence do you have that the remaining data doesn’t contain less obvious errors?

      Well thank you for demonstrating that you don’t understand anything about this issue.

      If you bothered to read post #22 you would see the explanation for why very occasionally minimum temps can exceed or equal maximum temps.

      But clearly you jumped in before bothering to spend 15 minutes learning about the complexity of BOM’s measurement procedures and how they have changed over time.

      • #
        Heywood

        “If you bothered to read post #22”

        That provides one possible reason for the anomalies, but another possibility is that the data checking was insufficient.

        Surely another look at the data, and maybe even an audit wouldn’t hurt.

        It would only make it more accurate, wouldn’t it?

      • #
        BobC

        Heywood
        July 16, 2012 at 11:16 pm
        I’m confused….

        I thought this was the Jo Nova blog…

        But what I have come across is the Adam Smith blog… and it’s boring the sh*t out of me…

        You aren’t the only one, Heywood. On the other hand, “Adam Smith” demonstrates his mental disability every time he responds. For example, the reasonable conclusion you make:

        … if that nearly 1000 of the records show the min > max temperature, and this is a pretty big error, what confidence do you have that the remaining data doesn’t contain less obvious errors? I’m not saying we just dump the data and start again, but maybe it could do with a thorough audit..

        Is “responded” to by the bot clone A.Smith thusly:

        Adam Smith
        July 16, 2012 at 11:45 pm

        Well thank you for demonstrating that you don’t understand anything about this issue.

        If you bothered to read post #22 you would see the explanation for why very occasionally minimum temps can exceed or equal maximum temps.

        Talk about not understanding anything!
        Despite a half-dozen people trying to explain it to him/it, A.S. still hasn’t grasped that the issue is not HOW errors might have gotten into the raw data — the issue is that the BOM specifically claims that the Tmin > Tmax error is checked for, flagged, and corrected; Yet there are MORE such errors in the “checked” data than in the raw data. As you pointed out, this doesn’t give one a warm feeling about their general competence, and raises reasonable doubt about the state of the rest of the data. The fact that they won’t release their code just reinforces the doubt.

        ****************
        I really think that a minor hack of the old ELIZA bot could do a better job here than Smith.
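The internal-consistency check at issue, that a day's reported maximum must not be below its minimum, is trivial to mechanise, which is what makes its failure so striking. A minimal sketch (the record layout, field order, and function name are assumptions for illustration, not the BOM's actual schema or code):

```python
def find_min_max_inversions(records):
    """Flag any daily record whose reported minimum exceeds its maximum.

    records is a list of (date, tmin, tmax) tuples; missing values are None.
    Returns the offending records, i.e. what an internal-consistency pass
    of the kind CTR_049 describes should catch and refer for investigation.
    """
    return [
        r for r in records
        if r[1] is not None and r[2] is not None and r[1] > r[2]
    ]

sample = [
    ("1933-01-01", 21.5, 34.0),   # consistent
    ("1933-01-02", 25.1, 24.3),   # min > max: should be flagged
    ("1933-01-03", None, 30.0),   # missing min: cannot be checked
]
flagged = find_min_max_inversions(sample)
assert [r[0] for r in flagged] == ["1933-01-02"]
```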

        • #
          ExWarmist

          Despite a half-dozen people trying to explain it to him/it, A.S. still hasn’t grasped that the issue is not HOW errors might have gotten into the raw data — the issue is that the BOM specifically claims that the Tmin > Tmax error is checked for, flagged, and corrected; Yet there are MORE such errors in the “checked” data than in the raw data. As you pointed out, this doesn’t give one a warm feeling about their general competence, and raises reasonable doubt about the state of the rest of the data. The fact that they won’t release their code just reinforces the doubt.

          Well said BobC – this is the nub of the argument.

          The fact that these errors could be found so easily demonstrates beyond any reasonable doubt that,

            [1] The BOM did not follow its own procedure, OR,

            [2] The BOM incorrectly implemented its own procedure.

            Either way, the error was not discovered in any quality-control audit (presuming one was held at all) after the data homogenisation/adjustment process.

            Once again the Warmist position is revealed to rest on data of questionable provenance.

          And warmists expect us to accept such a lack of rigour in silence.

        • #
          Adam Smith

          You aren’t the only one, Heywood. On the other hand, “Adam Smith” demonstrates his mental disability every time he responds. For example, the reasonable conclusion you make:

          The fact you had to resort to calling me mentally disabled demonstrates that you have lost this debate.

          But it is OK, I will be willing to debate other issues with you in the future.

          (No he SHOWED why he can justifiably insult you otherwise we snip baseless unsupported insults) CTS

          • #
            Adam Smith

            (No he SHOWED why he can justifiably insult you otherwise we snip baseless unsupported insults)

            Well this moderation makes no sense because I’m not “mentally disabled”, so it is by definition a “baseless unsupported insult”.

            (Thank you for your absurd and clueless reply. Thank you for supporting my moderating decision) CTS

          • #
            BobC

            Well this moderation makes no sense because I’m not “mentally disabled”,

            That would depend on whether a complete inability to follow logical arguments is considered a “disability”. I think that it is, but perhaps it shouldn’t be so considered if there is a demonstrated ability to improve. I would be glad to discuss this point with you.

    • #
      Jesus saves

      I find it really interesting and informative

      [SNIP. You really shouldn’t tempt me Jesus. You get snipped when you bring nothing to the debate. I tried to objectively consider your post…….still nothing] ED

  • #

    This has got me wondering what else is being Acorned …
    Australian Baseline Sea Level Monitoring Project perhaps?
    I need to keep an eye on two or three tide gauges up here in NQ for the purpose of checking habitable floor and trench invert levels against AHD and HAT respectively. Haven’t kept any older records unfortunately. Didn’t see the need as the changes were minimal and not always positive. Then I noticed discrepancies creeping in, eg tide predictions issued by Maritime Safety Queensland factoring in a 1.2mm rise, well exceeded by ABSLMP’s 3.8 and 4.8 for the two BoM tide gauges in Queensland. The only coincident site appears to be Rosslyn Bay: MSQ’s short-term sea level rise 2011 = 2.425mm/yr (?!), BoM’s = 3.8mm/yr.
    Ok – there are differences between tide gauge equipment (radar, laser ranging etc) and purpose (eg keel clearance, storm surge calculations, sea-level change). MSQ lists 44 tide gauges: Ports – 4, MSQ – 2, AMSA – 5, DERM – 24, NTC (BoM) – 2. Have any comparisons been done?

    troppo19 at gmail.com
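For context on where rates like 1.2, 2.425 or 3.8 mm/yr come from: a tide-gauge trend is usually just an ordinary least-squares slope fitted to the monthly or annual mean sea levels. A minimal sketch with synthetic data (the function name and the made-up record are illustrative only):

```python
def trend_mm_per_year(years, levels_mm):
    """Ordinary least-squares slope in mm/yr over a gauge record."""
    n = len(years)
    mean_y = sum(years) / n
    mean_l = sum(levels_mm) / n
    num = sum((y - mean_y) * (lv - mean_l) for y, lv in zip(years, levels_mm))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Synthetic gauge record rising exactly 3.8 mm/yr from a 7000 mm datum:
years = list(range(1992, 2012))
levels = [7000.0 + 3.8 * (y - 1992) for y in years]
assert abs(trend_mm_per_year(years, levels) - 3.8) < 1e-9
```

Differences between agencies' quoted rates can then come from the fitting window, datum corrections, or equipment, not just the raw water levels.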

  • #
    Adam Smith

    BobC
    July 16, 2012 at 11:24 pm

    I don’t know how BOM defines “wrong seeming” data…

    Well why not read their document? If you read the area I quoted you would see this

    The second type of error is particularly important in climate extremes as some of the most extreme events will trigger some checks typically used for detection of errors in data quality control systems (e.g. a check against the highest or lowest values on record), and incorrectly rejecting valid extremes will create a bias in the analysis of extremes.

    The Bureau of Meteorology currently uses a computerised Quality Monitoring System (QMS) for climate data. This system subjects data to a series of checks, including checks for internal consistency, spatial consistency and limit/range checks, with data flagged by those checks then being referred to quality-control staff for further investigation, before being either accepted or flagged as suspect. The QMS has been used operationally in its present form since 2008.

    That quote is from page 30

    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    Of course, to know if BOM is actually doing something as stupid as this, you would have to look at their “correction” code — something

    What on earth are you talking about? If you read the above document you can see in exhaustive detail how the data was homogenised. They go through every step and explain why each modification was made!

    You are basically saying that because you aren’t aware of something then it didn’t happen!

    • #

      http://joannenova.com.au/2012/07/boms-new-data-set-acorn-so-bad-it-should-be-withdrawn-954-min-temps-larger-than-the-max/#comment-1087244

      Richard Holle
      July 16, 2012 at 11:34 pm

      The problem is the original raw data passes the min-lower-than-max test; it is the new “homogenized” data set that is fuc**d up. What was originally recorded in the log books years ago passed the smell test at the time the readings were entered into the digital data base; they have since been bugg**ed up.

      I will learn to paste relevant info later in the thread, as I get used to the thread bombing tactics here.

      • #
        Adam Smith

            The problem is the original raw data passes the min-lower-than-max test; it is the new “homogenized” data set that is fuc**d up

        Excuse me? If you read Post #22 you will see that it is possible for mins to exceed maxes in some very rare circumstances. For a period of over 30 years, the minimum temp was only taken once every 33 hours whereas the max was taken once every 24 hours.

        If the max was taken followed soon after by a warm spell that led to a hot night, it is quite possible that by 9 AM the following day it could be warmer than it was at 3 PM on the previous day. Of course mornings are cooler than afternoons, but this isn’t always the case, and it isn’t hard to think of a scenario where this could occasionally occur when you are talking about millions of data points.

            So I am astonished that the raw data doesn’t feature any higher mins than maxes when that IS possible thanks to the BOM’s old measurement methodology, which was used in some places into the early 1970s.

        But I apologise if this is all confusing, I appreciate that a complex truth is harder to understand than simplistic nonsense.
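The asymmetric-window scenario being argued over can be made concrete with a toy hourly series. The reset times follow the thread's paraphrase (daytime max window 09:00 to 15:00; min reset mid-afternoon and read the next morning), and every number is invented: a mild day followed by a warm change arriving at 15:00 yields a recorded "min" above the recorded "max" with no data error at all.

```python
# Toy hourly temperatures (deg C); index 0 = midnight on day 1.
temps = [12.0] * 48
for h in range(9, 15):        # mild day 1: daytime peak of 16 C
    temps[h] = 16.0
for h in range(15, 34):       # warm air mass arrives at 15:00 and
    temps[h] = 19.0           # holds 19 C through 09:00 on day 2

# Max attributed to day 1: read over the 09:00-15:00 daytime window.
day1_max = max(temps[9:15])
# Min attributed to day 1: thermometer reset at 15:00 and read at
# 09:00 the next morning, so it spans the (warm) night instead of
# day 1's cool daytime.
day1_min = min(temps[15:34])

assert day1_max == 16.0
assert day1_min == 19.0
# A legitimate min > max, purely from the asymmetric windows:
assert day1_min > day1_max
```

Note this only explains inversions in the raw record; it says nothing about why post-processing would add more of them.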

        • #

          It is not confusing to me at all. I have bought the raw data set, and prior to the instigation of the HQ data set process there were only a couple of examples of mins higher than the max.
          I performed QA tests of the data before I tabled the values to generate my forecast for Australia, and it shows the step change in the errors. I have not done the re-evaluation of the newer ACORN data set as did the mentioned blogger who is the source of this thread.

          • #
            Adam Smith

            It is not confusing to me at all I have bought the raw data set, and prior to the instigation of the HQ data set process there were only a couple of examples of mins higher than the max.

            What? First you say that the raw data passes the min / max test, but now you are saying it doesn’t.

            I’ll just leave you alone so you can argue against yourself.

          • #
            Robert

            So the raw data passed with minor discrepancies then AFTER the HQ data set process there were more discrepancies?

            Apparently the Smith creature can’t grasp that, or does but rather than acknowledge it wants to redirect and obfuscate things with the inane comment he followed with.

            I’ll just leave you alone so you can argue against yourself.

            This looks like an example of the Smith creature’s claim that everyone is resorting to name calling or insults because they have no argument. So apparently the Smith creature has no argument and is guilty of that which he accuses everyone else of. It is not as though that will surprise anyone.

        • #
          BobC

          Adam Smith
          July 17, 2012 at 12:21 am

          “The problem is the original raw data passes the min lower than max test, it is the new “homogenized* data set that is a fuc**rd up”

          Excuse me? If you read Post #22 you will see that it is possible for mins to exceed maxes in some very rare circumstances. For a period of over 30 years, the minimum temp was only taken once every 33 hours whereas the max was taken once every 24 hours.

          First: Your reply has no relevance to the fact that the post-processing increased the number of errors. Bad measurement methodology can account for some max/min inversions in the raw data — but only bad (or poorly implemented) algorithms can account for creating more of these inversions in the post-processed data. This is a logical point, so we’re not surprised you missed it.

          Second: If (for some data) the maximum temperature readings were spaced by 24 hours, and the minimum temperature readings were spaced 33 hours, then they are different data sets and cannot be directly merged. It might be valid to compare sufficiently long running averages of both. If that is what was done, you would expect there would be fewer examples of max/min inversion, not more.

          The fact that the BOM’s post processing increases the number of inversions is prima facie evidence that their processed data set is less likely to reflect reality than the raw data.

          Of course, the government-funded scientists promoting AGW have been using ‘clever’ post-processing to make the case for ‘unprecedented’ warming for a long time now — Mann’s “Hockey Stick”, based on an algorithm that produced ‘hockey stick’ graphs from random ‘data’; NASA’s manipulation of the ARGOS data (referenced here); Hansen’s continual re-adjustment (with no valid explanation) of 100 year old thermometer readings, etc.

          • #
            KinkyKeith

            Top comment BobC

            Computerised “repairs” to the data set are ridiculous.

            The only computerised processing should be The Statistical Analysis; to actually change the data is obscene.

            The only data changes that might be valid are of two types:

            1. To swap the two values on the assumption that the reader had simply put the night reading in the day column. Maybe.

            2. To completely eliminate those figures as “Faulty” because they are incorrect. There’s no argument about that unless we have had a lot of prolonged eclipses.

            KK

          • #
            KinkyKeith

            Forgot to add that all data should be included as they are.

            The incorrect or stupid ones will then show up as being obviously incorrect.

            These errors in statistics are called OUTLIERS.

          • #
            Mark D.

            After reading the description of the “homogenization” process I wonder how many times one site’s readings were adjusted and then that new adjusted temperature was used as a “neighboring site”, compounding any assumed values.

            One thing seems clear to me; the use of the historic records will at best only provide interesting results. There is ample opportunity for unintended bias in the assumptions made during the homogenization process and also ample opportunity for intentional bias.

            Others have touched on this but to me it is suspect when trends “found” in homogenized temperature data seem to be ALWAYS upward. If it is true that the results are 40% higher than raw data, then this alone should cause neutral researchers to question the methods used and redouble efforts to detect human influence.

            In the end though, the results should only be used to provide clues about how we might redesign future sensing networks. The atmosphere is too large to assume that surface temperatures are enough or even a reasonable way to measure climate change.
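Mark D.'s worry about adjusted values being reused as neighbours can be illustrated with a deliberately simplified nudge-toward-the-neighbour-mean pass. This is a toy model, not the BOM's actual percentile-matching algorithm; the station names, weight, and both function names are invented for the sketch:

```python
def adjust_sequential(stations, weight=0.5):
    """Toy homogenisation pass where stations are processed in order and
    already-adjusted values are reused as 'neighbours' for later stations,
    the compounding scenario raised above."""
    vals = dict(stations)
    order = sorted(vals)
    for name in order:
        neighbours = [vals[n] for n in order if n != name]
        vals[name] = (1 - weight) * vals[name] + weight * (
            sum(neighbours) / len(neighbours))
    return vals

def adjust_from_raw(stations, weight=0.5):
    """Same nudge, but every neighbour mean is computed from raw values."""
    raw = dict(stations)
    return {
        name: (1 - weight) * raw[name] + weight * (
            sum(v for n, v in raw.items() if n != name) / (len(raw) - 1))
        for name in raw
    }

raw = {"A": 20.0, "B": 22.0, "C": 30.0}
seq = adjust_sequential(raw)
ind = adjust_from_raw(raw)

# The first-processed station is treated identically...
assert seq["A"] == ind["A"]
# ...but later stations diverge: order-dependent reuse of adjusted
# values compounds the adjustment.
assert seq != ind
```

The point of the sketch is only that the two schemes give different answers, so without the code one cannot tell which scheme was actually applied.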

        • #
          cohenite

          ‘Adam’ is the future under Finkelstein; a sausage factory of Orwellian contradictions and oxymorons: maximums are minimums, minimums are maximums, hotter is colder, colder is hotter.

          The fact is the max<min issue is unforgivable; it is symptomatic of the lack of scientific rigour by the BOM; that 'Adam' defends this shows that he is, at base, prepared to lie to defend his ideology.

          Apart from this not once has 'Adam' addressed the trend issue caused by the BOM's adjustments. I bet he would claim the BOM temperature record is a true and accurate one.

    • #
      BobC

      Adam Smith
      July 16, 2012 at 11:41 pm

      “BobC
      July 16, 2012 at 11:24 pm

      I don’t know how BOM defines “wrong seeming” data…”

      Well why not read their document? If you read the area I quoted you would see this


      The Bureau of Meteorology currently uses a computerised Quality Monitoring System (QMS)for climate data. This system subjects data to a series of checks, including checks for internal consistency, spatial consistency and limit/range checks, with data flagged by those checks then being referred to quality-control staff for further investigation, before being either accepted or flagged as suspect. The QMS has been used operationally in its present form since 2008.

      If you think that that is a description of an algorithm, Adam, then you obviously have never successfully programmed a computer. This is PR: “We do really complex, great stuff!” — not anything that could be used to recreate and test the code. Do you really not understand the difference?

      What on earth are you talking about? If you read the above document you can see in exhaustive detail how the data was homogenised, They go through every step and explain why each modification was made!

      No, I guess you don’t. The claims they make for their (unreleased) code are not the same as what the code does. This is not my opinion, it is a fact — if you disagree, prove me wrong by writing out the ‘specified’ code.

      What we do know is that their code increases the number of max/min inversion errors w.r.t. the raw data. There is no way that can be spun as anything except incorrect processing. (And, since you think their description is so complete, why don’t you tell us what part of their processing is responsible?)

      You are basically saying that because you aren’t aware of something then it didn’t happen!

      Now what on Earth are you talking about? I’m talking about demonstrated bad data processing by the BOM — we know it’s bad because it increases the number of errors. We don’t know what other errors they may introduce, since they won’t release their code.

  • #
    Robert

    You are basically saying that because you aren’t aware of something then it didn’t happen!

    Actually that is one of your methods of operation as can be documented by your numerous spam comments throughout this site.

  • #
    Sonny

    That’s it! I’m never trusting the BOM again!
    I bet that their data for global temperatures showing that there has been no global warming for over a decade (despite highest increases in CO2 in modern times) is wrong as well.

    http://www.bom.gov.au/cgi-bin/climate/change/global/timeseries.cgi

    I mean, if it’s true that there has been no global warming for over a decade, why haven’t we heard about this in the MSM?

  • #

    It is the HQ and ACORN data sets that are having problems, because of the sloppy way the 40-nearest-neighbour homogenization process has washed out the cooler stations, giving stations in the ACORN and HQ data sets a warming trend that was not there to begin with.

    The fast and sloppy method used to crank out the ACORN set, to avoid the investigation into the problems of the HQ data set, is what is responsible for the min>max problems that are the context of this thread. I will not be arguing with myself; I have the raw data to look at.

  • #
    Adam Smith

    [It seems everybody is doing the same thing why not the BOM?]
    What’s up with the website that your name links to?

    I’ve put in some dates for next summer and it says the min for where I live is going to be around 0 and the max around 50 Fahrenheit.

    If that comes true I’ll eat my hat!

    To put this in perspective, we are in the middle of winter and the max is usually between 15 and 17 Celsius; that’s around 60 Fahrenheit at the low end! If you think the temp in summer is going to go down, you don’t understand Australian summers!

    In fact I found it quite amusing when some U.S. states last week were astonished about having days over 105 Fahrenheit. We get at least 7 days like that nearly every summer. A few years ago we had, from memory, 11 days in a row that reached at least 105!

  • #
    Slabadang

    ABC!

    Of course the biased ABC will do a report on this? (sarc unnecessary)

  • #
    Adam Smith

    There are two temperature scales for our international visitors: the upper set of numbers is Centigrade and the bottom Fahrenheit. The crossover at 0 and 32 might have been a clue, but I do need to label better, thanks for the reminder.

    Well whatever the scale is, I would put the chance of the minimum temp being 0 Celsius in summer where I live at about 0.0000000001%. You don’t seem to understand exactly how hot it gets in some parts of Australia. It is rare for a mid summer night to get below 15 let alone below 3 or 4, which only rarely happens in winter.

    The coldest it has been this winter near me was 1.5 Celsius about a week ago.

    If you think those sorts of temps are going to be replicated in summer you are dreaming. In fact in the middle of summer it isn’t uncommon for the minimum temp to hover around the mid 20s, and some horrible nights will stick in the high 20s or low 30s, so if you want to sleep you have to leave the A/C on all night.

    But the charts are quite pretty and the webpage is well laid out.

    • #

      The cold spots you see on the maps are for the mountains to your east; the UHI for the area around you in the centre of Adelaide will be much warmer. One of the traits of the maps is the high definition of the temperature contours due to elevation and UHI changes over short distances, such as in your case there close to Victoria Square.

      • #
        Rereke Whakaaro

        Well done Richard, I tip my hat to you.

        Do you also know Adam Smith’s real name by any chance? 🙂

        • #

          Not yet, but there were two machines in use at the time of the above 3:58 interchange: one a Mac with screen resolution 1280x800 running Chrome ver 20.0.1132.47, and a second a PC running Windows with a screen resolution of 1680x1050, also the Chrome browser but ver 20.0.1132.57.

          Refinement of the ISPN of the modem access suggests a location just east of Victoria Square, where there are a couple of uni buildings, coordinates roughly -34.927844,138.601136 on Google Maps, but it could just be the server location rather than the student’s location.

          • #
            KinkyKeith

            Great stuff

            The wonders of modern technology.

            And skill.

            And perseverance.

            🙂

          • #

            If Jo adds DNS tracking stat retrieval to her server software it will give you the total number of Adam Smiths; it would then be a simple process to query their servers, desktops, or laptops for social media network connections, Facebook accounts, friends, Twitter accounts etc. I have social media friends who could do it. I don’t think publishing the results is legal though.

            (There is only ONE Adam Smith in evidence and he is from Australia) CTS

          • #

            It’s not my blog, so this is just an expression of a preference. Let’s give Adam Smith(s) full freedom of expression, and, please, no police work, unless it’s really necessary and confidential to the mods.

            It’s not that Richard does not have a point, it’s just that I think we should err on the side of freedom. Just so people know I’m not saying this because I use an alias here, my name is Robert Townshend.

            Of course, it’s not my blog, and I know that a busy and very successful site like this has to concern itself with traffic and other issues not apparent to me. I’ve had to pull up “Adam Smith” on the issue of shredding what I had written into many quotes, so argument becomes a confusion of I-said-then-you-said quotes and re-quotes. But I’d rather live with the stunts and spin than censor, even if I’m dealing with a tag-team.

            Lastly, I think censorship is for the Left, and the New Left, the GetUp/Green urban elites, are proving worse than the old style secret-ballot Laborites in this regard.

            So, if possible, I’d like full freedom of expression for Adam Smith – and even for a tag-team of Adam Smiths.


          • #

            I am not for censorship or banning either; I just hate bandwidth waste and meaningless inflation of thread size. Clarity should be a goal, toward increased meaningful discussion that produces positive results. Tools are meant to be used effectively, not wasted, left to rust, or BSed to death.


          • #
            KinkyKeith

            Hi Mosomo

            Undoubtedly in absolute terms you have a point.

            Censorship is not good.

            On the other hand, that concern is mainly about governments and the total denial of alternative means of expression.

            If team Smith was banned from here they could go to any one of a number of blogs with names like “Stop Carbon Pollution Now” and get their message across.

            It has been very obvious that the whole point of team Smith is not to contribute to the discussion but to disrupt it and draw the energy away from useful progress.

            To highlight this, at times the term Spacer has been inserted to remind people that maybe some thought should be given to the benefits or otherwise of having TS present here. They are not here to learn.

            They are not here to put a point. They are simply reactive to whatever is put up by others to soak up their time and energy.

            Just discussion.

            KK


          • #

            I am just new here, so things you have put up with for a long time are sharp stimuli to my patience, or lack thereof, for thread bombing. Just saying, in this day and age of technology nobody is hidden when online. Even an old retired production floor machinist and wheat farmer from Kansas like me can access a lot of things just by knowing where and how to look. I'll give it a rest now that my viewpoint is understood.


          • #
            crakar24

            TS is here for nefarious reasons and on that basis alone should be banned; however, all the other warmbots that frequent this site fall into the category of misguided delusionists and should therefore have free and unfettered access.


          • #
            KinkyKeith

            hi Richard

            Keith from Newcastle Australia.

            Kansas?

            Your point of view is great because we can get a little too caught up in doo-gooderisms for our own health at times.

            The Smiths have played their presence here as a game; they know the demands they have imposed on Jo and the rest of us here.

            Any sensible person would understand that when you go out of your way to annoy someone you'd better be careful.

            I think that exposing their location is useful because it tells them they can be outed. At least people working with them could possibly ID them with the computer info you have already published.

            Why am I not sure whether to believe you when you say you don’t have names???

            Ha ha

            KK


          • #
            Adam Smith

            Not yet, but there were two machines being used at the time of the above 3:58 interchange: one a Mac with a screen resolution of 1280x800 running Chrome ver 20.0.1132.47, and a second a PC running Windows with a screen resolution of 1680x1050, also using the Chrome browser but ver 20.0.1132.57.

            Wow, Richard has discovered that I own more than 1 computer!

            If Jo adds DNS tracking stat retrieval to her server software it will give you the total number of Adam Smiths,

            There’s one, he just has more than one computer.

            You seem to know a bit about computers. I suggest you put this skills to use by correcting your weather maps so they don’t claim that it is going to be 0 degrees Celsius in the Adelaide hills next summer.


      • #
        Adam Smith

        The cold spots you see on the maps are for the mountains to your east, the UHI for the area around you in the center of Adelaide will be much warmer, one of the traits of the maps will be the high definition of the temperature contours due to elevation and UHI changes over short distances,

        Err what? It doesn’t get down to 0 celsius in the hills in summer!!!!

        The highest peak is Mount Lofty. The lowest temp this winter was 1.6 Celsius on June 10th:
        http://www.weatherzone.com.au/station.jsp?lt=site&lc=23842&list=ds&of=of_a&ot=ot_a&mm=06&yyyy=2012&sub=go

        Your assertion that it is going to get down to 0 Celsius in the Adelaide hills in Summer next year is just craziness. I give you that it is often a bit cooler in the hills, but zero degree minimum in summer! That’s very, very, very, very, very, very, very, very, very, very unlikely.

        The lowest minimum for Mount Lofty on record is 4.5 in 1996:
        http://www.bom.gov.au/climate/averages/tables/cw_023842_All.shtml

        such as in your case there close to the Victoria Square.

        My local weather station ain’t Victoria Square mate!


        • #
          crakar24

          What the hell would you know, Smith? Please don't tell me you and I live in the same town!!!!!!!!!!!!!

          There is nothing in the quote you mention which suggests the person you quoted made any such claims as to the temp during “SUMMER”. I suggest you sack “that Smith” and get a new one. If you must know, the Adelaide hills are routinely colder than the city at any time during the year.


          • #
            Adam Smith

            There is nothing in the quote you mention which suggests the person you quoted made any such claims as to the temp during “SUMMER”

            Keep up mate! We are discussing THIS MAP:
            http://www.aerology.com/?location=Australia&mapType=Tmin&date=1%2F1%2F2013

            It is Mr Holle’s PREDICTION of what the MINIMUM temperature is going to be all over Australia on 1 January 2013.

            Depending on the shade of green you are looking at, Mr Holle thinks that some areas in S.A., including some in the Adelaide metro area, are going to get down to as low as -5 celsius with many others around 0 during summer.

            It is my suggestion, to Mr Holle and yourself, that this is an extremely unlikely thing to happen in the real world.

            But the predictions get even weirder than that. Mr Holle’s maps are predicting that Adelaide will experience some light snow on 1/1/13 too. See for yourself here:
            http://www.aerology.com/home?location=Australia&mapType=Snwd&date=01/01/2013

            I’m not a climate science expert, but I do know that it very rarely snows anywhere in S.A., let alone in Adelaide, let alone in Summer.

            That same map also suggests there is going to be a violent summer snow storm in the Port Macquarie area of NSW, so everyone should stay well away from that area on new year’s day.

            Regarding today’s weather, Mr Holle’s map predicts the temperature in Adelaide today will reach the low to mid 30s:
            http://www.aerology.com/?location=Australia&mapType=Tmax&date=7%2F17%2F2012

            It is currently 17, but there’s still time for it to warm up of course, so I am not discounting the possibility of it reaching, perhaps 32.7 by 4 pm or so.

            If you must know, the Adelaide hills are routinely colder than the city at any time during the year.

            WOW! Did you plagiarise my post? You know where I wrote:

            I give you that it is often a bit cooler in the hills, but zero degree minimum in summer! That’s very, very, very, very, very, very, very, very, very, very unlikely.

            If you [SNIP] ED


          • #
            crakar24

            This is a different Smith than the one who posted before. (Snipped out the comment over a person’s mental health status as this is going too far and not useful to the topic) CTS

            Nope, plagiarism is where I steal one of your original ideas; you merely made a statement based on an observation. I can reword that as many times as I like.

            As for the rest of Mr Holle’s predictions, I must agree with you Smith *THUMP* (falls off chair); I doubt very much we will ever get snow in Adelaide, be it summer or winter. We did get snow in my town one year, well it wasn’t really snow I suppose, just a light dusting of white stuff. It can get quite cold in the mornings even in summer, but not to the extent you believe Holle claims (can’t look at his website from here, which led to my initial confusion).

            Cheers


        • #
          KinkyKeith

          Just a point of order before GeeAye gets to it.

          He is back supervising spelling and grammar so be careful.

          Should be “these” skills or “that” skill.

          KK


          • #
            Gee Aye

            hey… I am not the one who gets “ANGRY” about neighboUr.

            And I also agree with you and Mosomoso that publishing details about people without their consent and promoting the possibility that you could link to Facebook friends etc is a very poor way to promote open discussion.


      • #
        Adam Smith

        Hi Richard

        I note your map for new year’s day shows that it will snow in Adelaide:
        http://www.aerology.com/?location=Australia&mapType=Snwd&date=1%2F1%2F2013

        Can you tell me which day next year will feature the most snow in South Australia?

        My understanding is it hasn’t snowed in the Adelaide metro region (note, the Adelaide Hills where it does snow once or twice a year aren’t in the metro area) since the early 1950s. And that only lasted for two days.


  • #
    crakar24

    I am surprised Darwin is not on that list and not due to errors. During the wet season it can be 38C at noon and 38C at midnight.


  • #
    • #
      Angry

      That has to be a JOKE surely !!

      ELECTION NOW !!!


      • #

        No, just standard PR and advertising MO: state the opposite of the actuality of the situation to counter the bad reality. Otherwise, why would they bother to promote the PR position if there were no need to try to contain negative reality exposure?

        Also, from an intensively detailed explanation of the symptoms of a problem can be found the steps to resolve the problem. With CAGW they have not done any detailed work that shows they can PROVE what work can/needs to be done.


  • #
    handjive

    The BoM is at it again:

    July 11, 2012

    The Bureau of Meteorology has predicted a fresh El Nino pattern will set in in the latter stages of winter or during spring, bringing with it warmer and drier conditions than the territory has experienced in the past two years.

    Just a reminder of the BoM’s last effort:

    February 17, 2009

    VICTORIA is likely to come under the influence of another El Nino within the next three years, exacerbating the drought and the likelihood of bushfires, a senior Bureau of Meteorology climate scientist says.

    2010
    2011
    2012

    And here is the result & cost of failure from the BoM and their ‘junk science CO2-forced predictions’.

    2012: What is a farmer to do?

    VICTORIAN grain and lamb producer Murray Horne has his fingers crossed that the innocent-sounding “little boy” weather pattern of El Nino isn’t about to haunt his family farm in Victoria’s western district any time soon.

    But he is feeling nervous and wary after last week’s predictions by the Bureau of Meteorology that by August the eastern half of Australia will again be in the grip of the drier weather patterns the grim El Nino brings.

    As yet, the Cressy farmer has not seen the local tell-tale signs of dry conditions ahead that his fifth-generation farming family has come to know only too well – bitter winter frosts and westerly weather fronts with no rain.

    “I don’t know what to think now, except I do know from living here all my life that the warning signs are when we get dry fronts coming through and bad frosts in June – and we haven’t seen either of those yet.”

    Indeed, after failed BoM claims like “Perhaps we should call it (drought) our new climate” (2008), how could you trust the BoM?


  • #
    val majkus

    David Archibald’s Sydney speech is now online:

    http://newsweekly.com.au/article.php?id=5257


  • #
    • #
      Pat

      Indeed. I recall sometime in 2000/2001 there was a stuff.co.nz article, by someone I don’t recall, claiming there had been no warming in NZ since 1941. It didn’t last long; from memory, I could not find it 20 minutes after reading it, which is a shame.


  • #
    Adam Smith

    crakar24
    July 17, 2012 at 3:24 pm

    This is a different Smith than the one who posted before, either that or you are suffering from some sort of Bi polar disease (keep taking the pills as most people would like to talk to the pill popping version of Smith).

    If you bothered to read the moderation in post 37, 1,1,1,2 you would see a moderator point out that there is only ONE Adam Smith.

    But nothing makes me laugh more than posts that assert that there must be more than one of me.

    The other points you make here are just abuse. I have not abused you, but the fact you need to revert to abusing me suggests you realise that you’ve lost this debate.

    Nope, plagiarism is where I steal one of your original ideas; you merely made a statement based on an observation. I can reword that as many times as I like.

    Well sure you can re-word it as many times as you like, but it doesn’t get around the fact that you simply re-stated something I had already said, which made your point banal.

    As for the rest of Mr Holle’s predictions, I must agree with you Smith *THUMP* (falls off chair); I doubt very much we will ever get snow in Adelaide, be it summer or winter.

    Well, “ever” is a very long time, so I don’t know if I can go along with you there.

    We did get snow in my town one year, well it wasn’t really snow I suppose, just a light dusting of white stuff.

    I think weather boffins call this “graupel” which does count as a type of snow: http://en.wikipedia.org/wiki/Graupel

    It can get quite cold in the mornings even in summer, but not to the extent you believe Holle claims (can’t look at his website from here, which led to my initial confusion).

    I’m not completely discounting the possibility that Mr Holle’s map will turn out to be an accurate representation of minimum temps in Adelaide on 1/1/13.

    I just think it is very, very, very unlikely.

    A true sceptic always leaves open the possibility, however small it may be, that they may turn out to be wrong.

    The weather for today doesn’t seem to be warming up much, so I’m highly doubtful that it will reach the low to mid 30s as Mr Holle’s map suggests it will.


    • #
      crakar24

      You see, this is what I am talking about: you throw out abuse here like it is going out of fashion and then claim to be the victim when it comes back just as fast, all the time not making much sense at all. Then in another comment you sound coherent, and at times you almost convince me of the point you are trying to make, but then in the next comment you produce another version of yourself which differs from the previous again.

      Consistency is not your strong point. When Jo says you and Smith Vers I through Smith Vers XV are the same, I assume Jo looks at your IP address. I suggest there are ways one could have more than one PC connected and assume the same IP; either that, or you really need some serious heavy duty medication to rectify your situation.

      (Please stop this line of discussion and get back on topic) CTS


      • #
        Adam Smith

        You see, this is what I am talking about: you throw out abuse here like it is going out of fashion and then claim to be the victim when it comes back just as fast, all the time not making much sense at all.

        Absolute nonsense. You yourself just accused me of suffering bipolar disorder mate! If that isn’t abuse and playing the person instead of debating the issue then what is?

        Whenever someone here attacks me it just proves that I’ve won the debate and that the person doing the attacking is extremely sensitive because they realise that their position is wrong, or at least partly flawed.

        So attack me all you want, it simply proves that you’re losing the debate.

        Consistency is not your strong point. When Jo says you and Smith Vers I through Smith Vers XV are the same, I assume Jo looks at your IP address. I suggest there are ways one could have more than one PC connected and assume the same IP; either that, or you really need some serious heavy duty medication to rectify your situation.

        Sorry mate, but you’re just embarrassing yourself. There’s only one person! And it is quite likely that my posts show up from a variety of IP addresses because I access the internet through a range of internet connections, including at home, at my different workplaces, and even when using public wifi.

        But anyway, the more you talk about me the more I know you’ve run out of constructive things to say.

        (please let it go and get back on topic… if you can) CTS


        • #
          crakar24

          Adam,

          Not too long ago I made a comment where I said I actually agreed with you. I also said that at times you make coherent points, to the point that you come close to convincing me that your position is correct. I also said that I could not see Holle’s website and this led to my initial confusion.

          I also said that “This Smith” is the one that people will readily engage in debate on the finer points; however, you are not consistent, and you must either have bipolar disorder or there is more than one of you.

          In other words, the vast majority of my comment was me saying nice things about you, but… you, as per standard practice, take a selective cut and paste about the bipolar bit and ignore the rest. This is not the first time you have done this, and this is where your problem lies.

          Remember a little while ago I said quality is more important than quantity? Your comments are usually full of abuse, so please do not cry wolf when you get it in return. Another tip: read all of the comment, reply to all of the comment, and explain which bits you agree with and which bits you don’t. You cannot simply quote out of context and expect people to engage you in a nice manner, as it irritates people when you do that.

          More tips to follow

          Cheers

          Crakar


          • #
            Adam Smith

            I also said that “This Smith” is the one that people will readily engage in debate on the finer points; however, you are not consistent, and you must either have bipolar disorder or there is more than one of you.

            There you go again mate, reverting to abuse and thus proving that I’ve won the debate.


          • #
            crakar24

            You do realise that cutting and pasting the bits you like and rejecting the bits you don’t, to change the context of what someone writes, is a form of abuse, a crime which you commit on a regular basis.

            (If you and Adam continue to carry on this way off topic absurdity anymore it will be SNIPPED!) CTS


  • #
    Ross James

    Adam Smith,

    I suspected all along that this was a beat-up about Australian temperatures. I followed your links to get an unbiased understanding of the BOM’s methodology and the sources derived.

    Ross J.


  • #

    […] This is a blindingly obvious type of error which should not have escaped quality control. It throws serious doubt on the whole ACORN-SAT project. In my opinion, these violations indicate that the entire ACORN-SAT database is suspect, and should be withdrawn for further testing. (source) […]
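The internal-consistency check behind those findings is simple to state: for every station-day, the recorded maximum must not be below the recorded minimum. A minimal sketch of such a scan (the row layout here is hypothetical, not the actual ACORN-SAT file format):

```python
# Scan daily records and flag any day where the recorded maximum
# temperature is below the recorded minimum -- the class of
# violation reported against ACORN-SAT. Row layout is made up.
def max_below_min_days(rows):
    """rows: iterable of (date, tmin, tmax); returns offending dates."""
    return [date for date, tmin, tmax in rows if tmax < tmin]

sample = [
    ("1912-01-01", 18.0, 31.5),
    ("1912-01-02", 22.4, 21.9),   # max < min: should be flagged
    ("1912-01-03", 15.1, 27.0),
]
print(max_below_min_days(sample))  # ['1912-01-02']
```

Run over a full station archive, a count of the returned dates is exactly the kind of tally (nearly 1000 violations) Thurstan reports.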


  • #
    Manfred

    AS: Your tolerance of error appears impressive. I wonder whether you apply the same forbearance to the captain of your B747 who decided the minima for an approach were actually the maxima, or possibly to your dentist who, when regrettably extracting a tooth, takes the adjacent one, explaining that as there are several teeth in the mouth only one wrong extraction is a tolerable error? In my book, no amount of obfuscation or explanation changes the meaning of Min and Max. Your defense of rationalisation amounts to little more than Orwellian doublespeak in my view.


  • #
    Adam Smith

    Joanne Nova
    July 17, 2012 at 4:48 pm
    Adam, it’s not scientific to find that data sets need adjustments which all increase the “trend” of the data.

    Where is the evidence that all of the homogenisation to the data increases the temps!?

    Take the weather station enclosure issue for example. Since the early enclosures were not sealed at the bottom they tended to record higher temperatures on warm days than later enclosures that have sealed bottoms. What is wrong with making an adjustment to the data to account for this fact?

    The effect of such a change means a relative reduction in readings for the early period when those bottomless enclosures were in use, but why is that an inherently bad thing to do?

    If you want to be scientific you have to account for these variables, but you seem to be saying that any adjustment that results in lower temps in older data and higher temps in more recent data is inherently biased, which is absurd, because clearly there are sometimes good reasons why reducing older temps needs to be performed!

    But the BOM also did the opposite! They tended to homogenise more recent data down as development started to encroach on existing weather stations. But even after this process, they still find an overall warming trend.

    Your “misinterpretation” of this point is becoming a bore. The raw trend could be anything, but the adjustments should have a random effect (both up and down) on the trend, and the BOM has even told us they are “neutral” when clearly they are not.

    What do you mean “clearly they are not” neutral? How do you know this?

    You seem to just be putting the cart before the horse. Since you don’t accept that there has been a warming trend, you are simply concluding that the homogenisation process is somehow flawed.

    You’ll only be happy if the data is massaged to the point that it doesn’t show any warming trend, at that point you will say the BOM’s homogenisation process is correct.

    That’s not scientific.


    • #
      KinkyKeith

      Warning

      SPACER ahead.

      Please use detour.


    • #
      cohenite

      This is tedious; you have been given the evidence that the ‘homogenisation’ was not neutral and increased the trend.

      You are merely an energetic troll.


  • #
    Pat

    Having firsthand experience working with NIWA and staff, if their “temperature data and programs” are anything like their “fish” database I’d be *very* suspicious of ANY output from it!


  • #
    crakar24

    Adam try and stay with me here OK,

    Take the weather station enclosure issue for example. Since the early enclosures were not sealed at the bottom they tended to record higher temperatures on warm days than later enclosures that have sealed bottoms. What is wrong with making an adjustment to the data to account for this fact?

    Let’s assume for a moment that the raw data for 30 years has shown no trend but an average of, say, 20.1C. We then change the enclosure and find the temp has dropped by 0.2C, so now, if left alone, the next 30 years of raw data will once again show no trend, at 19.9C, but there will be a step in the data. Now we can either adjust the first 30 years up or the second 30 years down, but even after adjustments there will still be no trend in either the raw data or the adjusted data.

    To put this another way: if you have created a trend from the adjustments then you have done something wrong. And another: an adjustment makes a change to the data from a moment in time, and forward from that point; it will not keep adjusting the data incrementally from that time on.

    Therefore, if there is no trend in the raw data then there cannot be a trend in the processed data; it is not possible for this to happen. All you can do is change the baseline of the raw data. Do you understand this concept now?
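The step-change argument above can be checked numerically. A minimal sketch with made-up numbers (not real station data): a flat series develops a 0.2 C step at an enclosure change, and a single constant correction from that point onward removes the step, shifting the baseline without creating a trend.

```python
# Made-up numbers: a flat 20.1 C series with small noise develops a
# -0.2 C step when the enclosure changes at year 30. A constant
# adjustment from the changepoint onward removes the step; it shifts
# the baseline but does not keep adding warming year after year.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(60)
raw = 20.1 + rng.normal(0.0, 0.05, 60)   # flat series, small noise
raw[30:] -= 0.2                          # step from the enclosure change

adjusted = raw.copy()
adjusted[30:] += 0.2                     # constant correction, not cumulative

slope_adjusted = np.polyfit(years, adjusted, 1)[0]  # degrees C per year
print(abs(slope_adjusted) < 0.01)  # True: no trend in the adjusted data
```

The mean of the adjusted series sits exactly 0.1 C above the stepped raw series (0.2 C applied to half the record), which is the baseline shift crakar24 describes.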


    • #
      Adam Smith

      Let’s assume for a moment that the raw data for 30 years has shown no trend but an average of, say, 20.1C. We then change the enclosure and find the temp has dropped by 0.2C, so now, if left alone, the next 30 years of raw data will once again show no trend, at 19.9C, but there will be a step in the data. Now we can either adjust the first 30 years up or the second 30 years down, but even after adjustments there will still be no trend in either the raw data or the adjusted data.

      Excuse me!? Why would you adjust the data for the first 30 years UP when you know the type of enclosure used at that weather station exaggerated warm temps UP!? The adjustment for that period should clearly be DOWN on SOME DAYS to account for the exaggerated warming effect of that enclosure!

      You don’t just do one or the other and say what you have done works out equal in the end!!!! You make adjustments only for good reasons!!! Adjusting values up when you know that sort of enclosure already tends to exaggerate values in that direction MAKES NO SENSE and is NOT a scientific approach to adjusting the data!

      To put this another way if you have created a trend from the adjustments then you have done something wrong,

      Mate, the trend is ‘latent’ in the data after you rationally homogenise it.

      You don’t do what you suggest and just add or subtract based on a whim. You need GOOD REASONS for changing those values in any way.

      After you carefully consider ALL the factors for homogenisation, if it turns out that it shows a warming trend then so be it.

      You seem to be saying that the only way you know if the homogenisation is right is if it DOESN’T reveal a trend of any sort. But this makes no sense.

      Therefore, if there is no trend in the raw data then there cannot be a trend in the processed data; it is not possible for this to happen. All you can do is change the baseline of the raw data. Do you understand this concept now?

      This makes no sense at all!

      Again you are starting with a conclusion “there cannot be a trend in the processed data” and you are saying that homogenisation is only correct if you end up with this pre-determined conclusion! That’s not scientific!

      What you do is sit down and consider all the variables about how the temperature record has changed, and you account for all those factors. It may mean some observations are increased while others are reduced; it may mean that some are just left alone. Then you apply it to the data.

      Then you look at the trend.

      It seems you have the complete opposite approach. You just asserted that if it turns out there is a particular trend apparent in the data that you couldn’t see in the raw numbers, then by definition the homogenisation is wrong.

      Well, that’s not scientific at all and is just bias.


      • #
        crakar24

        Oh for the love of God, Adam. I am trying to decide whether this is just a thing you do to amuse yourself, or whether you really are stupid and cannot grasp a concept.

        Let me try one more time. Let’s assume we have a station that shows, after 100 years of raw data, that the temps have been flat. Then the BOM adjusts for time of observation, changes in screens, and changes in thermometers, and now the processed data shows a positive trend but the raw data still shows a flat trend.

        Do you believe this positive trend is a “sign” of AGW or simply an artifact of BOM manipulation?

        Just answer this question as briefly as possible and then we can progress.


        • #
          Adam Smith

          Let me try one more time. Let’s assume we have a station that shows, after 100 years of raw data, that the temps have been flat,

          Well here is your problem mate! “The temps have been flat” is an analytical conclusion you are trying to draw from the raw data.

          But the problem is, YOU CAN’T ANALYSE THE RAW DATA WITHOUT HOMOGENISING IT FIRST! You have to account for all the changes that have happened at that particular station over 100 years!

          I am extremely sceptical that there is a 100 year old station that hasn’t experienced a single change that requires homogenisation of the data before it can be analysed.

          We know for a fact that the earliest enclosures didn’t cover the bottom, which meant they tended to exaggerate temp readings on warm days. Are you going to account for that in your analysis by homogenising the data first, or are you going to take your pseudo-scientific approach, ignore that factor, and analyse the data as-is anyway?

          then the BOM adjusts for time of observation, changes in screens, and changes in thermometers, and now the processed data shows a positive trend but the raw data still shows a flat trend.

          You don’t seem to understand this do you? The reason you homogenise the data is so you CAN MAKE SCIENTIFICALLY VALID CLAIMS about any trends in the data!

          You can’t just do an analysis of the raw data because it contains too many variables that need to be accounted for BEFORE you do any statistical analysis on the data!

          So your assertion that according to the raw data “the temps have been flat” (which I take to mean no warming or cooling trend) is SCIENTIFICALLY MEANINGLESS!


          • #
            Sonny


            Adam,
            The process you have described of homogenising data leaves it to the honesty and integrity of the staff at the BOM to do so in such a way as to not accidentally or intentionally introduce a warming bias into the data.

            Unfortunately from my experience I would sooner trust the devil with the inner workings


          • #
            Sonny

            Of my soul than a climastrologist with a temperature record. All they know how to do is cook the books and invent rationalizations to satisfy their religious adherents.

            If there is no warming trend in the data before it is adjusted then there is no significant warming. End of story.


          • #
            BobC

            Adam Smith
            July 17, 2012 at 6:37 pm · Reply

            The reason you homogenise the data is so you CAN MAKE SCIENTIFICALLY VALID CLAIMS about any trends in the data!

            You can’t just do an analysis of the raw data because it contains too many variables that need to be accounted for BEFORE you do any statistical analysis on the data!

            So your assertion that according to the raw data “the temps have been flat” (which I take to me no warming or cooling trend) is SCIENTIFICALLY MEANINGLESS!

            Indeed, this is a common belief among the government-funded climate science community. This group, however, has been justly criticized for their misuse of statistical methods (also see the “Hockey Stick” issue) and lack of communication with the statistical analysis community.

            Real statisticians strongly disagree that “homogenized” data is more scientifically significant than raw data. The short conclusion is that any homogenization technique increases the uncertainty in the data going forward — as opposed to making it more “accurate” as you imply — and if that uncertainty is correctly acknowledged and accounted for (which it never is, in government-funded climate science) it just makes the conclusions drawn from such homogenized data meaningless.

            In the end, there is only one way to validate a theory — demonstrate its predictive skill. Unfortunately for AGW, such attempted demonstrations routinely fail.

            00

  • #
    crakar24

    OT,

    Won't be long now before the GetUp campaign successfully ditches one of the carbon tax questions.

    http://www.oursay.org/hangout-with-the-prime-minister

    Remember, if you have not voted yet, vote now; or, as some US politician once said, vote early and vote often.

    00

  • #
    Adam Smith

    crakar24
    July 17, 2012 at 5:02 pm
    If the adjustments were neutral then what would be the point? Adumb Smith (Reg TMK to Mark D) has once again seized on a non-topic and is attempting to create a debating point where there is none.

    There’s no reason to revert to abuse because you are losing the debate.

    The fact is the only “adjustments” that can be made are:

    #1 When the time of day changes for the observation, i.e. instead of reading the temp at 9 am you read it at 9:30 am (this would tend to create a higher average; if you read it at 8:30 am, a lower average)

    Sure, but it is actually more complicated than this because there have been three different standards used for monitoring. The current standard was introduced in 1962, but there were still some weather stations working on the old standard into the early 1970s.

    #2 When the site has been moved, this will render all previous data useless from a trend perspective unless you come up with a fudge factor (either up or down) to compensate but how exactly do you do this with any degree of accuracy?

    Sorry mate but you seem to be completely underestimating human ingenuity. Just because a weather station is moved doesn’t make the early information useless at all. There are statistical approaches to relating the data if the new weather station isn’t that far away. Of course it isn’t as good as running a weather station in the same conditions for 100 years, but the way places get redeveloped this just isn’t possible.
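
    The “statistical approaches” alluded to here usually amount to comparing the two sites over an overlap period and shifting the old record by the mean difference. A minimal sketch with invented numbers and a hypothetical `splice` helper (not the BOM's actual procedure):

```python
# Relate an old and a relocated station via an overlap period: estimate
# the mean offset between the two sites, then shift the old record onto
# the new site's baseline. Data and helper are hypothetical.

def splice(old_series, new_series, overlap_years):
    diffs = [old_series[y] - new_series[y] for y in overlap_years]
    offset = sum(diffs) / len(diffs)          # mean old-minus-new difference
    adjusted_old = {y: t - offset for y, t in old_series.items()}
    return adjusted_old, offset

old = {"1990": 18.2, "1991": 18.4, "1992": 18.1}  # old site, warmer exposure
new = {"1991": 17.9, "1992": 17.6, "1993": 17.8}  # relocated site
adjusted, offset = splice(old, new, ["1991", "1992"])
```

    Here the old site reads about 0.5 °C warmer over the 1991–92 overlap, so its whole record is shifted down by that amount before the two series are joined.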

    It seems your solution is to just throw up your arms and claim that the data is useless but this isn’t true!

    #3 When the station becomes enveloped into the UHI umbrella (as a rural town gets bigger). Once again this would tend to create a higher average.

    Sure and in practical terms that means that a lot of the values from the second half of the last century are reduced as development encroaches on existing weather stations.

    Changing of equipment (old for new) should not change a thing as both old and new equipment are calibrated to read zero when it is zero or 23.4 when it is 23.4.

    Sorry? For starters the new platinum probe thermometers ARE far more accurate than the old thermometers! The old glass thermometers had a tolerance of +/- 0.5 celsius, whereas the new ones have a tolerance of +/- 0.2 celsius.

    But it isn’t just the thermometers that changed. The weather station shelters themselves have changed. The early ones didn’t have closed bottoms, which meant they tended to exaggerate the temperature on warm days.

    So the change of equipment certainly IS something that needs to be accounted for!

    You present yourself as an authority on homogenisation but you completely ignore other factors that need to be considered. Here are some off the top of my head you don’t mention:

    1) The fact different types of thermometers have been used at different times, with newer ones more accurate than older ones. You assert that all thermometers are equally accurate but don’t present evidence to back up this claim.
    2) The fact early automatic thermometers couldn’t transmit data with decimal places, which implies the introduction of a rounding error
    3) Changes to the design of weather stations, most importantly that early weather station enclosures were not sealed at the bottom which meant they exaggerated temps on hot days.
    4) The expansion of the network into more extreme places, including the edges of deserts, over time
    5) The fact most weather stations are closer to populated rather than remote areas
    6) The fact errors can be introduced into the data either by people or because of automatic equipment failing
    7) The fact weather stations near the coast can produce low readings due to onshore winds

    No Adumb, you are wrong, and to prove it to yourself pick a station and look at the “raw data”: if the adjustments are neutral, as the BOM says, then the processed data should have the same trend. I think you will find that a majority of sites will have raw data that show zero or very little trend but processed data that show a very large positive trend, which can only mean the trend is created via adjustments… or are you now going to claim the raw data cannot measure the AGW signal?

    HA! So there you go again! Instead of explaining that homogenisation is an essential procedure for determining the significance of the temp data over time, you just assert that we should just look at the raw data!!!

    So you are saying we should just pretend that:

    We are still using the same thermometers we used up to 102 years ago.
    We are using exactly the same weather stations and haven’t added any
    Development hasn’t encroached on weather stations
    There are the same number of weather stations now as there were in 1910
    No weather station has ever been moved
    The procedures for monitoring weather stations have never been changed

    Your claims are not scientific.

    00

    • #
      crakar24

      Adam, (remember: read all of the comment and do not cut and paste only the bits you want in order to change reality)

      I have not claimed to be an expert on this topic but I do consider myself able to apply logic and common sense.

      Nowhere in my comment do I assert this:

      HA! So there you go again! Instead of explaining that homogenisation is an essential procedure for determining the significance of the temp data over time, you just assert that we should just look at the raw data!!!

      This is something you have made up (as usual, as if in some way to prolong a debate that you cannot win).

      You do raise some very good points about differing standards etc., and I basically agree with what you are saying about adjustments (where and when etc.); however, you have failed to address the one most important point that I and others have repeatedly made.

      The whole purpose of adjustments is to ensure the accuracy of the data. So (once again I will give an example): if at some point in a 100-year record you change the thermometer and it creates a higher or lower temp (*NOTE* it will not change the trend, it will merely put in a step change), then if you adjust the temp from that point on to compensate, there is no change to the data.
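
      crakar24's step-change example can be made concrete. A minimal sketch with invented numbers (the `remove_step` helper is hypothetical; real homogenisation also checks against neighbouring stations): a flat record with a +0.5 °C instrument step, once compensated, is flat again, and no trend is created or destroyed.

```python
# crakar24's example in code: a thermometer change adds a constant step;
# compensating from the changeover onward restores the original record.
# Numbers are invented for illustration.

def remove_step(series, changeover, window=3):
    """Estimate the step as mean(after) - mean(before) around the
    changeover index, then subtract it from the later segment."""
    before = series[changeover - window:changeover]
    after = series[changeover:changeover + window]
    step = sum(after) / len(after) - sum(before) / len(before)
    return series[:changeover] + [t - step for t in series[changeover:]], step

record = [15.0] * 5 + [15.5] * 5   # flat record with a +0.5 C step at index 5
fixed, step = remove_step(record, 5)
```

      The estimated step is 0.5 and the compensated record is flat, which is crakar24's point: such an adjustment removes a discontinuity but cannot manufacture a trend.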

      You can make all the adjustments you like but you cannot create a trend, as these are adjustments; you cannot adjust the data for AGW as it would already be visible in the raw data. So for the last time, Adam, how can you create a trend from flat raw data and call it a rise due to AGW? Answer this, Adam.

      00

  • #
    Adam Smith

    The whole purpose of adjustments is to ensure the accuracy of the data. So (once again I will give an example): if at some point in a 100-year record you change the thermometer and it creates a higher or lower temp (*NOTE* it will not change the trend, it will merely put in a step change), then if you adjust the temp from that point on to compensate, there is no change to the data.

    The REASON you homogenise the data is to see if there IS a trend!

    You are basically asserting that there can’t be a trend and therefore any homogenisation that dares show a trend must be flawed. That makes no sense and is a completely unscientific approach to this issue.

    You can make all the adjustments you like but you cannot create a trend, as these are adjustments,

    What are you talking about? A “trend” is an analysis of the data after it has been homogenised!

    If someone seriously homogenised the data SIMPLY to decrease older observations and increase newer ones in order to create a warming trend, then that would be unscientific and totally biased. If you are accusing the BOM of doing this, then present your evidence.

    But in a weird way you are proposing another type of bias. You are saying that we should start off with the assumption that an analysis of the post-homogenised data CAN NOT show any trend, and then you are asserting that any homogenisation process that ends up showing a TREND (and note I have NOT simply said “a warming trend”) is inherently flawed.

    Well that’s not how real science works. You don’t start off with your conclusions and then massage the data to reach that conclusion!

    you cannot adjust the data for AGW as it would already be visible in the raw data. So for the last time, Adam, how can you create a trend from flat raw data and call it a rise due to AGW? Answer this, Adam.

    See, you’ve done it again! You have just said to me that if an analysis of the post-homogenised data shows a warming trend then it must be wrong. You are starting off with your conclusion and working back from there, which means criticising any homogenisation of the data which reveals any trend, let alone one of warming.

    Well I have the opposite view. What you need to do is sit down and think carefully about all the variables that have changed over time and how these have affected the raw data. In some instances this would mean the values are too low, in other instances too high. But you think rationally about what does and doesn’t need to be changed, and in what ways, depending on a range of variables such as weather station type, location, thermometers, etc. It could even mean in some instances you reduce the data for one reason, but then increase it for another reason which almost cancels out the first change, but you do it anyway because you need to account for all the factors.

    You then apply those changes to the data.

    Only after you have done this do you analyse the data for statistically significant trends.

    If they show a warming trend, so be it.
    If they show a cooling trend, so be it.
    If they show no statistically significant trends, so be it.

    That’s how you do real science!

    You don’t start from the position “if the homogenised data shows a warming trend then it is wrong”

    That’s not science, that is ideological dogma.

    00

    • #
      KinkyKeith

      Every scientist knows that homogenisation must be done as soon as possible after milking.

      00

      • #
        memoryvault

        .
        No no no KK.

        Homogenisation (of the data) must be done BEFORE milking the taxpayer.
        Otherwise they would never stand for it.

        .
        BTW – apologies for my unsubtle comments on the previous thread.
        I just get SO angry when I see senior Australians on a pension treated as third-class citizens on welfare. My comments weren’t personal – just my anger showing through at how generation after generation of Australians get conned by their lying, cheating political class.

        ALL working Australians are contributing 9% of their gross income via their income tax to a fund that is supposed to provide them with a superannuation pension of 60% of their average wages or salaries over the last four years of their working life. This money is supposed to be assets-test free, means-test free, and non-taxable.

        This legislation has never been rescinded. The payments never eventuated, but the deductions continue.

        00

        • #
          KinkyKeith

          Hi MV

          No need.

          Just crossed wires.

          Everybody has a slightly different take on things, and because of my background I for some reason or other never saw any stigma attached to the Old Age Pension. As you say, everybody earned it.

          At the other extreme, in the early days of the “social security explosion” there were many young blokes encouraged by the dole to leave school early, pool resources, buy a van and go surfing.

          My father died at 57 with kidney cancer, almost certainly a result of his New Guinea service, and I have a short fuse when it comes to people asking for more out of the social security support system than he got.

          Even more recently, payments for service disability & death are poor compared with the bonanza for those on the other government benefits.

          Enough bitterness and ranting!

          Homogenisation: an excuse to adjust the data.

          🙂 KK

          00

  • #
    Adam Smith

    Sonny
    July 17, 2012 at 7:26 pm

    Adam,
    The process you have described of homogenizing data leaves it to the honesty and integrity of the staff at the BOM to do so in such a way as to not accidentally or intentionally introduce a warming bias into the data.

    Unfortunately from my experience I would sooner trust the devil with the inner workings

    Well the BOM’s procedures have been peer reviewed twice, first by an Australian group and second by an international group which said the BOM approach was of a very high standard.

    But I accept that it is easier to just carp from the sidelines and assert, without presenting any evidence, that the BOM doesn’t know what it is doing.

    Of course what you won’t do is explain how you would perform the homogenisation of the data differently, because that would require you to actually think about the issue.

    00
    #
    Sonny
    July 17, 2012 at 7:30 pm
    Of my soul than a climastrologist with a temperature record. All they know how to do is cook the books and invent rationalizations to satisfy their religious adherents.

    So since you can’t explain where they went wrong you are just asserting that they have no idea what they are doing. This is a great example of you attacking the person, well in this case the entire organisation, because you don’t actually have a coherent argument against their methodology.

    If there is no warming trend in the data before it is adjusted then there is no significant warming. End of story.

    Sorry, but you can’t make claims about trends in raw data (e.g. “If there is no warming trend in the data before it is adjusted”) and expect them to be scientifically valid!!!! The data isn’t comparable over time because of the range of factors discussed at length in this thread.

    The reason you homogenise the data is to account for all the variables that have changed over the accumulation of that data, so you CAN start to perform scientifically valid analysis of it!!!

    Thank you for giving me a PERFECT example of an unscientific, dogmatic approach to this issue. You have just stated that if the end result of the analysis of the homogenised data shows a trend you don’t agree with, then by definition the adjustment of the data is wrong!

    00

    • #
      Sonny

      The BOM’s procedures have been “peer reviewed” twice?
      Hooray! The rubber stamp of scientific conformity – lol!
      Why don’t I just leave my wallet out on my front door step and you just come and help yourself?

      00

  • #
    Sonny

    Here’s an interesting question for you Adam.
    If a warming trend is not perceptible from the raw data but is only made apparent once the “homogenization” takes place…

    how can we (as so many politicians and scientists have) assert that this warming is “unprecedented”, “dangerous” or “caused by man”?

    Do you see the problem Adam? Somebody wants my money on the premise that man is causing global warming yet they can only demonstrate global warming by “homogenizing” the data?

    To any intuitive, logical and thinking individual this seems like one gigantic scam (at worst) or a gigantic confluence of vested interests (at best).

    00

    • #
      BobC

      Sonny
      July 17, 2012 at 9:54 pm · Reply

      Here’s an interesting question for you Adam.
      If a warming trend is not perceptible from the raw data but is only made apparent once the “homogenization” takes place…

      Then it’s evidence that the “homogenization” is what is responsible for the “warming trend” (in the data, that is).

      how can we (as so many politicians and scientists have) assert that this warming is “unprecedented”, “dangerous” or “caused by man”?

      Well, I have to disagree with you here, Sonny — the “warming trend” in the data is obviously man-made; It just doesn’t represent anything in the real world. 🙂

      00

      • #
        Adam Smith

        Then it’s evidence that the “homogenization” is what is responsible for the “warming trend” (in the data, that is).

        Completely wrong. The trend is present in the raw data, but it only becomes apparent and available for analysis after careful homogenisation of the data.

        If you are proposing otherwise then you are ultimately saying that we should start with an assumption that there is no trend in the data and that you are going to just manipulate the figures until you reach your pre-existing conclusion.

        That’s not science.

        00

        • #
          BobC

          Adam Smith
          July 17, 2012 at 11:11 pm · Reply

          Completely wrong. The trend is present in the raw data, but it only becomes apparent and available for analysis after careful homogenisation of the data.

          If you are proposing otherwise then you are ultimately saying that we should start with an assumption that there is no trend in the data and that you are going to just manipulate the figures until you reach your pre-existing conclusion.

          I can’t figure out, Smith, if you’re just dumb, or if you think that by repeating dumb things you can convince others — but perhaps I’m being redundant.

          Manipulating the data to get the desired result IS the modus operandi of government-funded climate science.
          NASA has been manipulating the US temperature record — changing old temperature records — now for over 30 years. Each time they change it, more of a temperature trend “emerges”. None of their changes has been justified (or usually, even documented — generally it is skeptics that catch the latest change in the temperature “history”).

          The IPCC has been doing the same thing — that is how they got from this temperature history (in 1990) to this one (in 2001) — and removed the Medieval Warm Period to boot.

          Here is a re-analysis of the NOAA raw data (what NASA-GISS supposedly uses), including code to extract and grid the data and step-by-step instructions on how to repeat it yourself. Compare the final result of these clearly-explained, transparent data processing steps with the published result of NASA-GISS. The only trend here is obviously contained in the “corrections” applied to the data.

          And finally, here is the most egregious example of data manipulation to achieve a warming trend I have yet seen — Using the single station in Nepal that is used to extrapolate the “temperature” of the Himalayas, NASA-GISS converted a 2 deg/century cooling trend in the raw data to a 3.5 deg warming trend by simply adding a 5.5 deg/century “adjustment” without any attempt at justification. Here is the summary graphic, and here is a discussion on WUWT.

          There’s no way to characterize this except as fraud and an example of “just manipulat(ing) the figures until you reach your pre-existing conclusion.”

          ————————————————————–
          [Based on Adam Smith’s track history don’t expect him to actually address any of the specific points you have raised – *Ed]

          [The new (Ed) is not the same as the old [ED]. I will be asking Jo to suggest that we use different initials.]

          00

    • #
      Adam Smith

      Here’s an interesting question for you Adam.
      If a warming trend is not perceptible from the raw data but is only made apparent once the “homogenization” takes place…

      Sorry mate, but clearly your understanding of how science works is very limited.

      You can’t make claims about trends in the raw data with any confidence because the raw figures don’t take into account any changes at each weather station over time.

      They don’t consider the different thermometers, or the different weather enclosures, or changes to the way the measurements were observed and reported, or the fact that development may mean there are new structures near the weather station.

      If all you do is look at the raw figures you have all these variables at play that may skew the analysis this way or that, up or down. That’s why the data has to be homogenised first.

      Since you seem to have a very limited understanding of how science works you don’t seem to understand this.

      how can we (as so many politicians and scientists have) assert that this warming is “unprecedented”, “dangerous” or “caused by man”?

      Well I am not sure what you are referring to, because even if the temp record shows a general warming trend across Australia, that alone wouldn’t be enough to say there’s a general warming trend in the entire globe. So again it seems you’ve jumped to a conclusion that isn’t actually supported by the evidence.

      Do you see the problem Adam? Somebody wants my money on the premise that man is causing global warming yet they can only demonstrate global warming by “homogenizing” the data?

      Sorry mate, but you seem utterly confused about how science works. If all we did was take the raw data, shove it into a spreadsheet and spit out a graph, we wouldn’t be doing real science. Real science involves taking into account variables, such as how different thermometers may be slightly biased up or down. Real science involves taking into account how a weather station enclosure that has an open bottom will tend to result in higher temps than one with a closed bottom. Real science takes into account that a weather station built close to buildings will tend to report higher temps than one that doesn’t have surrounding buildings.

      These are all factors that must be taken into account before you can come to analytical conclusions about the data.

      If you just analyse the raw numbers, then you may as well just be guessing because it is impossible to know if you have found an actual trend or if it is simply caused by one or more of the variables that you didn’t bother accounting for.

      To any intuitive, logical and thinking individual this seems like one gigantic scam (at worst) or a gigantic confluence of vested interests (at best).

      Sorry mate, but your assertion that we shouldn’t bother homogenising the data for known flaws is itself one gigantic scam. It is also completely unscientific thinking.

      00

      • #
        BobC

        Adam Smith
        July 17, 2012 at 11:07 pm · Reply

        Sorry mate, but clearly your understanding of how science works is very limited.

        Sorry mate, but your assertion that we shouldn’t bother homogenising the data for known flaws is itself one gigantic scam. It is also completely unscientific thinking.

        I strongly suspect, Mr. Smith, that you have never actually “done” any science.

        If the flaws in the data were known, then it would be a trivial task to remove them and have perfect data. The flaws, however, are never completely known: Many flaws you don’t even have evidence of, and others are of unknown effect.

        The Tmin > Tmax error, for example, is trivially easy to spot, but impossible to perfectly correct. The best one can do, perhaps, is to fill the data with some interpolation from previous time periods — sure to create its own errors, but less egregious (one hopes) than the original error. (The main point of this article, which you still haven’t grasped, is that the BOM can’t even get this simple level of data “correction” right.)
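
        The consistency check under discussion is trivial to reproduce. A sketch over made-up records (the field names and sample data are hypothetical):

```python
# Reproduce the Tmin > Tmax internal-consistency check described in the
# post: flag any record whose daily minimum exceeds its maximum.
# The sample records are invented for illustration.

def failed_min_max_check(records):
    return [r for r in records if r["tmin"] > r["tmax"]]

records = [
    {"date": "1912-01-03", "tmax": 31.2, "tmin": 18.4},
    {"date": "1912-01-04", "tmax": 17.0, "tmin": 21.5},  # physically impossible
    {"date": "1912-01-05", "tmax": 28.9, "tmin": 15.1},
]
bad = failed_min_max_check(records)
```

        Running something of this shape over the full ACORN record is essentially what turned up the roughly 1,000 failing days reported in the post.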

        When “adjusting” data to “remove flaws”, therefore, you must have some sort of controls to ensure that you are getting closer to the “ground truth” rather than just adding more errors. With satellite measurements, for example, the “ground truth” is often radiosonde measurements, which have their own problems.

        Sometimes one can use a well-verified theory to fit and interpolate the data. This has the logical problem, however, of self-confirmation — if you consider deviation from the theory as “errors”, then you will never find data that contradicts the theory. This is often the way “adjustments” are made in climate science (also see here) — without, however, the advantage of a verified theory.

        Science is considerably more complicated than you seem to grasp, Smith. In particular, your concepts of data “correction” are rather primitive and ill-considered.

        00

  • #
    Chris Whitley

    Sure, the planet’s actually cooling. NOT!

    http://nsidc.org/arcticseaicenews/files/2012/07/Figure3.png

    00

    • #
      Sonny

      You judge planetary warming/cooling by Arctic sea ice melt?
      That, my friend, is neither a measure of global temperature nor compatible with the idea of greenhouse warming (which should be most obvious in the tropics).
      How patently absurd. Why not use temperature data (once it’s been homogenized to show said warming trend?) And I guess you have no idea at all that the reason the Arctic is warming three times faster than anywhere else on earth is because the center of the mythical hollow earth is heating up and expelling heat through that great big mythical hole at the North Pole. Lmao!

      00

      • #
        Adam Smith

        How patently absurd. Why not use temperature data (once it’s been homogenized to show said warming trend?)

        Wrong again mate. You don’t homogenise data so you can show a particular trend.

        You homogenise the data so, as best as you can, it reflects what you’d see if you had consistent weather observations all along.

        Of course that’s impossible because we don’t have time machines, but it is what you aim for.

        It is you that has repeatedly asserted that you start with a conclusion and manipulate the data until you get there. That’s wrong and unscientific.

        00

        • #
          cohenite

          Troll.

          00

        • #
          Mark D.

          Adumb says:

          It is you that has repeatedly asserted that you start with a conclusion and manipulate the data until you get there. That’s wrong and unscientific.

          Well well well! It seems you are getting it finally! The subject of this thread is that the BoM left error-ridden data in the new ACORN results. If accidental, then why are you defending the BoM over the flaws? If on the other hand they left the flaws in there to skew the results, then as you say, “That’s wrong and unscientific”.

          Exactly the point of the whole post. Nice to see that you are finally getting it, Adumb.

          00

      • #
        Chris Whitley

        I look at all pieces of evidence.

        Surface Temps show warming.

        http://processtrends.com/images/RClimate_5_temp_anom_series_latest.png

        So too does Ocean Heat Content.

        http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT/heat_content2000m.png

          Satellites measuring outgoing radiation also find that, over time, more energy is being absorbed in the spectra that GHGs absorb.

          Highlighting a tiny percentage of errors in the BOM data seems rather fickle in light of all the evidence showing the planet is warming.

        00

        • #
          KinkyKeith

          Hi Chris

          I see you think you are a scientist: quoting Dr. Chris: “I look at all pieces of evidence”.

          Well actually Chris what you are doing is “looking at anecdotes” about a topic.

          Evidence is logically collected information, anecdotes are what you can remember after having spent the afternoon talking with fellow Warmers.

          But you are correct.

          Of course the planet is warming, we have just come out of an ice age….Derh!!!!

          Things happen when you come out of an ICE AGE. Derh!!

          Like man:

          It gets hotter. Derh!!!

          Oceans rise 120 metres plus, and overshoot a little, which explains the SEA LEVEL DROP of about 1.5 metres just 4,000 years ago, just after they built the Great Pyramid at Giza.

          Oceans are still going up and down a bit and there is no big drama about that as far as educated oceanographers and physicists are concerned.

          All of these changes are due to:

          The SUN. Of course Tim Flannery is a plant biologist, so he doesn’t know that the earth and Sun do not remain at a constant distance from each other. Separation distance changes, and this is periodic; perhaps the periods are tens of thousands of years?

          Now that I have removed your pet worry about CAGW I feel bad; here, try this for size.

          When the sun reaches the end of its cycle in a few billion years it will expand outwards and engulf the Earth in a blaze of heat.

          Now that’s heat. We will all be incinerated.

          Sweet dreams.

          00

        • #
          Mark D.

          Chris, the ocean heat content as provided by Levitus et al: from their paper:

          We provide updated estimates of the change of heat content and the thermosteric component of sea level change of the 0-700 and 0-2000 m layers of the world ocean for 1955-2010. Our estimates are based on historical data not previously available, additional modern data, correcting for instrumental biases of bathythermograph data, and correcting or excluding some Argo float data.

          Now it is funny that you should post that here together with the discussion of “data adjustment and homogenization”.

          00

    • #
      Wendy

      And yet this is being reported. Who do I trust? Actual on-the-ground reporting:
      Brutal Bering Sea ice blocking Arctic supply ships

      00

  • #
    Sonny

    Let’s not forget we are talking about tenths of a degree.
    Any adjustments or homogenization can have a big influence on whether a trend becomes apparent.

    The BOM and CSIRO are government organizations who are duty bound to support government policy. Government wants climate change to remain a problem for taxation and other purposes.
    Therefore BOM adjustments are done to induce a warming trend in line with government policy and in accordance with modern day scientific groupthink on climate change.

    Your “homogenization” is of the same ilk as the “epicycles” used to extend the life of the earth-as-center-of-universe theory. It’s all bullshit, Adam!

    00

  • #
    Adam Smith

    Let’s not forget we are talking about tenths of a degree.

    Not true mate. Some of the adjustments on some days had to be pretty big. The early weather station enclosures, called the Glaisher stand, were open at the bottom, which could result in exaggerated maximum temps on hot days and a small bias towards lower minimum temps.

    A 60-year set of parallel observations at Adelaide (Fig. 3) showed a warm bias in maximum temperatures measured using the Glaisher stand relative to those measured in the Stevenson screen (Nicholls et al., 1996), that ranged from 0.2 to 0.6°C in annual means, and reached up to 1.0°C in mean summer maximum temperatures and 2-3°C on some individual hot days, most likely due to heat re-radiated from the ground, from which the floorless Glaisher stand provides no protection. Minimum Glaisher stand temperatures tended to have a cool bias of 0.2-0.3°C all year, and the diurnal temperature range thus has a positive bias.

    Taken from Page 5 of this document:
    http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf

    Any adjustments or homogenization can have a big influence on whether a trend becomes apparent.

    The BOM and CSIRO are government organizations who are duty bound to support government policy. Government wants climate change to remain a problem for taxation and other purposes.

    Therefore BOM adjustments are done to induce a warming trend in line with government policy and in accordance with modern day scientific groupthink on climate change.

    Your “homogenization” is of the same ilk as the “epicycles” used to extend the life of the earth-as-centre-of-the-universe theory. It’s all bullshit Adam!

    Oh sorry, I made a terrible mistake of assuming you were actually interested in the complexities of getting information from the raw temperature data. Clearly you are more interested in making assertions that you can’t back up with any evidence.

    The homogenisation is done in order to enable the analysis of the data as if the weather stations were never changed. It is a process used to remove variables in order to be able to make scientific claims about the significance of the data with a high degree of confidence.

    For you to just reject this out of hand, without presenting any evidence for why, demonstrates that you are not interested in science at all.


    • #
      cohenite

      This particular comment by ‘Adam’ is very revealing; not only because it deals with a particularly esoteric aspect of the data collection history of Australia, but also one which goes to the heart of why the ‘homogenisation’ adjustment of data collected by BOM is so controversial.

      IMO, only an employee of BOM, well versed and with access to this obscure information could have posted such a comment.

      The Stevenson/Glaisher distinction and the conclusion that the Glaisher method of collecting data caused a warming bias has justified the COOLING of the early temperature data to compensate for this warming bias.

      This cooling has INCREASED the trend in the homogenised data of the ACORN set.

      The Glaisher stand is discussed in Reports from 1992-3 and this 1994 paper by Parker.

      The 1992-3 reports are discussed here.

      Both the Reports and the Parker paper conclude that this warming bias was in the order of 0.2°C. However, no consideration was given to the idea that the Stevenson screen was at fault or had inherent bias; the Glaisher screen warming bias was the default assumption.

      The Parker paper says this:

      this bias, implied by comparisons between Stevenson screens and the tropical sheds then in use, is confirmed by comparisons between coastal land surface air temperatures and nearby marine surface temperatures, and was probably of the order of 0.2°C.

      At 4.1 of the Parker paper it is noted that the Glaisher stand consistently records warmer than the Stevenson during the day at various locations around the world; but the conclusion that it is the Glaisher stand which is at fault is assumed. That is, the possibility that it is the Stevenson which has a cooling bias is not considered in the paper or the Reports; there is, therefore, no Null Hypothesis – or rather, the null that it is the Stevenson and not the Glaisher is not tested, merely assumed.

      With all his resources and expertise perhaps ‘Adam’ can address this issue instead of trolling, no doubt, as per his brief.


  • #
    crakar

    Adam Smith in #52

    The REASON you homogenise the data is to see if there IS a trend!

    Definition of homogenization

    Homogenization in climate research means the removal of non-climatic changes. Next to changes in the climate itself, raw climate records also contain non-climatic jumps and changes, for example due to relocations or changes in instrumentation. The most used principle to remove these inhomogeneities is the relative homogenization approach, in which a candidate station is compared to a reference time series based on one or more neighboring stations. The candidate and reference station(s) experience about the same climate; non-climatic changes that happen only in one station can thus be identified and removed.

    http://en.wikipedia.org/wiki/Homogenization_(climate)
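    The relative homogenization principle quoted above can be sketched in a few lines. Everything below is purely illustrative – the toy data, the simple neighbour average, and the 0.5 °C breakpoint threshold are assumptions for demonstration, not the BOM’s actual algorithm.

```python
# Sketch of relative homogenization: compare a candidate station to the
# mean of its neighbours, then look for a step change in the difference
# series. A jump that appears only in the candidate is likely non-climatic.

def difference_series(candidate, neighbours):
    """Candidate minus the neighbour average, year by year."""
    ref = [sum(vals) / len(vals) for vals in zip(*neighbours)]
    return [c - r for c, r in zip(candidate, ref)]

def find_breakpoint(diff, threshold=0.5):
    """Return (index, shift) where the mean of the difference series
    shifts by more than `threshold` degrees, or None if no such jump."""
    best = None
    for i in range(2, len(diff) - 2):
        before = sum(diff[:i]) / i
        after = sum(diff[i:]) / (len(diff) - i)
        shift = abs(after - before)
        if shift > threshold and (best is None or shift > best[1]):
            best = (i, shift)
    return best

# Toy data: the candidate jumps by +1.0 at index 5 (e.g. a site move),
# while both neighbours stay flat.
candidate = [15.0] * 5 + [16.0] * 5
neighbours = [[15.1] * 10, [14.9] * 10]

diff = difference_series(candidate, neighbours)
print(find_breakpoint(diff))  # best breakpoint: index 5, jump of about 1.0
```

    The point of the design is only that a jump visible in the candidate-minus-reference series, but not in the neighbours, is flagged as a non-climatic inhomogeneity.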

    Now that you understand what the word means, care to retract what you just said and come up with a new story?


    • #
      Adam Smith

      Now that you understand what the word means, care to retract what you just said and come up with a new story?

      Sorry mate, but it seems your limited understanding of science means you can’t see that that definition you just quoted is exactly what I have been explaining.


      • #
        crakar

        Yeah righto there Adam, you have just confirmed to me that you are a waste of space. If you cannot see that you have contradicted yourself then you are nothing more than an argumentative troll.

        To all,

        In light of this most recent evidence of the true nature of Adam, may I suggest we all completely ignore this person from now on. I for one am sick and tired of his ramblings clogging up my inbox.

        Regards

        Crakar

        PS Don’t bother to reply to this Adam, as I will not return the compliment


        • #
          Adam Smith

          Yeah righto there Adam, you have just confirmed to me that you are a waste of space. If you cannot see that you have contradicted yourself then you are nothing more than an argumentative troll.

          Absolute bullshit mate. If you can’t understand that you can’t take the raw data and make scientific conclusions from it before homogenising it then it isn’t my problem.

          You said it yourself that you think the way to deal with the data is to just start with a conclusion about what the trend should or shouldn’t be then you do anything to get to that pre-determined result.

          That’s not science mate, that is dogmatism.

          To all,

          In light of this most recent evidence of the true nature of Adam, may I suggest we all completely ignore this person from now on. I for one am sick and tired of his ramblings clogging up my inbox.

          So you can’t hack a bit of debate or of having your views challenged so now you are running away!

          That’s pretty funny mate, clearly you aren’t very confident of your views considering you can’t even defend them.

          PS Don’t bother to reply to this Adam, as I will not return the compliment

          Stop bossing people around, especially when you can’t defend your silly views, it makes you seem like a weak poseur.


  • #
    memoryvault

    JO

    From comment #53

    But I accept that it is easier to just carp from the sidelines and assert, without presenting any evidence, that the BOM doesn’t know what it is doing.

    The fact that the above comment appears – eventually – on a thread that starts off by specifically exposing that the BoM doesn’t know what it is doing (or does, and is deliberately being dishonest) – with evidence- I think demonstrates fairly conclusively that Team Smith’s only reason for being here is to disrupt.

    It has further been disclosed that input is from three separate computers on the same login – so we are definitely dealing with a team, not an individual. So we are dealing with a team using carpet bombing techniques, and meaningless circular arguments leading nowhere, to bury your site in irrelevancy.

    The truth is Jo, unless you do something about it soon, Team Smith will destroy your site as a valuable alternative viewpoint – which, of course, is their only objective.

    Since none of us support censorship – even of vitriolic elements like Team Smith, I suggest the alternative I’ve put forward before. Simply create a “Team Smith” thread, transfer all “Team Smith” comments there with a [snip] to show where they came from, and a link.

    That way all “Team Smith’s” comments are published; those who want to access them can, and those who want to waste their energies in circular, meaningless arguments can, to their heart’s desire, without disrupting the original intent of the thread.


    • #
      Adam Smith

      The fact that the above comment appears – eventually – on a thread that starts off by specifically exposing that the BoM doesn’t know what it is doing (or does, and is deliberately being dishonest) – with evidence- I think demonstrates fairly conclusively that Team Smith’s only reason for being here is to disrupt.

      Ha! So you’ve got nothing to contribute to the debate so you have to revert to just casting aspersions about other people’s motives! That’s weak as piss mate. If you can’t handle the heat, get out of the kitchen.

      It has further been disclosed that input is from three separate computers on the same login – so we are definitely dealing with a team, not an individual.

      Absolute bullshit mate. There is a much simpler explanation. I use more than one computer at home and I access this site using a variety of internet connections both at home, at work (which itself is at different locations), and using public wifi.

      But thank you very much for the laugh. I find it extremely amusing that you think your posts are so important that they need to be ‘monitored’ by a group of people!!!

      Even a moderator in this thread pointed out that I am one person, but your conspiratorial way of thinking means you have come up with a far more complex but stupider explanation for why I connect to this site in a variety of ways.

      So we are dealing with a team using carpet bombing techniques, and meaningless circular arguments leading nowhere, to bury your site in irrelevancy.

      Absolute nonsense. And if my arguments are “meaningless”, why are you incapable of presenting something that challenges them?

      You’re not a sceptic at all, you’re a dogmatist who can’t handle the fact that someone would challenge your dogmatic views.

      The truth is Jo, unless you do something about it soon, Team Smith will destroy your site as a valuable alternative viewpoint – which, of course, is their only objective.

      What’s wrong with having alternative viewpoints? Why does everyone have to agree with you? What is wrong with having a competition of ideas instead of crying to the moderator because you can’t handle reading some ideas and views you don’t agree with? Grow up mate, you’re acting like a child.

      Since none of us support censorship – even of vitriolic elements like Team Smith, I suggest the alternative I’ve put forward before. Simply create a “Team Smith” thread, transfer all “Team Smith” comments there with a [snip] to show where they came from, and a link.

      Nah, why don’t we do that to all of your posts with the thread titled “Young child has a sook to the site owner”.

      That way all “Team Smith’s” comments are published; those who want to access them can, and those who want to waste their energies in circular, meaningless arguments can, to their heart’s desire, without disrupting the original intent of the thread.

      Sorry mate, but the original intent of this thread wasn’t for you to have a cry and expect everyone here to show sympathy to you. It is YOU that has distracted this thread from its main purpose. On the other hand I have presented a lot of useful information about the complexity of the task the BOM faces when trying to accurately analyse the temperature data it has.

      Show me what contribution you made to this thread at the topic at hand?

      (He was commenting TO Jo and yet you still had to make a long reply to it anyway. I am getting close to just snipping out whole comments until you and others GET BACK ON TOPIC and drop this sideshow!) CTS


  • #

    Say,

    just musing here.

    We all know why Doctor Smith has come here.

    This Blog was voted the most popular Blog in Australia, and therein lies a small clue. Smith wants to cash in on that and see his name splashed all over a very popular blog, because his own is a bit of a flop.

    Either Smith himself or his puppet masters obviously see this Blog’s popularity as a threat, so Smith, in his infinite wisdom thinks he can stir things up enough with his carpet bombing that people will just give up and go away, hence, threat neutralised. It’s just a game with him, because, well, he’s a Doctor you know, so he’s always right, even when he’s wrong. It’s just his private debating forum where the only rules are his rules, and he just watches the screens as a result of his stirring, and really, that’s all it is. If you complain, or ask that he should be censored, then he gets all blustery and says he has the right to free speech, when all he ever does is carp carp carp. We always lose the debate because, well, because he says so.

    Everyone has had a go, but to him, it’s water off a duck’s back, because after all, it’s just a game, and for him, a game where he knows it’ll never get back to him, because he’s anonymous hiding behind Doctor Adam Smith.

    Censorship is his perceived winning hand, so let’s not censor him at all. Let’s do the democratic thing like when Joanne’s Blog was voted as the most popular Blog in Australia, in an International vote.

    Let’s vote.

    1. That Doctor Smith be no longer heard.
    2. That Doctor Smith should stay.

    Tally up the totals and no censorship here, because the honourable thing to do after seeing the result is to abide by it, eh!

    If 1, then Doctor Smith honourably withdraws.
    If 2, then fair enough, we put up with him.

    Now how could you argue against that, eh!

    As I said, just musing here.

    Tony.

    —————————————————————————–
    [Moderators have been very liberal with Adam Smith. If we had applied the usual moderation rules at most blogs, most of Adam Smith’s comments would be snipped as ‘off topic’. This is because he rarely addresses the topic, or even the specific points made by others. This patience is not unlimited, but we do wish to extend courtesies to ‘believers’ which aren’t extended to ‘sceptics’ at ‘warmist’ sites. -*Ed]


    • #
      cohenite

      Hi TonyOz; ‘Adam’, I think, has revealed his origin; I reckon if he fesses up and declares he is speaking either from the viewpoint of BOM, or with their imprimatur, then full steam ahead with his comments and insights. I, for one, would be glad to have a rep of our scientific lords and masters telling us how it is! [sarc off]


      • #

        Aww cohenite, not fair.

        I thought he had a really important Doctorate, you know, something like ‘Movie Appreciation’.

        I was almost tempted to ask him to grade an old Post of mine.

        Rosebud

        Tony.


    • #
      Ross James

      Tony,

      False accusations – pure fantasy of your own making.

      Ross J.

      ———————————————————–
      [Tony has a thesis about as solid as CAGW. Personally I just believe Adam Smith has too much time on his hands, is somewhat clever but his PhD is certainly not science related, probably esoteric arts so he can’t get a decent job. But he rubs shoulders with the left wing ideologues. – We can all have theories, let Adam disprove them. *Ed]


    • #
      Adam Smith

      Either Smith himself or his puppet masters…

      Mate, there are no “puppet masters”.

      Why is it so hard for you to understand that while there are many people on this blog who basically agree with each other, there are actually millions of people in Australia alone who disagree with many of the views that are often expressed here?

      I’m just an average citizen of our country who is willing to point out mistakes in some of the things written here. For you to say that there must be “puppet masters” telling me what to do is frankly paranoid thinking.

      ——————————————————————————
      [But you never address the real issues. Where is the proven link between humans and catastrophic global warming? How can a doubling of atmospheric CO2 content since industrialisation heat average global temperatures by more than 1.2°C, assuming no negative feedback mechanisms? Why does the IPCC Summary for Policy Makers not accurately reflect the science content of the chapters? Why does it fail to mention the uncertainties expressed in the chapters? Why should we place any value on climate models which have failed the predictive tests? Why hasn’t the IPCC insisted that negative feedback mechanisms be accounted for in the climate models? Etc etc. Rather than mis-direct the conversation off topic, and criticise others based on semantics which are not critical to the actual debate, why don’t you contribute some real content to the debate on the science, with references? -*Ed]


      • #
        Adam Smith

        But you never address the real issues. Where is the proven link between humans and catastrophic global warming? How can a doubling of atmospheric CO2 content since industrialisation heat average global temperatures by more than 1.2°C, assuming no negative feedback mechanisms? Why does the IPCC Summary for Policy Makers not accurately reflect the science content of the chapters? Why does it fail to mention the uncertainties expressed in the chapters? Why should we place any value on climate models which have failed the predictive tests? Why hasn’t the IPCC insisted that negative feedback mechanisms be accounted for in the climate models? Etc etc. Rather than mis-direct the conversation off topic, and criticise others based on semantics which are not critical to the actual debate, why don’t you contribute some real content to the debate on the science, with references?

        Sorry mate, but these things are off-topic in this thread. This thread is about BOM’s ACORN data set.


  • #
    cohenite

    I’ll repeat this comment I made to ‘Adam’s’ comment at 57 where ‘Adam’ quotes from a BOM document thus:


    A 60-year set of parallel observations at Adelaide (Fig. 3) showed a warm bias in maximum temperatures measured using the Glaisher stand relative to those measured in the Stevenson screen (Nicholls et al., 1996), that ranged from 0.2 to 0.6°C in annual means, and reached up to 1.0°C in mean summer maximum temperatures and 2-3°C on some individual hot days, most likely due to heat re-radiated from the ground, from which the floorless Glaisher stand provides no protection. Minimum Glaisher stand temperatures tended to have a cool bias of 0.2-0.3°C all year, and the diurnal temperature range thus has a positive bias.

    Taken from Page 5 of this document:
    http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Observation_practices_WEB.pdf

    My response at comment 57.1:

    This particular comment by ‘Adam’ is very revealing; not only because it deals with a particularly esoteric aspect of the data collection history of Australia, but also one which goes to the heart of why the ‘homogenisation’ adjustment of data collected by BOM is so controversial.

    IMO, only an employee of BOM, well versed and with access to this obscure information could have posted such a comment.

    The Stevenson/Glaisher distinction and the conclusion that the Glaisher method of collecting data caused a warming bias has justified the COOLING of the early temperature data to compensate for this warming bias.

    This cooling has INCREASED the trend in the homogenised data of the ACORN set.

    The Glaisher stand is discussed in Reports from 1992-3 and this 1994 paper by Parker.

    The 1992-3 reports are discussed here.

    Both the Reports and the Parker paper conclude that this warming bias was in the order of 0.2°C. However, no consideration was given to the idea that the Stevenson screen was at fault or had inherent bias; the Glaisher screen warming bias was the default assumption.

    The Parker paper says this:

    this bias, implied by comparisons between Stevenson screens and the tropical sheds then in use, is confirmed by comparisons between coastal land surface air temperatures and nearby marine surface temperatures, and was probably of the order of 0.2°C.

    At 4.1 of the Parker paper it is noted that the Glaisher stand consistently records warmer than the Stevenson during the day at various locations around the world; but the conclusion that it is the Glaisher stand which is at fault is assumed. That is, the possibility that it is the Stevenson which has a cooling bias is not considered in the paper or the Reports; there is, therefore, no Null Hypothesis – or rather, the null that it is the Stevenson and not the Glaisher is not tested, merely assumed.

    With all his resources and expertise perhaps ‘Adam’ can address this issue instead of trolling, no doubt, as per his brief.

    If ‘Adam’ is indeed associated with BOM in any way perhaps he could acknowledge same; personally I would be grateful to have a BOM response, even if done surreptitiously, to these issues raised by Jo. Otherwise perhaps ‘Adam’ can address the point of the Stevenson/Glaisher issue.


    • #
      Adam Smith

      IMO, only an employee of BOM, well versed and with access to this obscure information could have posted such a comment.

      What on earth are you going on about? I copied and pasted that quote directly out of the document that is freely available on the BOM’s webpage!!!

      If you simply took the time to read the information that is freely available you would find that your conspiracies are based on nothing!

      The Stevenson/Glaisher distinction and the conclusion that the Glaisher method of collecting data caused a warming bias has justified the COOLING of the early temperature data to compensate for this warming bias.

      This is just WRONG! If you bothered to read the document you would find that the Glaisher enclosure also had the effect of slightly increasing the MINIMUM temperature ALL YEAR ROUND.

      Minimum Glaisher stand temperatures tended to have a cool bias of 0.2-0.3°C all year, and the diurnal temperature range thus has a positive bias.

      Page 7:
      http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

      So the homogenisation of the data to account for these enclosures involved reducing temps on SOME particularly hot days (which are rare), but then slightly INCREASING the minimum temp on all other days!

      So again your conspiracy ends up to be based on you simply refusing to read a document that is freely available to the public! And since you didn’t bother to read it, you didn’t realise that what it explains results in a general INCREASING of minimum temps from when these enclosures were used, which was the EARLY period of the temp record!

      If ‘Adam’ is indeed associated with BOM in any way perhaps he could acknowledge same; personally I would be grateful to have a BOM response, even if done surreptitiously, to these issues raised by Jo. Otherwise perhaps ‘Adam’ can address the point of the Stevenson/Glaisher issue.

      I am not associated with BOM in any way apart from the fact some of my taxes fund its operation.


      • #
        cohenite

        I am not associated with BOM in any way apart from the fact some of my taxes fund its operation

        Well said! Are you twins?

        This:

        This is just WRONG! If you bothered to read the document you would find that the Glaisher enclosure also had the effect of slightly increasing the MINIMUM temperature ALL YEAR ROUND.

        Same point Adam[s]; compared with the Stevenson, why isn’t the idea that the Stevenson is at fault in respect of both the maximums and minimums considered? If it is the Stevenson, then at least 0.2°C can be taken off the trend. Add that to the 0.1°C metrication and the 40% from the non-neutral homogenisation process as found here, which you obviously haven’t read, and you have hardly any trend left.

        Then add the rounding-off issue – what’s that, you don’t know about it? Even the ACORN Technical Manual notes the rounding issue but dismisses it – and we’ve probably had a negative trend over the 20thC.


        • #
          crakar24

          Yes twins it is………..

          Last night I trapped Messrs Smith and Smith in their own web of lies. One of them said homogenisation was used to calculate the trend, which of course I proved otherwise (at some point they hot-bunked the seat, but the second Smith had no idea what the first Smith had previously said), which is why the second Smith claimed to have said no such thing, thus creating another rabbit hole of obfuscation which I refused to be drawn into.

          One can imagine the handover-takeover process:

          Smith Vers I: “Well, time I am off, the missus will be pissed on account of the time”

          Smith Vers II: “Yeah, sorry mate, got stuck blogging with a non-conformist on Bolt”

          Smith Vers I: “Yeah, I know the type, watch out for that crakar, he is being a real pain in the arse this arvo”

          Smith Vers II: “What’s happening?”

          Smith Vers I: “Same old shit”

          Smith Vers II: “So nothing I need to know about?”

          Smith Vers I: “Nah, see ya”

          Your own hubris has been your Achilles heel.


          • #
            Adam Smith

            One of them said homogenisation was used to calculate the trend, which of course I proved otherwise

            Sorry mate, but clearly I didn’t explain myself well.

            You homogenise the data so you can analyse it and make claims with confidence.

            If you just take the raw numbers and look for trends, then how do you know that the trend was caused by the climate itself and not just one or many of the variables that you haven’t accounted for?

            If you take the raw data and look for a trend that shows warming, how do you know that that was caused by actual changes in the climate or simply the fact different thermometers were used at different places at different times?

            The homogenisation process involves carefully accounting for all the variables that you know about so you can say with some certainty that any trend in the data, be it up, down or sideways, is actually indicative of a changing climate.

            If you don’t understand this explanation I am happy to explain it to you again. But the simple fact is if you just take the raw data and make a graph out of it, you can’t prove anything with any certainty because you don’t know what caused the change.
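            The point this comment is making can be illustrated with made-up numbers: a perfectly flat climate record with a known +0.5 °C instrument offset introduced halfway through shows a spurious warming slope in the raw data, and no slope once the known offset is removed. This toy calculation is only a sketch of the argument, not the BOM’s procedure.

```python
# Toy illustration: a flat climate, but a thermometer change halfway
# through adds a constant +0.5 °C offset. The raw series shows a
# "trend"; removing the known offset (homogenisation) recovers it.

def linear_trend(values):
    """Ordinary least-squares slope of values against their index."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = sum(values) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

flat = [20.0] * 10
raw = flat[:5] + [v + 0.5 for v in flat[5:]]       # instrument change at year 5
adjusted = raw[:5] + [v - 0.5 for v in raw[5:]]    # remove the known offset

print(linear_trend(raw))       # spurious positive slope, about 0.0758 per step
print(linear_trend(adjusted))  # 0.0 – no real trend
```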


          • #
            crakar24

            I am not sure which Smith this is but this is the only one that should be allowed to comment from now on.

            Your retraction is accepted, Adam, and as for the rest of your comment I agree: you cannot draw any conclusions from the raw data, as you need to make corrections (where justified) and then see what you end up with.

            Keep up the work

            Cheers

            Crakar


          • #
            Rereke Whakaaro

            I actually gave him a thumbs up – how about that?


          • #
            KinkyKeith

            Hi Adams

            In all of your discussion you have constantly made the claim that the data is faulty.

            If data is faulty, it should be excluded from the analysis.

            The best use of the data is to process it all and comment on outliers during analysis.

            This process eliminates the possibility of there being any “monkey business” with the data.
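            One simple way to “comment on outliers during analysis”, as suggested above, is to flag them rather than adjust or delete them. The sketch below uses an illustrative z-score test with an arbitrary 2-sigma threshold; it is an assumption for demonstration, not a claim about how BOM does it.

```python
# Flag outliers during analysis instead of adjusting the data:
# keep every reading, mark the ones that fail a simple z-score test.

def flag_outliers(values, sigmas=2.0):
    """Return (value, is_outlier) pairs using a z-score test."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    return [(v, abs(v - mean) > sigmas * std) for v in values]

readings = [21.2, 20.8, 21.5, 20.9, 58.0, 21.1]  # one impossible spike
flagged = [v for v, bad in flag_outliers(readings) if bad]
print(flagged)  # [58.0]
```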

            Adjusting old data because of the “suspicion” of the “possibility” that “just maybe” the experienced weather person taking the readings made an error is a hideous self indulgence of the Climate Intelligentsia.

            The reason all of this effort is being made to LOOK BACK INTO THE PAST TO JUSTIFY CAGW is that there is no current indication of temperature rises that correlate with CO2 values, but mainly because THERE IS NO VALID SCIENTIFIC ASSOCIATION BETWEEN DELTA CO2 AND DELTA ATMOSPHERIC TEMPERATURE.

            “But the simple fact is if you just take the raw data and make a graph out of it, you can’t prove anything with any certainty because you don’t know what caused the change.” WOW !!!


        • #
          Adam Smith

          Same point Adam[s]; compared with the Stevenson; why isn’t the idea that the Stevenson is at fault in respect of both the maximums and minimums considered?

          Well thank you for asking a reasonable question. The Stevenson type shelter was standardised earlier and is still in use. Also when automatic thermometers were introduced in the early 1990s, the BOM continued to use the Stevenson shelter.

          Since the Stevenson shelter has been in use far longer, and is still in use, it makes more sense to leave that data alone and instead adjust the data from stations that used other enclosure designs. Because of course any data adjustment does risk introducing uncertainty.

          If it is the Stevenson, then at least 0.2C can be taken off the trend.

          This is absolute bullshit! You can’t just knock something off the overall trend in this way! This is statistical nonsense.

          Add that to the 0.1C metrification and the 40% from the non-neutral homogenisation process as found here, which you obviously haven’t read and you have hardly any trend left.

          HAHAHAHAHAHAH so you just make up some non existent 40% figure because you’ve run out of argument.

          If you bothered to read the BOM report you’d find a summary of the adjustments on Table 6 of page 62:

          Number of positive adjustments 153 (49%) 154 (46%)
          Number of negative adjustments 160 (51%) 184 (54%)

          If you then look at the chart at the top of page 63 it shows that MOST of the largest NEGATIVE adjustments for both the min and max temp occurred over the last 30 years!

          Then add the rounding off issue – what’s that you don’t know about it?

          WELL HELLO MATE, I only talked about rounding issues in this very thread!!!

          That’s another thing that needs to be accounted for during homogenisation of the data!!!

          Even the ACORN Technical Manual notes the rounding issue but dismisses it – and we’ve probably had a negative trend over the 20thC.

          BULLSHIT! If the manual actually simply outright dismisses rounding issues you would’ve quoted me the page number or even the text itself.

          But of course you didn’t do that because you are just spreading nonsense.


          • #
            cohenite

            BULLSHIT! If the manual actually simply outright dismisses rounding issues you would’ve quoted me the page number or even the text itself.

            Check it yourself, I’m not doing your homework.

            As for neutrality in the homogenisation; you have been repeatedly given a link to the Audit Application which shows the lack of neutrality; the link is here again. On page 6 the graph clearly shows the warming bias in degree and number of sites.

            In addition a peer reviewed paper on the neutrality issue is shortly to be published; the Abstract reads:

            The Australian Bureau of Meteorology (BoM) [BoM, 2011] publishes a dataset of Australian surface temperature series, called the High Quality Network (HQN). These data are regarded as a reliable basis for regional and national climate analysis, including determination of the Australian warming trend. Della-Marta et al. [2004] noted problems with this dataset, particularly subjectivity of the homogeneity adjustments used to correct for changes in location, instrumentation and other errors in individual station data. We evaluate two assumptions underpinning reliability of these data: (1) the adjustments are largely neutral, and (2) they contain minimal site-specific effects such as urban warming. The effects of adjustments are examined by manually compiling a new network from raw data with the minimal adjustments necessary to produce contiguous series, called the Minimal Adjustment Network (MAN). The available data yielded 268 series of annual temperatures from 1910 to 2009 (134 from the HQN series downloaded from the BoM website, and 134 compiled from raw data, composed of 100 rural and 34 urban sites). ANOVA analysis shows significant differences between the Raw/HQ and Urban/Rural factors. Homogeneity adjustments appear to double the average urban trend from 0.4 to 0.8°C/century, while the average trend of the rural group increases from 0.7°C/century to 0.9°C/century (significant at 95% CL as standard error of the mean is 0.05°C). The assumption (1) that the adjustments are largely neutral is clearly refuted, due to the robust warming bias in the HQN. The trends of the rural vs. urban groups also differ significantly, refuting (2) minimal site-specific effects. Contrary to expectations, the rural trends exceed the urban trends, indicating that other site-specific factors may exert a stronger influence on trends than population sizes.

            You really are an arrogant goose. Typical warmer.


          • #
            cohenite

            Well, have you checked the rounding?

            Goose.


  • #
    Capn Jack Walker

    Aargh Smithy me old mate, yer be making a fuss with yer blathering and yer troll speak, if yer be asking fer keel hauling yer can be obliged.

    The premise is simple, they said best practice they have outliers in tmax and tmin, in the wrong order. They have had to be dragged to explain some of their methodologies and they work for us the tax payers, we don’t elect them.

    The fuss is simple, all they have to do is explain, present techniques how it happened fix it and move on.

    The problem for people like you and some of the warmer crowd, is that in places like this you are likely to be talking to the best in fields of endeavours, statistics included.

    So be a dear Greer (germaine) and stop obfuscating, I thought old crakar and others have been more than patient.

    Me I am not on anyone’s side because no one is on my side.

    If you are this bad now Toddler training must have been your mother’s nightmare.

    Yer I know, off topic and so on.

    As far as censorship is concerned, time out corner is recommended for Smithy’s IP address and his blatant troll thuggery.

    Old trick, nag nag and accuse and obfuscate, he is not assisting the conversation. If he was funny it would be different but it’s just mean and vicious and most of all tedious.

    [Cap’n, yar brighter than the morning star….fair winds to ya] ED


    • #
      Tristan

      There isn’t a single person on this blog anywhere near ‘best in field’ of stats. Most of the statistical commentary here is completely uninformed.


      • #
        Rereke Whakaaro

        Can you produce some figures to support your assertion?


      • #
        KinkyKeith

        Ah Tristan

        I aspire to be as suave and sophisticated.

        What was your point again?

        or as usual was there no point?

        So the monkey business that passes for statistical analysis by the “big boys” is beyond us?

        page 1 Stats 101

        COLLECT RELIABLE MEANINGFUL DATA.

        oops.

        Does that mean we can’t use old hand measured temperature stats for the whole world?

        ha de ha ha


  • #
    Capn Jack Walker

    Vote at Tony’s post no 61, yes means a sin bin, no means a stay.

    No rigging the vote.


  • #
    Pete of Perth

    Does BOM have a list of all the uncertainties / variables that can be ascribed to each weather station used within Oz?

    I would like to see it.

    I postulate some of it is guesswork.


    • #
      Adam Smith

      Does BOM have a list of all the uncertainties / variables that can be ascribed to each weather station used within Oz?

      Read this 92 page document and you will then understand all the factors they had to take into account:
      http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

      Most of the problems people here have with the data is because they are refusing to inform themselves of the issues at hand.

      It is a strange kind of wilful ignorance.


      • #

        Documenting practices that essentially invalidate the data for a real day, doesn’t make the data valid.

        Nor does it make the entire process of homogenizing valid as temperatures aren’t quantities of anything. To relate a temperature to a quantity, one has to know the composition of the material and its thermal capacity. It’s insufficient to simply assume that it’s dry air in the case of air temperature as the presence of water vapour substantially alters the thermal capacity of air.

        The low heat capacity of air, regardless of humidity, means that it’s very sensitive to thermal perturbation. i.e. its temperature will change substantially for a small gain/loss in heat; often in “pockets” of air as air itself is a poor conductor and requires convection to move heat throughout its volume. Air temperature measurement is fraught with complications because it requires only a short-term perturbation to produce an extreme for the day; something which doesn’t affect non-gas components to nearly the same extent.

        Other parts of the climate system; those in the solid and liquid surface, have greater heat capacity. They intrinsically “filter” small fluctuations because their heat capacity is so large, and a great deal more energy has to be moved to measurably change their temperature.

        The studies and analyses of air temperatures are largely irrelevant to determining the thermal state of the climate system. The atmosphere contains just a few percent of the total heat: so little that, should the sun go out, the whole surface of the Earth would be uninhabitable within 72 hours because of the loss of the very little heat that is stored in it.

        Air temperatures are only somewhat interesting for weather; the short-term transients of heat transfer. Not climate.


        • #
          Adam Smith

          Nor does it make the entire process of homogenizing valid as temperatures aren’t quantities of anything.

          WHAT!? A measurement from a thermometer in a weather station is a measurement of the temperature in that weather station at that point in time!

          Your assertion that “temperatures aren’t quantities of anything” is just bizarre!!!


          • #

            Msrs Smith;

            Perhaps you’d like to enlighten us as to which quantity is being measured by temperature?

            How does one e.g. move a temperature? Can one put it in a container?

            Thermodynamics may have moved on since I studied it in the 1970’s and 1980’s. That included some psychrometrics relating to the heat content of air for purposes of air conditioning, etc. I still have Mollier diagrams to which I refer infrequently.


  • #
    justjoshin

    ok, i’ve got all of the ‘adjusted’ station data from the ACORN set which is downloadable from the BOM (wget is a wonderful thing 🙂 ).

    I’m importing all of it into SQL now. I’ll have a crack at it once it’s finished importing (there is a LOT of data, and I didn’t have time to write an optimised importer – it’s pretty quick and dirty).

    Does anyone know where I can find the raw data? It would make for a much more comprehensive analysis.
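
    For anyone wanting to reproduce the Tmin > Tmax screen discussed in this post without setting up SQL, here is a minimal Python sketch. The column names and sample rows are hypothetical placeholders, not the actual ACORN file layout:

```python
def count_min_gt_max(rows):
    """Return the records whose daily minimum exceeds the daily maximum --
    the internal-consistency check discussed in this thread."""
    bad = []
    for r in rows:
        try:
            tmin, tmax = float(r["tmin"]), float(r["tmax"])
        except (KeyError, ValueError):
            continue  # missing or non-numeric value: skip, don't flag
        if tmin > tmax:
            bad.append(r)
    return bad

# Hypothetical sample records, not real ACORN data:
rows = [
    {"date": "1912-01-01", "tmin": "14.1", "tmax": "27.6"},
    {"date": "1912-01-02", "tmin": "21.8", "tmax": "19.9"},  # inconsistent
    {"date": "1912-01-03", "tmin": "",     "tmax": "25.0"},  # missing value
]
print(len(count_min_gt_max(rows)))  # prints 1
```

    The same pass over the full downloaded station files would give a total comparable to the ~1000 violations Thurstan reports.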


  • #

    Do a BOM site search for this raw data set: Daily maximum and minimum air temperature data
    Product reference
    IDCJDC03.201106


    • #
      Adam Smith

      Hi Richard,

      Why does your map say it is going to get into the 30s near Adelaide today?
      http://www.aerology.com/?location=Australia&mapType=Tmax&date=7%2F17%2F2012

      The Bureau of Meteorology says it is going to get to 13.


      • #

        The data set mentioned in the comment above is the one I am using as input into an analog method of combining the past four cycles of raw weather data, to present composite maps of the temperature from BOM records for equivalent days with the same lunar declinational patterns as the date forecast, in this case 7-17-2012.

        The past four cycles were all cases with much more solar sunspot activity. The resultant changes you see today (increased precipitation and cooler temps), compared to the past patterns, might be due to the fact that we are now in a protracted slow solar cycle. I would expect more of the same, as long as the sun stays quiet.


        • #
          Gee Aye

          sorry but many of those 30 degree spots, including those in western Victoria are never going to happen in July in our lifetimes, even in the worst case AGW scenario.


          • #

            When I received the raw data set from BOM I went through and ran some QA tests to verify accuracy: looking for Min values higher than Max, values above 160 degrees F or below -60 F, and negative precipitation values (there are only a couple; in the USA COOP data there were closer to 100, due to the digitization process done by mostly volunteer college interns). I made notes as to errors found and sent the data off to the developer who wrote the SQL for finding and converting all values to degrees F in XXX.XX format. It is possible that some of the short-term/interrupted records from some of the outlying stations were mishandled in the process.

            He is currently processing the data into maps and saving the csv files and maps from the four analog year dates; (just) the composites are shown on my site. Copies of all his work are retained on a 2TB hard drive, to be returned to me for my further investigative use when he finishes up in about a week. At that time I will again go through the data he returns to me, the csv files, and look at the analog maps to determine if there were any errors introduced by his processing of the data.

            In my haste to bring what I believe to be a valid forecast to Australia for your use, I posted the maps live to the site before I got a chance to double check all of the developer’s work; if any problems are found in the temperature conversion process that could lead to the type of abnormal temps you are suggesting, I will find and correct them asap. It was by reading the file (http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf) Adam Smith linked to that I just became aware of how scrambled up the past records were. I did not evaluate the start/stop times of the conversion of the recording of temps from F to C for each station, and if it was not consistent across the country, like the Canadian data set was, errors might have been introduced. I will check ASAP, thanks for the heads up.
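
            The range checks and unit conversion described above can be sketched as follows. This is only an illustration of that kind of QA pass, with the thresholds taken from the comment; the function names are mine, not from any actual pipeline:

```python
def celsius_to_fahrenheit(c):
    """Standard C-to-F conversion, for normalising mixed-unit records."""
    return c * 9.0 / 5.0 + 32.0

def qa_flags(tmin_f, tmax_f, precip):
    """Flag one daily record (temps in deg F) against the checks above:
    min > max, temps outside the -60..160 F window, negative precipitation."""
    flags = []
    if tmin_f > tmax_f:
        flags.append("min_exceeds_max")
    if tmin_f < -60.0 or tmax_f > 160.0:
        flags.append("out_of_range")
    if precip is not None and precip < 0:
        flags.append("negative_precip")
    return flags

# A record whose min and max were swapped somewhere in processing:
print(qa_flags(celsius_to_fahrenheit(12.5), celsius_to_fahrenheit(-1.0), 0.0))
# prints ['min_exceeds_max']
```

            A record can of course trip several flags at once, which is why the function returns a list rather than a single verdict.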


    • #
      justjoshin

      found it.

      http://reg.bom.gov.au/climate/how/newproducts/IDCdtempgrids.shtml

      $609 for a copy.

      Take it out of my taxes you greedy sods.

      You can access the ACORN-SAT data for free from their site, but the untouched raw data you have to pay for? Seems backwards to me.


      • #

        It looks like what they did at UEA and the Met Office in the UK: only the gridded data set is available on line; the individual raw daily station data can’t be had, as Jones “lost it”.

        I bought the three raw data sets listed at the bottom of my Australian map pages and paid out the nose for them; the Environment Canada data was free access on line, and the COOP summary-of-the-day daily set was about the same price as you found. Site development cost for the page and graphic interface was ~$6,000.00 (the current balance for the developer is $5,500.00); other costs run close to $6,000.00 for hosting and server maintenance fees, and $6,300.00 for forecast evaluation services for 12 months. Results will be posted at the end of 12 months of evaluation.

        So just a token gift to the people of Canada, and Australia, put together in my spare time since I retired three years ago.


      • #

        The gridded data set you found for $609.00 is not the raw data from each station, but data plotted at the same resolution I grid my data to make my maps: “The resolution of the data is 0.05 degrees (approximately 5km)”.
        It depends on how many nearest neighbors they used as to how much resultant smoothing you get, smearing the data out to unreliable averages: with 8 nearest neighbors each station adds 12% to the mix; with 32 nearest neighbors, 3%; at 50, 2%.

        But it is not the raw STATION DATA…
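
        The smoothing effect being described can be illustrated with a toy k-nearest-neighbour average. This is only a sketch of the general idea (the BOM’s actual gridding scheme is more elaborate, and the station values here are made up): the more neighbours you average, the more a local extreme is smeared into the regional mean.

```python
import math

def knn_average(point, stations, k):
    """Estimate a grid-point value as the mean of the k nearest stations.
    stations: list of (x, y, value) tuples. Larger k smears local extremes
    out into the regional average -- the concern raised above."""
    nearest = sorted(stations, key=lambda s: math.hypot(s[0] - point[0],
                                                        s[1] - point[1]))[:k]
    return sum(s[2] for s in nearest) / len(nearest)

# One hot station among cooler neighbours (made-up values):
stations = [(0, 0, 40.0), (1, 0, 20.0), (0, 1, 22.0), (1, 1, 21.0)]
print(knn_average((0, 0), stations, 1))  # prints 40.0 (local value kept)
print(knn_average((0, 0), stations, 4))  # prints 25.75 (extreme smeared out)
```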


      • #
        cohenite

        What is needed from ACORN is the code they used to homogenise the data to produce the final ACORN temperature record.

        That will be the test of Adam’s bluster.


  • #
    Adam Smith

    BobC
    July 18, 2012 at 2:27 am
    Adam Smith
    July 17, 2012 at 6:37 pm · Reply

    Real statisticians strongly disagree that “homogenized” data is more scientifically significant than raw data.

    This is just a completely absurd claim!

    The raw data has been compiled using different thermometers, different weather station enclosures (including significant variation between the states), different reporting procedures (the BOM has used 3 different reporting procedures over the last 104 years), different weather station locales.

    You simply can’t assume that ALL of these variables magically didn’t affect the raw data in any way and expect to be able to make any scientifically valid claims for it!

    You MUST account for these variables before coming to any conclusions from the data else you simply may be analysing the ‘noise’ caused by the variables rather than actual changes (the ‘signal’) in the climate over that period!

    This is just basic elementary science. You need to account for variables so you can be confident that your conclusions are actually measuring the variable you are interested in, i.e. changes in the climate over the period in question.

    The short conclusion is that any homogenization technique increases the uncertainty in the data going forward

    Sorry mate, but how can it be less certain than simply leaving all the variables that you are aware of unaccounted for? Doing that just dooms you from the start to not being able to make any statistically or scientifically valid claims about trends in the data, because you don’t know what has caused the changes: the climate itself, or any number of other variables that you have just decided to ignore.

    — as opposed to making it more “accurate” as you imply — and if that uncertainty is correctly acknowledged and accounted for (which it never is, in government-funded climate science) it just makes the conclusions drawn from such homogenized data meaningless.

    You seem to just be choosing to ignore all the careful work that has gone in to reduce uncertainty as much as possible. If you bothered reading this document you would understand that:
    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    And you can see in their calculations and charts that they DO account for degrees of uncertainty. They even show some statistical approaches to handling the data that they REJECTED because it resulted in increasing the uncertainty!

    But it is astonishing for you to claim that you want to reduce uncertainty when you are saying that it is better to just IGNORE the effects of all the variables! That will make you very certain of coming to conclusions that are scientific nonsense.

    In the end, there is only one way to validate a theory — demonstrate its predictive skill. Unfortunately for AGW, such attempted demonstrations routinely fail.

    Sorry mate, but that’s not actually the task at hand. The ACORN data set is an attempt to look at trends in the Australian climate based on data that has been recorded since 1910. It doesn’t say what will happen in the weather from tomorrow, it is a look at WHAT HAS HAPPENED in the recent past.

    It seems that your baseless assertion that the raw data is magically more accurate than the homogenised data is based on a flawed understanding of what the BOM is trying to accomplish. They are looking at trends in past temp data, that is all.

    It is possible that they just happened to record 102 years that had a warming trend and that things will revert to a cooling trend from now on. If that happens so be it, but for you to just reject their research out of hand and assert that the raw data is somehow better is laughable.


    • #
      Gee Aye

      You guys are going to dance around each other and also [delete]

      sorry Gee, you have stepped into the twilight zone. Ponder all of the loose nuggets somewhere else. You have been here long enough to know right from wrong. [ED the oldest]


      • #
        Gee Aye

        Thanks Ed the oldest… I thought this thread was about BOM data handling and how the manipulation of data and inclusion of suspect data makes the final datasets of limited use for predicting trends.

        The post by me that you deleted was following on from the debate here about whether the homogenisation process was a) justified at any level and b) done poorly. My post was asking the protagonists to clarify the aspects they are debating under a and b, as I for one am finding their posts more and more esoteric. I would have thought it up to them to either take me up on the offer or to ignore me, but I didn’t think it was so off topic, and there were no offensive statements (can I make that clear to everyone, or can you please make this clear). I’m not even sure what a loose nugget is.

        [I thought you used the random paragraph generator. Perhaps I was hasty but you did a better job making your point in the reply.] ED
        [PS How about loose marbles?]


        • #
          Gee Aye

          hmm… It wasn’t random so perhaps something went wrong in transmission which would be a problem at my end. If you are able, can you please email me the post? Thanks.


    • #
      cohenite

      This is just a completely absurd claim!

      Rubbish Adam; in fact your claim, that BobC’s claim (“Real statisticians strongly disagree that ‘homogenized’ data is more scientifically significant than raw data”) is absurd, is itself the absurd claim. See.

      Koutsoyiannis notes:

      The difference between the trends of the raw and the homogenized data is often very large.

      And:

      In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in the two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.

      The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4C and 0.7C, where these two values are the estimates derived from raw and adjusted data, respectively.
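
      The raw-versus-adjusted trend comparison Koutsoyiannis describes amounts to fitting a least-squares slope to each series and differencing them. A minimal sketch with made-up numbers (the 0.4 and 0.7 C/century figures are taken from the quote above; the series themselves are synthetic):

```python
def linear_trend(values):
    """Ordinary least-squares slope of a series against 0..n-1,
    in units per time step."""
    n = len(values)
    xbar = (n - 1) / 2.0
    ybar = sum(values) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(values))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Synthetic century-long annual series: "raw" warming at 0.4 C/century,
# "adjusted" at 0.7 C/century (the range quoted above).
raw      = [15.0 + 0.004 * yr for yr in range(100)]
adjusted = [15.0 + 0.007 * yr for yr in range(100)]
print(round(100 * linear_trend(raw), 2))       # prints 0.4 (C/century)
print(round(100 * linear_trend(adjusted), 2))  # prints 0.7 (C/century)
```

      Running this per station and tallying the sign of the adjusted-minus-raw difference is essentially the two-thirds-of-cases test described in the quote.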

      In Australia the Della-Marta et al 2004 study of the then HQ network concluded this:

      Despite the suitability of the dataset for national and regional-scale analyses, any individual station record within the dataset should still be treated with caution. The subjectivity inherently involved in the homogeneity process means that two different adjustment schemes will not necessarily result in the same homogeneity adjustments being calculated for individual records. However, if the overall biases of the different approaches are neutral then spatial averages should be highly consistent across a large number of homogenised records. Also, even though a subjective assessment of the likelihood of urban contamination within these records has been made, further work needs to be undertaken to better understand the relative contribution of urban warming to the temperature records of small Australian towns.

      As has been pointed out repeatedly to Adam the overall bias of the homogenisation is NOT neutral.

      Really, Adam, you and BOM are a joke; the only problem is no one is laughing!


      • #
        Tristan

        The difference between the trends of the raw and the homogenized data is often very large.

        The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4C and 0.7C, where these two values are the estimates derived from raw and adjusted data, respectively.

        Casts doubts how? None of the above says anything about the veracity of the adjustments which are:

        A) Documented

        and

        B) Replicable

        Still waiting for McIntyre’s analysis of all the HadCRUT data he received nearly a year ago.

          Keep in mind that the fuss skeptics make over temperature records is quite odd, given that their pet record, UAH, and their most hated record, GISS, are well within each other’s error bars when it comes to estimation of temperature trend.


        • #
          cohenite

          B) Replicable

          No, read Della-Marta.

          UAH, GISS and other friends: very strange

          Even stranger.

          Explain that statistical whiz.


        • #
          Tristan

          If that site provided error bars you’d see that all trends fall well within each other’s range.

          The error bars are pretty massive (especially for the satellite records) for that time period because it contains two bigass el-ninos.

          As for Della-Marta, I see no issue wrt replicability. Two different analysts will get two different results, which may well be different at a station to station level but will tend to insignificance once viewed at a regional or national level. Law of large numbers and all that.


          • #
            KinkyKeith

            You cannot remove, process, pasteurize or homogenise any data relating to El Ninos, whether they are “big ass” or not.

            That is SELECTIVE EDITING OF DATA.

            Put bluntly.

            SCIENTISTS DON’T DO THAT SORT OF THING.

            It is the domain of politicians to edit the reality we inhabit.

            KK


          • #
            cohenite

            The law of large numbers has been tried to be transferred to climate and weather; it’s called teleconnection; Steig relied on it in his famous Antarctica paper; it’s garbage.


          • #
            BobC

            Tristan
            July 18, 2012 at 11:27 pm · Reply

            As for Della-Marta, I see no issue wrt replicability. Two different analysts will get two different results, which may well be different at a station to station level…

            So, you’re using a different definition of “replicability” than the rest of us?


    • #
      BobC

      Adam Smith
      July 18, 2012 at 1:47 pm · Reply

      “BobC
      July 18, 2012 at 2:27 am
      Real statisticians strongly disagree that “homogenized” data is more scientifically significant than raw data.

      This is just a completely absurd claim!

      So, read Brigg’s explanation and tell me what part of it is “absurd”. All you have are strong opinions, no real arguments.

      You simply can’t assume that ALL of these variables magically didn’t effect the raw data in anyway and expect to be able to make any scientifically valid claims for it!

      No one assumes any such thing. You seem to have a reading comprehension problem.

      “The short conclusion is that any homogenization technique increases the uncertainty in the data going forward

      Sorry mate, but how can it be less certain than simply leaving all the variables that you are aware of unaccounted for?

      It’s really pretty simple, Smith: The effects of the variables you list are not precisely known — all homogenization algorithms attempt to detect and reduce them, without exact knowledge of what or how large they are. The extra uncertainty is introduced by the algorithms. This is established in empirical testing. Your opinion, however loudly and often repeated, is irrelevant compared with actual testing.

      Of course, if the errors in the data were exactly known, they could be removed and the data would be exactly correct. That this can’t happen in the real world is obvious.

      Your knowledge of “basic science” seems to be less than “basic”.


      • #
        Rereke Whakaaro

        Go easy on Adam, Bob.

        He is a construct of the post-modern education system where thinking paradigms are more reliant on intuition and imposed moral frameworks than on logic and empirical observation.


        • #
          BobC

          Sorry if Smith ends up under the bus, Rereke; but I’m really writing for those lurkers who might be fooled into thinking he knows anything, or has some valid point.


        • #
          Adam Smith

          Sorry mate, but clearly your education is of a much lower standard than mine because I can actually engage with ideas I don’t agree with.

          When I asked you to have a look at some videos that point out all the errors on Monckton’s understanding of climate you just ignored that post.

          Because you are only a sceptic about things you generally disagree with. As soon as someone says something that you hope is true you stop being a sceptic.

          Your final assertion that the only things worth knowing about are those that can be subject to “empirical observation” is completely hilarious, especially when you somehow relate that to “moral frameworks”.

          How many units of liberty equals how many units of equality mate?

          Your assertion of empiricism relating directly to morality simply demonstrates how poorly educated you are, but what is worse is you don’t seem to know how much of a philosophical howler that statement is!

            (Then you avoided a direct reply to BobC’s 67.3 post because it was too hard for you to answer. Why did you dodge it, Adam? Could it be because you are too busy trying to look smart?) CTS

          (I am now at the point where I am going to start snipping out your baseless crap and off topic comments) CTS


          • #
            Rereke Whakaaro

            clearly your education is of a much lower standard than mine

            I personally doubt that. And as I have said before, I don’t put much credence on degrees or diplomas or certificates – been there, done that, don’t mean anything at the end of the day – it is what people do, it is what they achieve, that matters – especially since those pieces of paper have become so devalued. Has it not occurred to you, that when people refer to you as “Doctor Smith”, they might just be taking the piss?

            I can actually engage with ideas I don’t agree with.

            Then prove it. You can start by telling us what you would expect the sceptical response would be to your position on any item: present both sides of the argument, and then say why you accept one and discount the other. And then let us know what additional information would cause you to reverse your position.

            When I asked you to have a look at some videos … you just ignored that post.

            I didn’t even bother reading your comment. When you appeared willing to die in a ditch over the word “discipline”, I just gave up trying to engage with you in an adult way. The only reason I am responding now, is because your comment directly follows one by BobC, who was responding to a comment of mine. In that comment I was talking about you, but not to you.

            …you are only a sceptic about things you generally disagree with. As soon as someone says something that you hope is true you stop being a sceptic.

            That is another statement that you are going to have to prove, if you want to keep what little credibility you still have. I will be interested in your rationale, if you have one.

            Your assertion of empiricism relating directly to morality simply demonstrates how poorly educated you are, but what is worse is you don’t seem to know how much of a philosophical howler that statement is!

            Well, you lose big time on that, because a) you have misread what I wrote – I did not mention morality, and b) in the way you have responded, you have demonstrated the very point I was trying to make.


          • #
            BobC

            Smith: Now that you have started with the ad hom attacks, does that mean (by your own standards) that you have lost the argument?


      • #
        Adam Smith

        It’s really pretty simple, Smith: The effects of the variables you list are not precisely known

        What a weak as piss statement mate. Just because they aren’t “precisely known” doesn’t mean they are unknown.

        Your final result sure has more precision after you account for the variables than if you just leave them unaccounted for! This is absolute basic science!!!

        — all homogenization algorithms attempt to detect and reduce them, without exact knowledge of what or how large they are.

        More bullshit mate! Here’s just one example. We have careful designs for the old and new weather station enclosures, so we can run an experiment on them to see how the enclosures affect temps! In fact we have a WORKING EXAMPLE of this because different enclosure types were used concurrently at Adelaide Airport for 60 years, so we can compare the temps against each other to see how the different enclosure types ‘biased’ the data one way or the other over a long period of time under a range of conditions!

        You then take the information learned from that and apply it to other stations with different enclosure types!

        Of course this isn’t PERFECT, it would be great if we used exactly the same type of enclosure everywhere for 100 years. But we can use some science and reason to make a rational adjustment of the data at hand.
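
        The parallel-observation idea described here reduces to estimating an offset from the overlap period of two co-located series and applying it to the older record. This is only a sketch of that general technique with made-up readings, not the BOM’s actual adjustment algorithm:

```python
def overlap_offset(new_series, old_series):
    """Mean (new - old) difference over the days both enclosures reported.
    Series are dicts mapping date -> temperature."""
    common = new_series.keys() & old_series.keys()
    return sum(new_series[d] - old_series[d] for d in common) / len(common)

# Made-up co-located readings during an overlap period:
new_box = {"d1": 20.0, "d2": 22.0, "d3": 18.5}
old_box = {"d1": 19.5, "d2": 21.4, "d3": 18.1}

offset = overlap_offset(new_box, old_box)  # mean difference = 0.5
# The old record can then be shifted onto the new instrument's basis:
adjusted_old = {d: t + offset for d, t in old_box.items()}
```

        A single mean offset is the crudest possible version; real schemes also have to ask whether the difference varies by season and weather conditions, which is where the disagreement in this thread lies.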

        The extra uncertainty is introduced by the algorithms. This is established in empirical testing . Your opinion, however loudly and often repeated, is irrelevant compared with actual testing.

        This is nonsense as well, because we don’t have a time machine mate! You can’t go back to 1910 and set up an auto thermometer that wasn’t introduced until the 1990s and see how this affects the data you got back in 1910. So instead you do an experiment so you learn about how the automatic thermometer varies against the old manual thermometer, and you then use that info to adjust the data!

        That’s SCIENCE mate! You are basically asserting that your limited understanding of science means that people who know about science are wrong!

        Of course, if the errors in the data were exactly known, they could be removed and the data would be exactly correct. That this can’t happen in the real world is obvious.

        This is just more misleading nonsense mate because you used the term “exactly known”. Of course we don’t “exactly know” the ‘right’ answer. But we use science, and reason, and experimentation, to make the best adjustments we can. If we find out new info in the future, then that can be added to the mix to produce better adjustments.

        But just because we don’t know the absolute precise answer doesn’t mean we have absolutely no idea what adjustments to make!

        Your knowledge of “basic science” seems to be less than “basic”.

        Sorry mate, but your repeated assertion that since we don’t know the precise and absolute perfect answer means we don’t know the answer at all is just idiotic. It defies basic logic let alone a basic understanding of science.


        • #
          BobC

          Someday Adam, we would like to see a logical argument from you. You seem to think that an unsupported claim followed by multiple exclamation marks is an argument.


        • #
          BobC

          Smith, you’re starting to look like the [snip] with your persistent inability to understand the point of this post:

          While you go on and on (unsupported by any evidence) about how great the BOM’s adjustment procedures are, the fact remains that it has been demonstrated that they are unable to correct the simplest, most easily found error — Tmin > Tmax.

          Despite claiming that they do correct for these errors, independent screening has shown that they actually increase the number of them.
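The screening at issue, flagging any day whose recorded minimum exceeds its maximum, is trivial to express; a minimal sketch with invented records:

```python
# Internal-consistency check: a day's minimum can never exceed its maximum.
records = [
    {"date": "1953-06-01", "tmin": 8.2, "tmax": 17.5},
    {"date": "1953-06-02", "tmin": 19.0, "tmax": 12.4},  # impossible: min > max
    {"date": "1953-06-03", "tmin": 7.1, "tmax": 15.0},
]

violations = [r for r in records if r["tmin"] > r["tmax"]]
```

That a check this simple can be run by any outsider in an afternoon is what makes the reported failures so striking.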

          And what evidence do you produce for the claim that their adjustment procedures improve the data? Well, none actually — apparently you think that following unsupported claims with multiple exclamation marks constitutes evidence, and stubbornly refusing to acknowledge the point of the post means it will go away.

          [snip] indeed.

          ———————————————
          [Your point may be well made, but try not to let your frustration lead to name calling. – Mod]


    • #
      BobC

      Adam Smith
      July 18, 2012 at 1:47 pm

      You seem to just be choosing to ignore all the careful work that has gone in to reduce uncertainty as much as possible. If you bothered reading this document you would understand that:
      http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

      When one reads the linked document, however, one finds that it doesn’t support much of anything Smith claims (as cohenite has already shown, here).

      Let’s look at a few examples of the document vs. Adam Smith:

      Smith claims:

      But it is astonishing for you to claim that you want to reduce uncertainty when you are saying that it is better to just IGNORE all the effect of all the variables! That will make you very certain of coming to conclusions that are scientific nonsense.

      However, the document claims (page 1):

      Site specific inhomogeneities have only a marginal impact on observed temperature trends at the global scale because they tend to cancel each other out when averaged across large numbers of stations

      And, again on page 36:

      Experience with the analysis of data has highlighted that most errors in temperature variables tend to be quite random, so that their impact in aggregate when averaging across many sites is very small and can be effectively ignored.

      So, the BOM implies that the overall climate trend ought to be detectable in the raw data, as long as enough stations are used — and if the homogenization procedure increases that overall trend (as it has been shown to do here), then that should be evidence that the homogenization has introduced a bias.
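The "random errors cancel in aggregate" claim the BOM is relying on is just the standard-error rule: the uncertainty of a network mean shrinks as one over the square root of the station count. A one-line sketch, with an assumed per-station noise figure:

```python
import math

# Standard error of the mean of n independent readings with per-station
# noise sigma shrinks as sigma / sqrt(n). This is why *random* errors
# matter little in a large network average; systematic ones do not cancel.
sigma = 1.0  # assumed per-station noise, degrees C
stderrs = {n: sigma / math.sqrt(n) for n in (1, 100, 10000)}
```

Note this argument only holds for errors with no preferred direction, which is exactly the sticking point with effects like UHI.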

      (Of course, anyone who thinks that inhomogeneities that, in practice, work in one direction only — such as the Urban Heat Island effect — “average themselves out” might want to try Dogbert’s investment advice.)

      Now Smith exclaims:

      You MUST account for these variables before coming to any conclusions from the data else you simply may be analysing the ‘noise’ caused by the variables rather than actual changes (the ‘signal’) in the climate over that period!

      But, the BOM admits that these variables are largely unknown in both magnitude and time (page 28):

      Metadata are often incomplete or missing, and can be open to interpretation. This is especially true of metadata that are relevant to a large part of the network at the same time, for reasons discussed in the previous section. It is also an increasing problem as one goes further back in time.

      Breezing past these difficulties, Smith proclaims:

      This is just basic elementary science. You need to account for variables so you can be confident that your conclusions are actually measuring the variable you are interested in, i.e. changes in the climate over the period in question.

      So, tell me Smith: How does one “account” for unknown (or poorly known) variables and arrive at the pure climate data underneath? A fabulous career in science awaits your explication and solution of this problem — which faces everyone who honestly tries to measure data about the world.

      The BOM’s solution, apparently, is to use their own judgement as to what does and does not constitute “climate data”:

      Optimisation of the network therefore requires a blend of objective and heuristic methods of selection and treatment, to ensure that the best possible data sets are selected.

      The “heuristic” part of this apparently involves things like using “local climatic knowledge” to reject “anomalously low temperatures” (page 39).

      Basically, the BOM document Smith so admires is a collection of fuzzy requirements based largely on judgement calls, with nary a definable (hence codeable) algorithm in sight. No wonder they don’t want to release their code.

      Of course, it never occurs to them to test their methods (however fuzzy) on synthesized data, say, to see if they really make the data any better.
      But that would be “basic science” — and we already know the BOM (and Smith) know nothing about it.



      • #
        Tristan

        BOM implies that the overall climate trend ought to be detectable in the raw data, as long as enough stations are used

        Not quite. A subset of artifacts in the data (Site specific inhomogeneities) do exhibit this property. Systemic artifacts, such as changes in the process of observing temperature itself (new tech/methods) or the location (urbanisation/moving the station) are the important ones to control for.

        How does one “account” for unknown (or poorly known) variables and arrive at the pure climate data underneath?

        Basically no such thing as pure data when it comes to most types of observations. Good statistical practice assumes that data corruption exists. As long as the corruption is uniformly distributed, all it does is increase the confidence interval of the estimate. When the corruption is not uniformly distributed (determined by testing the anomaly vs each variable) there is an issue that needs to be addressed.
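The "test the anomaly vs each variable" step Tristan describes can be sketched with invented numbers; a real check would use a proper significance test rather than a raw difference of means:

```python
import statistics

# Hypothetical residuals and a 0/1 "urban" flag for six sites (invented).
residuals = [0.4, 0.5, -0.1, 0.0, 0.45, -0.05]
urban = [1, 1, 0, 0, 1, 0]

# If corruption were uniform, urban and rural residual means should agree;
# a clear gap flags non-uniform corruption that needs addressing.
mean_urban = statistics.mean(r for r, u in zip(residuals, urban) if u)
mean_rural = statistics.mean(r for r, u in zip(residuals, urban) if not u)
gap = mean_urban - mean_rural
```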

        the “heuristic” part of this apparently involves things like using “local climatic knowledge” to reject “anomalously low temperatures”

        Standard practice for many sorts of observational studies. It’s basically just a ‘sanity check’ that can identify particular sorts of errors. A lot of data examination is arbitrary; it’s unavoidable. You might call -8C anomalously low, I might draw the line at -5C. That’s not as important as the fact that we’d both spot -60C and say ‘that doesn’t make sense’.


        • #
          BobC

          Fairly sensible comment, Tristan.

          As you point out, the problem of data corruption is universal (no “perfect” data exists), and the problem of dealing with it is far more complicated than Adam Smith seems to realize (which is why I think he never has dealt with it, and is just pretending expertise).

          “the “heuristic” part of this apparently involves things like using “local climatic knowledge” to reject “anomalously low temperatures”

          Standard practice for many sorts of observational studies. It’s basically just a ‘sanity check’ that can identify particular sorts of errors. A lot of data examination is arbitrary, its unavoidable. You might call -8C anomalously low, I might draw the line at -5C. That’s not as important as the fact that we’d both spot -60C and say ‘that doesn’t make sense’.

          Agreed — but if your heuristics involve judgement calls based on shifting standards determined on the spot by different individuals (like the BOM’s, and your example above), then you don’t have a testable scientific procedure, and cannot claim that the data is thereby improved.

          If you explicitly define your standards such that the adjustments are repeatable, you still can’t claim to have improved the data without testing (often done on randomly produced synthetic data, using both a model of the data and models of the known and suspected corrupting variables). Even then, testing might show that the adjustment procedures make the corruption worse, not better.
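The synthetic-data test described above can be sketched as follows. Everything here is invented for illustration, and this toy "adjustment" is told where the step is, whereas a real procedure would have to detect it:

```python
# Build a series with a known trend plus a known step artefact, remove
# the step, then check the recovered trend against the truth.
true_slope = 0.01   # degrees C per year (assumed)
n_years = 100
step_year, step_size = 50, 0.8   # artificial discontinuity to inject

raw = [true_slope * t + (step_size if t >= step_year else 0.0)
       for t in range(n_years)]

# "Adjustment": remove the known step (stand-in for a detection algorithm)
adjusted = [x - (step_size if t >= step_year else 0.0)
            for t, x in enumerate(raw)]

def slope(ys):
    """Ordinary least-squares slope of ys against 0..n-1."""
    n = len(ys)
    xm = (n - 1) / 2
    ym = sum(ys) / n
    num = sum((x - xm) * (y - ym) for x, y in enumerate(ys))
    den = sum((x - xm) ** 2 for x in range(n))
    return num / den

recovered = slope(adjusted)
```

Because the truth is known, one can quantify exactly how much trend the uncorrected step would have added, and how much residual error any candidate correction leaves behind.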

          And, even if you succeed in removing some of the corrupted data, you make the uncertainty in the (hidden) “real” data greater by adding the uncertainty of the homogenization algorithms, as statistician William Briggs points out.

          *******************

          BOM’s adjustment procedures are rarely specified enough to be repeatable, and they show no interest in testing (or even acknowledging that it needs to be done).

          In one instance where they have specified a repeatable test — screening for Tmin > Tmax — independent repetition of that test has shown that they failed to remove these errors, and even increased their frequency. This is prima facie evidence that their “homogenized” data is less likely to represent the true climate signal than the raw data.


          • #
            Tristan

            but if your heuristics involve judgement calls based on shifting standards determined on the spot by different individuals (like the BOM’s, and your example above), then you don’t have a testable scientific procedure,

            The procedure is repeatable as long as you document the changes.

            you still can’t claim to have improved the data without testing (often done on randomly produced synthetic data using both a model of the data and models of the known and suspected corrupting variables). Even then, testing might show that the adjustment procedures make the corruption worse, not better.

            It’s actually pretty easy to check whether or not data has been ‘improved’, i.e.: before adjustments, the errors are not Gaussian-distributed, so you can’t perform a robust parametric analysis; after adjustments, the errors are Gaussian-distributed, so you can.
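A toy version of that before/after check, using a simple skewness statistic as the normality diagnostic. The data are invented; real practice would use a formal test such as Shapiro–Wilk:

```python
import statistics

def skewness(xs):
    # Standardised third moment: 0 for symmetric data, large when skewed.
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

before = [-0.2, -0.1, 0.0, 0.1, 3.0]  # one gross error skews the residuals
after = [-0.2, -0.1, 0.0, 0.1, 0.2]   # error corrected: symmetric again
```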

            you make the uncertainty in the (hidden) “real” data greater by adding the uncertainty of the homogenization algorithms, as statistician William Briggs points out.

            Absolutely. However, as long as that uncertainty is expected to follow a Gaussian distribution, the size of the errors of the individual records becomes less and less relevant as more and more records are used. In the instance of GMST, the errors in the individual data are all but removed thanks to the number of temperature stations being used.


          • #
            cohenite

            Before adjustments, errors not a Gaussian distribution, can’t perform robust parametric analysis; After adjustments, errors are a Gaussian distribution, can perform a robust statistical analysis.

            That is one test for the validity of homogenisation; but in any event ACORN does not achieve it. Adam gleefully notes above that:

            HAHAHAHAHAHAH so you just make up some non existent 40% figure because you’ve run out of argument.

            If you bothered to read the BOM report you’d find a summary of the adjustments on Table 6 of page 62:

            Number of positive adjustments 153 (49%) 154 (46%)
            Number of negative adjustments 160 (51%) 184 (54%)

            If you then look at the chart at the top of page 63 it shows that MOST of the largest NEGATIVE adjustments for both the min and max temp occurred over the last 30 years!

            This completely misses the point as page 6 of the Audit Application shows.

            The graph on page 6 shows the differences in trend caused by the adjustments; it does not matter if there is an equivalence of +ve and -ve adjustments; what is crucial is whether the effect on the trend is neutral. HQ did not achieve this and neither, apparently, has ACORN.
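This point can be demonstrated numerically: an exactly 50/50 split of positive and negative adjustments can still manufacture a trend if the negatives fall early in the record and the positives late. A toy sketch with invented numbers:

```python
# A perfectly flat raw series, adjusted with a 50/50 split of -0.3 and
# +0.3 adjustments: negatives early, positives late. The counts balance,
# yet the adjustments add a warming trend out of nothing.
n = 40
flat = [15.0] * n

adjustments = [-0.3] * (n // 2) + [0.3] * (n // 2)
adjusted = [t + a for t, a in zip(flat, adjustments)]

def slope(ys):
    """Ordinary least-squares slope of ys against 0..m-1."""
    m = len(ys)
    xm = (m - 1) / 2
    ym = sum(ys) / m
    return (sum((x - xm) * (y - ym) for x, y in enumerate(ys))
            / sum((x - xm) ** 2 for x in range(m)))

trend_added = slope(adjusted) - slope(flat)
```

So a table of adjustment counts, like Table 6, says nothing by itself about trend neutrality; the timing of the adjustments is what matters.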


          • #
            Tristan

            it does not matter if there is an equivalence in +ve and -ve adjustments; what is crucial is that the effect on trend is neutral.

            Given that the adjustments are about preserving the trend from various sorts of errors, to say that they must have a neutral effect on the trend is like saying ‘you can only make adjustments if they don’t do anything’.


          • #
            cohenite

            Given that the adjustments are about preserving the trend from various sorts of errors, to say that they must have a neutral effect on the trend is like saying ‘you can only make adjustments if they don’t do anything’.

            Tristan, on the face of it, you and Adam appear to be, wilfully or otherwise, misrepresenting what BOM have done in ACORN adjustments.

            Pages 62 and 63 of the ACORN technical manual are relevant.

            For purposes of true adjustment neutrality, the equality between -ve and +ve adjustments across the whole set of temperature sites is not what matters.

            The crucial point is whether those adjustments are neutral at the particular sites. Table 6 and Figure 19 do not tell us whether the equality between -ve and +ve adjustments produced an equality of trend. That is because a particular site can be overall -vely adjusted but still have a +ve trend produced by the adjustment and, to a lesser extent, vice-versa; overall, the trend has been increased or made +ve by the adjustments.

            In other words, the adjustments have created a part of the trend. That is wrong.


          • #
            Tristan

            Hang on, you’re saying that 760 adjustments, most of which are less than +/- 1C, in a set of 7 million data points, introduce a trend?


          • #
            cohenite

            Don’t be disingenuous, Tristan; something produced a trend which wasn’t in the raw data.

            I’m not sure where you get 760 from.


          • #
            Tristan

            The pages you referenced. Graphs of 760 adjustments.


          • #
            cohenite

            You mean 660, not 760.

            And even the 660 makes no sense in respect of the table on page 62, because the numbers of adjustments per category add up to far more than the max and min totals; so some adjustments must have had more than one reason, which would suggest that some of them may have had competing reasons for a +ve and a -ve adjustment.

            It’s a schmozzle.


          • #
            BobC

            Tristan
            July 20, 2012 at 8:57 am

            “but if your heuristics involve judgement calls based on shifting standards determined on the spot by different individuals (like the BOM’s, and your example above), then you don’t have a testable scientific procedure,

            The procedure is repeatable as long as you document the changes.

            You’re seriously missing the point here, Tristan. A well-specified algorithm can be applied to multiple data sets. In particular, it can be applied to synthetic data sets in order to explore its effects.

            A heuristic procedure that depends on judgement calls based on the particular data at hand cannot be meaningfully repeated on a different set of data. Since the process changes in an unspecified way for each data set (being dependent on a person’s judgements about that specific data), you have no way to test it for its ability to correct errors or add bias.

            The best you could do, in this case, is to study a particular person’s modifications of various synthetic data sets and try to extract a definable algorithm that closely matches that person’s judgement. Then, you can test that algorithm for bias, etc. This is the classic problem faced by anyone trying to automate a procedure that is judgement-heavy, such as pap smear screening. It is always hard, and not always even possible.

            There is no indication in their document that the BOM even contemplated doing any such thing.


          • #
            BobC

            Tristan
            July 20, 2012 at 8:57 am

            It’s actually pretty easy to check whether or not data has been ‘improved’. ie Before adjustments, errors not a Gaussian distribution, can’t perform robust parametric analysis; After adjustments, errors are a Gaussian distribution, can perform a robust statistical analysis.

            It may be easy, but it’s not necessarily correct. You are applying a model here (the assumption of Gaussian distribution of errors) which, if not true (say, due to UHI) will itself introduce a bias, as you admit:

            …unless that uncertainty would not be expected to follow a Gaussian distribution, the size of the errors of the individual records becomes less and less relevant as more and more records are used.

            Let us suppose, however, that you are correct and, as you claim:

            In the instance of GMST, the errors in the individual data are all but removed thanks to the number of temperature stations being used.

            So, if we assume that the year-to-year temperature changes are accurately measured (because the thermometer errors are normally distributed and have been averaged to irrelevance), what does that tell us about any detected trend?

            Not that much, it turns out:

            Here is an analysis of the statistics of global temperature changes, year-to-year. The changes (from NASA-GISS published data since 1850) are nearly randomly distributed. The second graph in this analysis plots a new “temperature” graph of 200 points generated by a similar (but truly random) distribution of year-to-year changes each time the page is refreshed. It is interesting that, almost always, the “temperature” graph appears to have a significant trend.
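The linked demonstration is easy to reproduce in miniature (all parameters invented): cumulate trendless random year-to-year changes and count how often the resulting "temperature" series drifts well away from zero.

```python
import random

random.seed(0)  # fixed so the sketch is reproducible

def fake_temperature_series(n_years=150, sigma=0.1):
    """Cumulative sum of random annual changes: no built-in trend."""
    temps, t = [], 0.0
    for _ in range(n_years):
        t += random.gauss(0.0, sigma)
        temps.append(t)
    return temps

# Fraction of trials whose end-to-end drift exceeds 0.5 "degrees"
drifts = [abs(fake_temperature_series()[-1]) for _ in range(200)]
frac_large = sum(d > 0.5 for d in drifts) / len(drifts)
```

With these invented parameters, most runs drift more than half a degree despite having no trend at all, which is the visual illusion the linked page exploits.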

            A detailed analysis of the reported global temperature changes shows that the distribution is not normal, but deviates in several essential ways that allow a calculation of the most probable underlying temperature trend.

            I quote from the conclusion:

            Although the distribution of annual changes in global average temperatures is close to a normal distribution it differs from a normal distribution in two essential ways. First it is skewed to the right. Second, because of this skewness it does not have a finite standard deviation and thus any sample estimates of the standard deviation of annual changes is meaningless. The expected increase in average global temperature from the analysis is 0.31°C per century. This is of the same order of magnitude of other empirical estimates of the trend in average global temperature.

            and that is after NASA’s strenuous efforts to introduce the most positive trend that they can achieve.

            Perhaps it’s not surprising that government-funded climate scientists don’t seem to be very good at statistics, when standard statistical analyses tend to show that there is nothing much exceptional about the recent global temperature.


          • #
            Tristan

            It may be easy, but it’s not necessarily correct. You are applying a model here (the assumption of Gaussian distribution of errors) which, if not true (say, due to UHI) will itself introduce a bias, as you admit:

            If I said ‘residuals’ instead of ‘errors’ it’d be more precise.

            Before we adjust for UHIE there’s a correlation between rural/urban and residuals (aka non-Gaussian distribution of residuals or bias). That’s bad.

            After we adjust for UHIE, there’s no longer a correlation between rural/urban and residuals. The residuals are now distributed in a Gaussian manner. That’s good.
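The before/after contrast described here, in miniature, with an invented offset and invented residuals: subtract an assumed urban warm bias from urban sites, then confirm the urban/rural gap in residuals has gone.

```python
import statistics

residuals = [0.6, 0.7, 0.1, 0.0, 0.65, 0.05]  # invented
urban = [1, 1, 0, 0, 1, 0]                     # site classification (assumed)
uhi_offset = 0.6                               # assumed urban warm bias, deg C

adjusted = [r - uhi_offset * u for r, u in zip(residuals, urban)]

def gap(vals):
    # Difference between urban and rural mean residuals
    return (statistics.mean(v for v, u in zip(vals, urban) if u)
            - statistics.mean(v for v, u in zip(vals, urban) if not u))

gap_before, gap_after = gap(residuals), gap(adjusted)
```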


          • #
            Tristan

            The temperature of the Earth’s surface is thermodynamically the cumulative sum of the net heat inflow to it. The question is whether or not the net heat inflow is a random (stochastic) variable.

            No it ain’t. His model assumes that the year to year change in earth’s temperature is ‘heat in – heat out’, which is fundamentally wrong.

            It’s hard to make a sensible claim about climate science when your knowledge of climate science is basically nil, as this fellow shows.


          • #
            BobC

            Tristan
            July 27, 2012 at 5:00 am

            The temperature of the Earth’s surface is thermodynamically the cumulative sum of the net heat inflow to it. The question is whether or not the net heat inflow is a random (stochastic) variable.

            No it ain’t. His model assumes that the year to year change in earth’s temperature is ‘heat in – heat out’, which is fundamentally wrong.

            So, if we’re going to let you say “‘residuals’ instead of ‘errors’”, perhaps we should let his statement mean “related to the total energy flux” instead of “the cumulative sum of the net heat inflow”. At any rate, the accuracy of that peripheral statement has no bearing on the hypotheses that the yearly changes in Global Temperature are or are not randomly Gaussian-distributed — and that it is not possible to extract a meaningful, deterministic trend of sub-degree-per-century size from a series of 150 numbers with approximately the same deviation as seen in the Global Temperature series.


          • #
            Tristan

            I changed error to residual, because the ‘statistical error’ is the unknowable which the residual estimates. I was playing fast and loose with terminology, but it certainly doesn’t alter the meaning of anything I said.

            perhaps we should let his statement mean “related to the total energy flux” instead of “the cumulative sum of the net heat inflow”

            I don’t think that helps his cause. He’s artificially inflating the standard deviation by including an internal oscillation. In other words, he’s not measuring what he thinks he is measuring. If he wants to ascertain whether the earth’s heat content has behaved unpredictably, he needs to measure it appropriately.


          • #
            Tristan

            ENSO fools a lot of people.

            If you’re trying to measure the earth’s heat content via the surface temperature record, you have to control for ENSO (and AoD) if you want meaningful results.


        • #
          Tristan

          Perhaps it’s not surprising that government-funded climate scientists don’t seem to be very good at statistics, when standard statistical analyses tend to show that there is nothing much exceptional about the recent global temperature.

          The guy you linked to doesn’t even bother to test whether the year-to-year temperature changes are independent. Normally you’d need to run a test to do this, but in this case it’s so blatant you can tell they aren’t just by looking at his graph for 5 seconds… a big swing one way is followed by a big swing the other way.

          There’s a good physical reason for that. It’s called ENSO (the El Niño Southern Oscillation). You can’t have 3 El Niños in a row.

          I think you need to be a bit more cautious when you make statements about who is or isn’t good at statistics.
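That eyeball test can be made explicit with a lag-1 autocorrelation of the changes (data invented here, chosen to mimic the swing-and-swing-back pattern):

```python
# Lag-1 autocorrelation of year-to-year changes: strongly negative means
# consecutive changes are NOT independent -- a swing tends to be followed
# by a swing the other way, as with ENSO.
changes = [0.3, -0.25, 0.2, -0.3, 0.28, -0.22, 0.25, -0.27]  # invented

mean = sum(changes) / len(changes)
dev = [c - mean for c in changes]
lag1 = sum(a * b for a, b in zip(dev, dev[1:])) / sum(d * d for d in dev)
```

Independent changes would give a lag-1 value near zero; the alternating pattern above drives it strongly negative.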


          • #
            BobC

            Tristan
            July 27, 2012 at 5:21 am

            I think you need to be a bit more cautious when you make statements about who is or isn’t good at statistics.

            You’re right — I didn’t take a good look at his procedures. I’m probably prejudiced by my research into Michael Mann’s blatant misuse of statistics, and the irrational way he is defended by the ‘team’ (even while they admit he is full of it in ‘private’ emails).


          • #
            Tristan

            Mann 98/99 certainly is a contentious topic.


  • #
    crakar24

    This site

    http://cawcr.gov.au/publications/technicalreports/CTR_049.pdf

    has been used quite a bit of late to defend one’s position, so I had a bit of a read. Firstly, it has been quoted here that the corrections made are split as: number of positive adjustments 153 (49%) and 154 (46%); number of negative adjustments 160 (51%) and 184 (54%). Whilst this is true, the paper does not stipulate the magnitude of said adjustments, so the point is probably moot.

    Also, the list of adjustments and their complexity goes on and on and on. Who would have thought something so simple could be made so difficult? There is not one station that escapes the adjustment treatment, and the reason so many adjustments are made is simply that the temp record is full of situations that WILL create an error in the data. So many errors, so many adjustments.

    I would like to highlight this statement ignored by others which can be found on page 10

    The ideal criteria, that are met at very few places in the world (and none in Australia) for locations used in a climate change analysis include:
    • A long period (preferably 100 years or more) of continuous data with few or no missing observations.
    • No site changes, changes in observation practices or instruments, or significant changes in local site environment.
    • Located well outside any urban areas

    In other words there is not one site in Australia that is suitable to be used for the purpose of studying climate change and yet here are the BOM using over 100 such sites with adjustments too numerous to mention for that exact purpose.

    Little wonder we find this statement at the beginning of the paper

    CSIRO and the Bureau of Meteorology advise that the information contained in this publication
    comprises general statements based on scientific research. The reader is advised and needs to be
    aware that such information may be incomplete or unable to be used in any specific situation. No
    reliance or actions must therefore be made on that information without seeking prior expert
    professional, scientific and technical advice. To the extent permitted by law, CSIRO and the Bureau
    of Meteorology (including each of its employees and consultants) excludes all liability to any person
    for any consequences, including but not limited to all losses, damages, costs, expenses and any other
    compensation, arising directly or indirectly from using this publication (in part or in whole) and any
    information or material contained in it.

    I think the next time a report like this is waved around as some kind of proof we should just ignore it.


  • #
    John Smith101

    Though a regular visitor here I do not often comment, but seeing as I may be a “cyber cousin” to Adam Smith or Smiths, there are a few comments I would like to make.

    Adam, you need to differentiate your lengthy cutting and pasting on homogenisation (and read about it from sources besides the BOM’s version) from temperature adjustments – these are two different processes, and any study of them shows a predominant positive bias in surface temperatures over latter decades compared to earlier (pre-1950s) decades. This applies to many data sets (not just the BOM’s), including Gisstemp, HadCrut, etc. Your homogenised “pasting” is (mostly) correct in theory, though questions remain as to its accuracy – peer-reviewed techniques notwithstanding – and as to how homogenisation is applied, in particular with regard to the BOM dataset but with other datasets as well. Yet for some reason you make no mention of adjustments, which a number of other posters have alluded to in the above postings. Why is this? Hint: adjustments can refer to flux adjustments and parameterisations used in climate modelling (where temperature and other variables are adjusted to make the model “fit” observed data) – there are other meanings as well, which I leave to you to research.

    Adam, you obviously have the ability to quickly research and partially process information, but your views are partisan (and prejudicial – I assume you do not personally know Twiggy Forrest or Gina Reinhardt), so you currently only make for a good propagandist, which I suspect is your aim here given your hubris. What I doubt is whether you actually understand what it is you are talking about. To have knowledge, and to be able to synthesise that knowledge, generally comes with age and experience. Until you can show that you have that understanding, I think the moderators should limit your excessive postings.

    [good post John. I’ll keep trying] ED


    • #
      KinkyKeith

      It’s Time!


    • #
      Adam Smith

      Why is this? Hint: adjustments can refer to flux adjustments and parameterisations used in climate modelling (where temperature and other variables are adjusted to make the model “fit” observed data) – there are other meanings as well, which I leave to you to research.

      This is just irrelevant to the discussion mate.

      Read the bloody document for yourself. The BOM doesn’t apply any adjustments during the homogenisation process to make the data fit a particular climate model!

      All it did was adjust to account for things like changing equipment, changing location of weather stations, encroachment of development on existing weather stations, the transition to different reporting methodologies, the potential introduction of rounding errors, potential errors introduced when Australia shifted to metric measurement, etc., etc.

      Your assertion that they somehow fit the data to some pre-existing climate model is completely without merit, this is obvious because you didn’t even bother referencing this claim!

      Basically you are just too lazy to actually read and learn about how the data was homogenised, but then you expect me to treat you as if you are an expert on what was done to it!


      • #
        Andrew Barnham

        Adam

        Homogenization is an admission that empirical data is imperfect, contains unwanted signal and requires additional treatment before further analysis. The proverbial ‘polishing a turd’.

        Homogenization would be all well and good if we have 100% confidence that the process of homogenization perfectly removed all unwanted influences without error.

        A key problem in my mind with the process of homogenization is that it is not an objective automated process. It involves analysts making judgement calls about how much to tweak and where. The ‘statistical tools’ used to help are not particularly restrictive at all; i.e. temperature data is so noisy and variable that techniques such as linear-regression breakpoint analysis provide multiple points of opportunity for tweakage across a temperature timeline, each addressed in an arbitrary way.

        I have had in my mind for some time now a thought experiment: to somehow ‘double-blind’ the process of homogenization. I would give a group of BOM homogenizers some temperature data, randomly flip some of the station data right to left without informing them that I had done so, ask them to homogenize it, and then compare trend effects to see if the analysts are inadvertently biasing towards chasing rising slopes. I think the results would be very interesting. (Not an easy exercise to carry out though, for reasons I won’t cover here at this time.)
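The blinding step of that thought experiment can be sketched with synthetic data: secretly reverse a random half of the series in time before handing them out; after homogenisation one would un-flip them and compare the trend effects on flipped versus unflipped stations.

```python
import random

random.seed(1)

# Six synthetic stations: a small warming trend plus noise (invented).
stations = {f"station_{i}": [15.0 + 0.01 * t + random.gauss(0, 0.5)
                             for t in range(100)]
            for i in range(6)}

# Blind step: secretly reverse half the series in time.
flipped = set(random.sample(sorted(stations), k=3))
blinded = {name: (series[::-1] if name in flipped else series)
           for name, series in stations.items()}
```

An unbiased homogenisation should, after un-flipping, change flipped and unflipped trends symmetrically; a systematic preference for rising slopes would show up as an asymmetry.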

        00

        • #
          KinkyKeith

          Excellent comment Andrew.

          The idea of a double blind is great and really if this sort of processing is going to be done then that is the solution.

          Easy to do, but time-consuming, and what group of BOM’ers is going to mess around when they know they are under scrutiny?

          It has to be done secretly.

          KK

          00

          • #
            Adam Smith

            Easy to do, but time-consuming, and what group of BOM’ers is going to mess around when they know they are under scrutiny?

            It has to be done secretly.

            This makes no sense because the BOM’s procedures were peer reviewed twice, both by an Australian group and by an international group.

            When are you making your detail explanation of the flaws in the BOM’s approach public? Or are you keeping that a secret?

            00

          • #
            KinkyKeith

            If they were peer reviewed twice, why was that?

            Wasn’t the first PEER REVIEW accurate?

            Doesn’t say much for peer review!

            00

        • #
          Adam Smith

          Adam

          Homogenization is an admission that empirical data is imperfect…

          Well of course it is ‘imperfect’ in the sense it was collated over a long period of time using different equipment, methods and procedures! We simply can’t go back in time and re-record all the temps using the same equipment and procedures!

          Those are the variables that must be accounted for before you can look at any trends in the data with any confidence. If you don’t do that, then you just have no idea what caused the trend: an actual change in the climate, or one or more of the variables.

          This is BASIC SCIENCE! You need to account for variables so you know that any statistically significant relationships are caused by the variable you are interested in, which in this case is the climate across Australia.


          ..contains unwanted signal and requires additional treatment before further analysis. The proverbial ‘polishing a turd’.

          WRONG! The variables are unwanted NOISE, they aren’t signal. The signal is the variable you are trying to study, i.e. the climate. The other variables, e.g. equipment changes, reporting / monitoring changes, add NOISE to the signal.

          Homogenization would be all well and good if we have 100% confidence that the process of homogenization perfectly removed all unwanted influences without error.

          This attitude is NOT SCIENTIFIC! Just because we can’t be 100% confident of something doesn’t mean we can’t have any confidence at all!

          If you read the BOM document they actually document a range of statistical approaches they consider but reject. And the reason they reject some and accept others is because some approaches create too much uncertainty!

          So you don’t seem to realise that the BOM itself is extremely careful to ensure confidence in the data, in a statistical sense, is kept as high as possible. But of course they can’t have 100% confidence. The only way they could get 100% confidence that their homogenisation process was perfectly correct is if they had a time machine and could go back to 1910 and set up exactly the same equipment that is used now at every weather station so they could test their homogenisation process against actual experimentation.

          But they can’t do that! All they can do is use science, and reason, and statistical techniques to come up with the absolute best homogenisation process possible given all the information they have.

          But what is the alternate approach? If you don’t homogenise the data at all because you are worried about a lack of 100% certainty, then you can be certain that you can’t make ANY scientifically valid claims at all, because you have no idea of the cause of any trends in the data. Are the trends caused by the signal or the noise, or a bit of both?

          A key problem in my mind with process of homogenization is that it is not an objective automated process. It involves analysts making judgement calls about how much to tweak and where.

          I think you have confused “objectivity” and “automation”. If you set up a programme to just add 2 degrees to all the raw temps, that would be automated, but it wouldn’t be objective!

          I agree that it requires judgement calls. But simply not homogenising the data would be a very bad judgement call in itself, because it means leaving a dozen variables unaccounted for.

          The fact is we can make good judgement calls based on evidence. We can do experiments on old and new enclosures to see how they biased the data this way or that, and in fact we have a REAL-LIFE experiment, because Adelaide Airport has had two weather stations of different designs in use for 60 years, so you can start to draw conclusions about how different enclosures affect the results.

          We can test old thermometers against new; we can look at data about the distance of the station enclosure to the nearest structure and see how that biases the data, and thus learn from it and apply that knowledge back to other stations. Since 1910 the BOM has used 3 different reporting standards; the current one has been in use everywhere since the early 1970s, so again we can look at trends in those 3 periods to see how reporting procedures (i.e. the time of day measurements were taken) affected the results.

          And of course in science there’s never 100% of anything. Some new piece of data or information may come along which will change our thinking. Someone could come up with a better explanation of how enclosures affect temps, which can form part of a new analysis.

          Being 100% certain is dogmatism, not science.

          The ‘statistical tools’ used to help are not particularly restrictive at all: temperature data is so noisy and variable that techniques such as linear-regression breakpoint analysis provide multiple points of opportunity for tweakage across a temperature timeline, each addressed in an arbitrary way.

          Sorry, but this doesn’t make sense. There certainly are useful statistical approaches to the data because we have instances where there are two or more weather stations in close proximity to each other which have a high degree of correlation.

          I have had in my mind for some time now a thought experiment: to somehow ‘double-blind’ the process of homogenization. I would give a group of BOM homogenizers some temperature data, randomly flip some of the station data right-to-left without informing them that I had done so, ask them to homogenize it, and then compare trend effects to see if the analysts are inadvertently biased towards chasing rising slopes. I think the results would be very interesting. (Not an easy exercise to carry out, though, for reasons I won’t cover here at this time.)

          If the BOM is deliberately trying to create a warming trend out of the data, why were most of the largest reductions to minimum and maximum temps made over the last 30 years?

          Shouldn’t they be the ones being increased the most?

          10

          • #
            Andrew Barnham

            Adam.

            With respect, you have gone to a lot of effort to misconstrue what I intended to communicate. But since you wrote such a lengthy reply, I doubt you are trolling. So here goes: a first and final attempt to clarify.

            Firstly on the words “noise” and “signal”. This is just semantics. An air-con exhaust built 3m away from a screen introduces a new “signal”. It is an unwanted signal. It can also be called “noise”. The goal is to remove that. Call it what you will. We are talking about the same thing.

            When I said “homogenization is an admission”, I am not saying analysis requires perfect data. I am saying that the raw data is such poor quality that it is admitted that it requires additional treatment first, otherwise any analysis done on untreated data is meaningless.

            Homogenization injects a new link into the chain: a new point where error from inappropriate assumptions and methodologies can impact the final result. Now you argue that the BOM’s processes are rigorous. Maybe you are right. Yet based on their past record with Torok and Della-Marta, why should now be any different? The past two peer-reviewed efforts were criticised and have now been superseded. Why should I trust this new effort?

            That you fail to acknowledge the risk of possible bias in the process is something I realise is futile to discuss with you further. So I will only say a few further words on a subject I personally find very, very interesting, and which is also professionally relevant to me. Bias, subconscious bias, is a reality that tools like double-blinding are designed to deal with. I do time-series signal analysis professionally (database/back-office server performance, business intelligence), and I see bias, even admittedly in my own work, all the time. Why should the homogenizers, actively hunting down evidence of CAGW, be assumed to be above such considerations? Do their documented processes even acknowledge this risk? Is the documented process an accurate reflection of what they actually did? Does the process even demand that they rigorously record all actions and decisions? Is the process of going from Raw -> HQ even repeatable? Skimming through the 92-page document, I don’t get a sense that any of my above concerns are even mentioned.

            As for your claim about large reductions in later data: firstly, this certainly wasn’t the case with prior data sets. Adjustments down in the old HQ series were mainly in the pre-1950 period. I’ve yet to do my own analysis on the new data set, so I won’t take your word for it that now is any different. I expect you to furnish evidence of your claim.

            00

    • #
      Rereke Whakaaro

      John,

      … you currently only make for a good propagandist …

      I am sorry to have to disagree, but it is not good propaganda. Good propaganda does not seek to alienate an audience; it does not resort to argument; it avoids situations where there are winners and losers.

      A good propagandist will make a point, and if the audience disagrees, they will withdraw with dignity.

      They will later try again, with a slightly modified line, and again withdraw in a dignified way, if challenged.

      What they are doing is building a “wall” in the target’s mind, one “brick” at a time. Eventually, the “wall” takes on its own appearance of reality in the minds of a sufficiently large proportion of the population, to become “generally accepted”.

      This is what has happened with the Global Warming/Climate Change meme. At one time, the climate was warming, and of course climate always changes. Are either of these dangerous? Not at current levels, but that fact is no longer important, because the overall propaganda meme says it is, and that is believed under the precautionary principle – another and previous propaganda exercise.

      Actually Adam, with all his brashness and argumentative anger, is doing us a favour.

      People start to question why he is so angry and argumentative. “What is he trying to hide?”, they ask. “Why is he so politically polarised?”, they wonder. “Perhaps the two things are connected”, they conclude.

      And the real fear of the propagandist is that as soon as people start to question in that way, they start to see the “wall” for what it is, and the whole edifice starts to fall down.

      Good propagandists never lose their cool: “Move along folks, nothing to see here”.

      00

      • #
        Adam Smith

        Actually Adam, with all his brashness and argumentative anger, is doing us a favour.

        People start to question why he is so angry and argumentative.

        So even though I disagree with most of your views, you want me to pretend that I agree with you?

        That is the weirdest argument I have ever heard.

        In fact it is kind of propagandistic as it seems to be based on the idea that I shouldn’t express my alternative points of view and should just pretend that there is a consensus between us even though there isn’t!

        That’s absolute craziness!

        00

  • #
    crakar24

    http://oursay.org/hangout-with-the-prime-minister/sort:votes/dir:desc

    Won’t be long now before the PM gets 3 soft questions, thanks to a concerted effort by GetUp.

    00

  • #
    crakar24

    According to the oursay vote these are the top two most pressing problems we have in this country

    “Dear Prime Minister, as the first female, atheist, unmarried Prime Minister of Australia, and leader of a self-described socially progressive party, how do you explain your opposition to same-sex marriage and “deeply held” belief that same-sex couples should continue to be discriminated against by a piece of legislation (the Marriage Act)? Why are heterosexual relationships more valued than same-sex relationships?”

    “Dear Prime Minister. Against the strongly expressed concerns of mental health professionals, teacher unions and secular organisations, why do you allow the outrageous situation to continue where largely unqualified, religious evangelists have access to young children in public schools, in the form of the National School Chaplaincy Program? ”

    And in current trends this will be number 3

    “Dear Prime Minister, the Stronger Futures legislation was recently passed in the Senate, subjecting Aboriginal people in the NT to 10 more years of Interventionist policy. This is despite overwhelming opposition expressed from Aboriginal leaders, national organisations, the general public and even condemnation from the United Nations. How can we call ourselves a country of the ‘fair go’ if the Government is now refusing to allow a human rights test of the legislation by the Parliamentary Joint Committee on Human Rights, as called for by the National Congress of Australia’s First Peoples? ”

    So piss poor border protection is not an issue, a carbon tax lie is not an issue, global warming is not an issue, the inevitable collapse of the economy (local and international) is not an issue but same sex marriage, the churches in primary schools and how stop abbo’s from spending all their dole cheques is?

    Tell me this is not a setup?

    00

    • #

      Of course it’s not a setup.

      OurSay got its seed funding from (previously-Reverend) Nic Frances, who was spruiking for Cool nrg (promoting mercury CFLs for all to “reduce energy consumption by 5%”). (The Cool nrg site seems to be 404 today.)

      On the board of OurSay are:
      – Eyal Halamish, an imported activist from Chicago, previously working for Futureye Pty Ltd (“sustainability” spruikers), Finding Infinity (ditto) and BrainReactions LLC (brainstormer), and a Fellow of the Centre for Sustainability Leadership
      – Luke Giuliani, a self-described “social entrepreneur” with lots of oars in the water, including alternative energy fantasies at Future Spark
      – Martin Conley-Wood, a former employee of the Canadian government who admits to being a free-loader and having aspired to and failed at lots of things.

      There are others, including a public servant of Victoria’s EPA, but I’ve given myself just an hour to collect background information.

      I’m not saying that people like that can’t be objective and operate OurSay in a transparent manner (especially given Eyal’s preachings on corporate transparency via Futureye). I would like to see them demonstrate it, instead of tweeting @huffpo overseas to get more votes on Australia’s OurSay.

      00

  • #
    Adam Smith

    WHAT!? Have I been banned?

    [Not banned, you got caught in the spam filter] ED

    00

  • #
    P Dragone

    Adam Smith,
    That is a brilliant analysis, but it is wasted on people so far into confirmation bias they wouldn’t recognise the scientific method if it married their daughters.

    If you keep up with this truth and commonsense gig in here, you will get banned.

    00

    • #
      crakar24

      Hi P Dragone,

      Can you look at this paper and tell me what you think of it? I cannot see the paper due to firewall restrictions, so your help here would be much appreciated.

      http://www.leif.org/EOS/2012GL052094-pip.pdf

      Thanks in advance

      Crakar

      00

    • #
      BobC

      Ah, the Smith(s) cheering section has arrived.

      You wouldn’t have anything to actually add to the discussion, would you Dragone?

      00

    • #
      cohenite

      That is a brilliant analysis, but it is wasted on people so far into confirmation bias they wouldn’t recognise the scientific method if it married their daughters.

      You are an idiot; I have a son.

      00

  • #
    crakar24

    http://www.bbc.co.uk/news/uk-scotland-scotland-politics-18871679

    Scotland failed to meet its GHG reduction targets in 2010. “So what?” some might say; well, it’s the reason why that is interesting.

    Mr Stevenson said “Scotland faced its coldest winter temperatures in almost a century – and quite rightly people across Scotland needed to heat their homes to keep warm and safe”.

    Oh the irony…………..

    Some funny comments

    Leader of the Scottish Liberal Democrats Willie Rennie said climate change would not stop for a “bit of frost on the ground”.

    and

    “The Scottish government remains fully committed to delivering ambitious and world-leading climate change targets. We always knew it would be a challenging path to follow when these were set and that year to year fluctuations were inevitable”.

    Oh, so the coldest winter in a century is rolled up as just year-to-year fluctuations; in other words, in a warming world of AGW it is still possible to smash a 100-year-old record… that would make it NOT a warming world of AGW, would it not?

    How stupid are the Scots?

    00

    • #
      Adam Smith

      Oh, so the coldest winter in a century is rolled up as just year-to-year fluctuations; in other words, in a warming world of AGW it is still possible to smash a 100-year-old record… that would make it NOT a warming world of AGW, would it not?

      Mate, if you think global warming will just result in more and more warmer weather then you clearly have a simplistic understanding of this issue.

      Global warming will result in climatic change, which means more extreme weather. Both colder cold spells and hotter warm spells.

      You may also like to know that it will result in MORE, not less, precipitation, because a warmer planet means more evaporation, which means more clouds, which means more rain and/or snow.

      So just saying since some place has experienced record lows that global warming isn’t happening is simplistic nonsense.

      00

      • #
        Dave

        You say:

        Both colder cold spells and hotter warm spells

        Will the temperature of colder cold spells be greater than the temperature of the hotter warm spells?

        00

      • #
        Tristan

        Add energy to the climate system and the weather distribution shifts to the right (getting warmer on average) and flattens (annual maxes and mins spread further apart).

        So we get a lot more hot days, but not a lot fewer cold days.
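        Tristan’s picture is easy to simulate. The sketch below uses entirely made-up numbers (a normal distribution of daily anomalies, arbitrary ‘hot’ and ‘cold’ thresholds), not climate data; it only illustrates that shifting a distribution right while widening it multiplies hot-threshold exceedances but barely reduces cold ones:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # simulated daily temperature anomalies

# Hypothetical baseline vs warmed climate: mean shifted right,
# spread slightly widened (the distribution "flattens").
baseline = rng.normal(loc=0.0, scale=3.0, size=N)
warmed = rng.normal(loc=1.0, scale=3.2, size=N)

hot, cold = 8.0, -8.0  # arbitrary thresholds for a "hot" / "cold" day

# Hot days multiply several-fold; cold days shrink only modestly,
# because the wider spread partly offsets the warmer mean at the cold tail.
print("hot day fraction:", (baseline > hot).mean(), "->", (warmed > hot).mean())
print("cold day fraction:", (baseline < cold).mean(), "->", (warmed < cold).mean())
```

        The exact fractions depend entirely on the assumed shift and spread; the sketch shows the qualitative shape of the claim, nothing more.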

        00

        • #
          Dave

          Tristan says:

          a lot more hot days, but not a lot fewer cold days.

          Adam says:

          Both colder cold spells and hotter warm spells

          So winter (cold days) will not be as long but a lot colder? Is this right?
          So summer (warm days) will be longer but a lot hotter? Is this right?
          And spring and autumn will both be what? Longer and warmer or shorter & colder?

          So with the energy in the system, which shifts the weather distribution to the right, what percentages have you calculated for the colder colds, hotter warms, longer hots and not-a-lot-less colds?

          So with Adam’s new thermometers, and BOM sites (in operation for ???), how long before we can have exact predictions of what you guys are stating here, without having to rely on the old BOM corrupted data that they have to modify to suit?

          And also:
          Adam stated above:

            You may also like to know that it will result in MORE, not less, precipitation, because a warmer planet means more evaporation, which means more clouds, which means more rain and/or snow.

          Why did we build all these desal plants just a few years ago? Or is all that you two have cited and stated above brand new in the AGW debate, and where is it published?

          00

          • #
            BobC

            Dave
            July 19, 2012 at 7:58 pm

            So winter (cold days) will not be as long but a lot colder? Is this right?
            So summer (warm days) will be longer but a lot hotter? Is this right?
            And spring and autumn will both be what? Longer and warmer or shorter & colder?

            Yes indeed Dave — Global Warming causes everything, including contradictory things: Here is a list of what has been claimed.

            It’s irrational magical thinking, not science.

            00

          • #
            KinkyKeith

            Hi Dave – Great piece.

            You ask the question we all want answered.

            “Why did we build all these desal plants just a few years ago”

            We all know it wasn’t about the water supply; after all, we aren’t Israel, we have huge amounts of wasted runoff from the Eastern states.

            So the answer lies somewhere else.

            Perhaps it lies in the favours owed by the Government.

            Everyone owes favours in politics and this is a payoff.

            Funny, we are killing two birds here with this one stone.

            Why build desal plants? And why has the federal Government run up debts of $140 billion?

            Where has all the money gone? Reminds me of a song. Thanks Pete, thanks PP&M. Songs of injustice.

            Well, the feds give states money for infrastructure.

            Unions build the infrastructure.

            Examples include the centre of New York, where public works construction was used over many decades to enrich those with access to the public teat.

            More locally we had the infamous road works in Victoria, where each worker had a salary double the normal.

            And then… drumroll… yes, we have our desal plants, where everybody got rich except the taxpayers.

            Man Made Global Warming is just so politically useful.

            It can be used to wrap almost any scam.

            And the greens haven’t got a clue they have been used for this!!!!!

            Totally Thick.

            KK 🙂

            00

      • #
        Jaymez

        The problem with what you write, ‘Adam’, is the same problem with the climate models: it is way too simplistic, and climate is far more complex than that. For instance, evaporation has a cooling effect, while clouds will trap heat. Clouds will also reflect solar radiation. The dynamics of cloud formation are poorly understood and poorly modelled. What impact do cosmic and solar radiation have on cloud formation? And how much, if anything, does all that have to do with human CO2 emissions? Where is the evidence of catastrophic global warming?

        None of what you have written on this page has escaped the truth that:

        1. The BOM adjusted the original raw temperature data and selectively dropped temperature stations.

        2. Even though BOM originally claimed the positive and negative adjustments were roughly equal and therefore had a negligible impact on temperature trends, this was proven to be demonstrably false. The warming trends were enhanced across the board.

        3. What was even more remarkable was that the adjustments inevitably meant that raw temperatures from earlier years were reduced. Hey it was actually cooler than we thought it was back then so the temperature records we achieved back then no longer count!

        4. When the BOM were asked on a number of occasions to provide scientific reasons supporting the various adjustments and ‘homogenizations’ which increased the warming trends, they refused. What do they have to hide?

        5. Unlike any other Public or Government agency the BOM records are not externally audited. Yet the Government and CSIRO are relying on them and making multi-billion dollar policy decisions based on the data.

        6. When pressure was placed on the CSIRO to audit the temperature data set – hey presto they produced the new improved ACORN data set. This confirms what they had been denying before – that there were problems with the data set they had assured us was just fine. And to pre-empt any need for an audit of ACORN they got a couple of international bureaux with similarly dodgy temperature records to state the BOM’s ACORN is world best practice!

        7. Then a very basic check proves that the BOM didn’t carry out a VERY BASIC check on their new you-beaut data set! It doesn’t matter how you explain away how you could technically get a higher minimum than maximum; the BOM never actually officially record them as such.

        So what level of confidence should we have in BOM and their temperature data set? Zilch, until they allow an independent audit. You going off on tangents will do nothing to change that.

        00

      • #
        crakar24

        Firstly Smith, I am not your “mate”; please refrain from using that term in future.

        Mate, if you think global warming will just result in more and more warmer weather then you clearly have a simplistic understanding of this issue.

        Simplistic understanding? Oh, the irony…

        Smith, please explain to all: where, in all the postulations and pompous preening by the IPCC, is the negative feedback to rising CO2 levels that is going to cause anything other than warming?

        Do you honestly realise the stupidity of your own words? If you now claim that global warming can in fact cause cooling, then where does this cooling come from?

        You may also like to know that it will result in MORE, not less, precipitation, because a warmer planet means more evaporation, which means more clouds, which means more rain and/or snow.

        No Smith, I call you on your bullshit here: AGW will not lead to more rain, it will lead to more water vapour, ergo more warming. Read the bloody IPCC reports next time you open your big fat gob, you idiot. Oh yes, I know you just won the argument because I have called you names… piss off, you precious little dick head.

        So answer me this, you idiot: if all that extra water vapour falls back as rain, then this is a negative feedback in action, so where does your planet-destroying, life-extinguishing global warming come from, you imbecile?

        So just saying since some place has experienced record lows that global warming isn’t happening is simplistic nonsense

        Can an idiot become even more idiotic? Where did all the cold come from idiot?

        Here is some more global warming you stupid incompetent spastic

        http://www.smh.com.au/environment/weather/central-australias-frostiest-winter-in-a-decade-20120719-22c5a.html

        [Crakar, we realise that debating with Adam is like throwing marshmallows at a mattress – deep breaths, calm down, don’t make me have to snip you – warning, OK? – Fly]

        00

  • #

    Adam Smith: this is addressed to you.
    I have been busy of late and haven’t been able to keep up with this thread which deeply interests me, given my analysis of 10 Acorn sites Jo was kind enough to post.
    Adam, I note you seem to have a pretty good understanding of Acorn and its methodology, and BOM adjustments generally. Yet a couple of things puzzle me. In your very first comment at #15 you say:

    I’m not sure what this post is about. The BOM points out that the data was analysed to test for higher Min than Max temps in order to evaluate the quality of the data.

    It seems that’s what they did. The very first one in the excerpt has the values reversed, the second one is suspect.

    What more should the BOM do?

    I have checked the data, both ACORN and raw, and none of the values have been reversed. ACORN has Tmax less than Tmin after adjusting raw data in which Tmax is correctly greater than Tmin. Here are the first 3 sites:
    Site        Date       Raw Min/Max    ACORN Min/Max
    Kalumburu   3/12/45    25.0 / 25.5    24.5 / 23.8
    Halls Ck    17/5/14    19.4 / 21.2    20.3 / 21.2
    Marble Bar  10/6/42    17.9 / 20.1    18.6 / 18.3
    Marble Bar  25/11/42   25.0 / 26.1    25.0 / 23.7

    Adam, can you PLEASE address the issue of this post: how can the BOM claim to have performed quality assurance tests on their final product, including testing that Tmax is greater than Tmin, when the evidence shows that on 954 occasions they have stuffed up by NOT identifying Tmin being greater than Tmax? These errors were not in the raw data; they have been introduced into a dataset which is supposed to be world’s best practice. And don’t tell me how low the error rate is: it should be zero. All they have to do is write a line of code to test “If Tmax - Tmin < 0 then flag”. How good is their testing procedure if they can’t do that?
    Ken
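    For what it’s worth, the consistency test Ken describes really is a one-liner. A minimal sketch in Python, using the four ACORN values quoted above (the record layout itself is illustrative, not the BOM’s actual file format):

```python
# The internal-consistency check Ken describes: flag any record whose
# daily maximum is below its daily minimum. Values are the ACORN
# figures quoted above; the tuple layout here is hypothetical.
records = [
    # (site, date, acorn_min, acorn_max)
    ("Kalumburu", "3/12/45", 24.5, 23.8),
    ("Halls Ck", "17/5/14", 20.3, 21.2),
    ("Marble Bar", "10/6/42", 18.6, 18.3),
    ("Marble Bar", "25/11/42", 25.0, 23.7),
]

# "If Tmax - Tmin < 0 then flag"
flagged = [r for r in records if r[3] - r[2] < 0]
for site, date, tmin, tmax in flagged:
    print(f"FLAG {site} {date}: min {tmin} > max {tmax}")
```

    Three of the four records fail the check, which is exactly the class of error a pre-release QA pass should catch.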

    ————————————————————————–
    [Given Adam Smith’s keeness to respond to every comment on every post I’m sure you’ll get an answer soon – Mod]

    00

    • #
      cohenite

      it should be zero

      Exactly; what a schmozzle; and what a tirade of amphigory and hubris from Dr Smith we have had due to his petulant refusal to accept that.

      00

    • #
      Wayne, s. Job

      My hunch is that they ran the min/max check before the homogenisation, pasteurisation and Al-gorisation, and then patted themselves on the back.

      By lowering the early years’ Tmax big time and Tmin only a little, they shot themselves in the foot, as many days ended up ASS-backwards.

      They dodged the bullet of an audit, clever little vegemites, they may soon end in court like their cousins in Kiwiland.

      00

  • #
    janama

    1996 homogenisation. Nothing’s changed.

    http://users.tpg.com.au/johnsay1/Stuff/ALLADJ.TXT

    00

  • #
    sp

    Jo – please remove the moronic Adam Smith – he is wasting bandwidth, he is ignorant, he adds no value. He wastes our time. Please Jo ….. pleeeeeeeeeeeeeeeessssseee I beg you

    [Simple solution, don’t read his posts. We don’t remove people just because they are not liked. Mod oggi]

    00

    • #
      crakar24

      Mod oggi,

      Your answer to the Smith defies logic; remember, for every person who comments there are dozens more who simply read what is being said. Are you suggesting everyone here simply ignore Smith and allow his lies and propaganda to go unchallenged?

      Surely this is not the answer?

      Regards

      Crakar24

      [My response was to SP’s request that we REMOVE Adam Smith. That will not happen. Those who wish to challenge Adam may do so, and they do. Those who don’t have the patience, the inclination or the counter-view may choose to ignore Adam. As for the many lurkers who read but don’t comment, let’s not sell their intellect short. They will make up their own minds in their own way.
      Unless blog comment policy (as liberal as it is) is broken repeatedly, commenters will not be removed. This is the reason why I volunteer to moderate here. Everybody gets a fair suck of the sauce bottle. Mod oggi]

      00

  • #

    I’ve added a few inline comments into some of “Adam Smiths” remarks.
    #15.4.1 July 16, 2012 at 9:27 pm
    #15.4.1.2.1 July 16, 2012 at 9:56 pm
    #19 July 16, 2012 at 8:55 pm
    #19.1.1.2.1 July 17, 2012 at 1:51 pm
    #22.1.1.1 July 16, 2012 at 10:29 pm
    Yes, some of those comments do dilute the thread and the sheer number of his comments dominates discussion. I’d prefer if Adam could post less “filler” and lift standards of reason in general. His tactics of ignoring the main point, and dragging in extraneous points (in great detail) are not uncommon. It’s useful for skeptics to practice dealing with that.

    00

    • #

      His tactics of ignoring the main point, and dragging in extraneous points (in great detail) are not uncommon. It’s useful for skeptics to practice dealing with that.

      Preferably by totally ignoring him in future.

      As I mentioned, it’s just a game to him.

      He knows everything about everything, and has absolutely no respect for other people’s expertise in their own fields.

      He’s just looking for ‘bites’ so he can go off on his own, because, as a Doctor, he’s the expert in everything. His plan is to just drive people away.

      Even he understands that he gets more airtime here than any of us would get at other sites where our comments are ruthlessly culled, and not even posted.

      Tony.

      00

      • #
        Adam Smith

        He knows everything about everything, and has absolutely no respect for other people’s expertise in their own fields.

        Get over yourself Tony. Where have I claimed to know everything about everything?

        I have never said you don’t have expertise in given fields, but I have also pointed out occasions where your views on issues, e.g. the relationship between retail electricity prices and increased distribution costs, are way outside of the mainstream. I have presented references to this fact that you then just ignore and continue repeating the same thing as a mantra.

        He’s just looking for ‘bites’ so he can go off on his own, because, as a Doctor, he’s the expert in everything. His plan is to just drive people away.

        Err what? As a Dr I am an expert in the field I hold a doctorate in. That’s the complete opposite to saying I am an expert in everything!

        Even he understands that he gets more airtime here than any of us would get at other sites where our comments are ruthlessly culled, and not even posted.

        Well a day and a half ago I had every post I made here automatically put into moderation.

        And clearly you don’t actually post in many open forums. I post in forums where my views are in the majority but the moderators still support and even defend minority views. In fact one forum I participate on is so open that the ONLY posts that get snipped are those that are potentially defamatory.

        00

        • #

          As a Dr I am an expert in the field I hold a doctorate in

          No doubt. But do you know, there are 2 groups of doctorates.
          The first group are people with doctorates whom we can’t do without if, say, a tragedy befell all of them. (Doctors of medicine for example)
          Then there is a 2nd group of people with doctorates. If a tragedy befell all of them all at the same time, the world wouldn’t skip a beat.

          Hmmmmm, I wonder which group you belong to. (purely for meaningless interest of course)

          00

          • #
            Adam Smith

            The first group are people with doctorates whom we can’t do without if, say, a tragedy befell all of them. (Doctors of medicine for example)

            LOL! What a hilarious statement to make!

            Most medical doctors in Australia are NOT Doctors of Medicine (M.D.). Instead they have an M.B.B.S., which is a Bachelor of Medicine, Bachelor of Surgery.

            It is simply a CONVENTION that we refer to people with these BACHELOR degrees as “Doctor”.

            Then there is a 2nd group of people with doctorates. If a tragedy befell all of them all at the same time, the world wouldn’t skip a beat.

            Since you seem to think that M.D.s are more important than Ph.D.s, you are effectively proposing that the world doesn’t need people with Ph.D.s in sciences (including D.Sc.s?), engineering, or even medicine!

            Clearly you don’t know much about doctorates.

            00

          • #

            That’s very good Adam, well done.
            The subject is expertise in an area etc. Clearly I’m no expert in doctorates. I concede.

            Now, the matter which you didn’t address, that of people we can do without vs people we can’t do without. Which category do you belong to? (P.S. I’m in the ‘can do without’ group; I have no degrees whatsoever.)

            00

          • #
            Tristan

            Now, the matter which you didn’t address, that of people we can do without vs people we can’t do without. Which category do you belong to? (P.S. I’m in the ‘can do without’ group; I have no degrees whatsoever.)

            At the risk of sounding like his cheer squad, he’s in the latter category.

            00

          • #
            Mark

            At the risk of sounding like his cheer squad, he’s in the latter category.

            Nah, no risk there Tristan, no risk at all.

            00

          • #
            Rereke Whakaaro

            Adam

            In your response to Baa Humbug #79.1.1.1.1 you said:

            LOL! What a hilarious statement to make!

            Most medical doctors in Australia are NOT Doctors of Medicine (M.D.). Instead they have an M.B.B.S., which is a Bachelor of Medicine, Bachelor of Surgery.

            It is simply a CONVENTION that we refer to people with these BACHELOR degrees as “Doctor”.

            Had you researched a little further when you were looking up your reference, you might have read, “In Britain, Ireland, India and most commonwealth countries, excluding Canada, the medical degree is the MBBS, i.e. Bachelor of Medicine, Bachelor of Surgery (MBChB, BM BCh, MB BCh, MBBS, BMBS, BMed, BM) and is the equivalent to the American and Canadian Doctor of Medicine (MD), from the Latin Medicinae Doctor – “Teacher of Medicine”. Both the MBBS and the MD are doctoral degrees.

            Although training may be entered after obtaining from 90 to 120 credit hours of university level work, in most cases entry to Medical School requires a Bachelors Degree (preferably with a majority of science subjects).

            Many holders of the MBBS and MD degree conduct clinical and basic scientific research and publish in peer-reviewed medical journals both during training and after graduation. Many universities offer combined medical and research training though programs granting combined MD/PhD or MBBS/PhD degrees. Holders of medical degrees that conduct pure research, and do not practice medicine, always adopt the title PhD. ”

            Lesson: the name of a thing is not the meaning or the essence of that thing. And you have the gall to accuse me of failing philosophy.

            00

        • #
          Rereke Whakaaro

          As a Dr I am an expert in the field I hold a doctorate in. That’s the complete opposite to saying I am an expert in everything!

          So, to paraphrase: if a person who knows something about everything is an expert at nothing, then a person who knows everything about something knows nothing about anything else.

          Yes, that’s logical, and supported by the empirical evidence of your previous comments.

          All we need to do now is to determine which field your expertise is actually in.

          00

          • #
            Adam Smith

            So, to paraphrase: if a person who knows something about everything is an expert at nothing, then a person who knows everything about something knows nothing about anything else.

            Well if you were someone who failed at first year philosophy then yes this would constitute a paraphrase of what I wrote.

            Yes, that’s logical, and supported by the empirical evidence of your previous comments.

            Well it would only be logical if you were someone who failed at logic.

            All we need to do now is to determine which field your expertise is actually in.

            Sorry mate, but you wouldn’t understand it. You’d need a Ph.D. in the relevant field to understand my Ph.D.

            By the way, have you managed to watch any of these great videos that show some of the flaws in Lord Monckton’s arguments about climate change?
            http://www.youtube.com/watch?v=fbW-aHvjOgM

            Or are you still only a sceptic about things you disagree with and a gullible believer about everything else?

            00

          • #
            Rereke Whakaaro

            I am an expert in the field I hold a doctorate in. That’s the complete opposite to saying I am an expert in everything!

            The complete opposite of being an expert in everything has two possible interpretations: to be a moron in everything; or to be an expert in nothing. Which of these interpretations would you prefer us to place on your doctorate?

            Oh, and it is only your contention that I failed first year philosophy. You cannot substantiate your claim, so you lose this exchange as well, and attract penalty points for the ad hominem.

            Poor Adam. You are not doing very well, are you?

            00

  • #
    Adam Smith

    Andrew Barnham
    July 20, 2012 at 9:17 am
    Adam.

    With respect, you have gone to a lot of effort to misconstrue what I intended to communicate. But since you wrote such a lengthy reply, I doubt you are trolling. So here goes, a first and final attempt to clarify.

    I didn’t misconstrue what you wrote at all. For example, your assertion that homogenisation can’t be accurate at all because we can’t be 100% certain about its accuracy demonstrates that you don’t understand how science works.

    Do you know that we can’t be 100% certain where an electron is in a molecule at any given moment? But does that mean we have 0% certainty of where the electrons are?

    Firstly on the words “noise” and “signal”. This is just semantics. An air-con exhaust built 3m away from a screen introduces a new “signal”. It is an unwanted signal. It can also be called “noise”. The goal is to remove that. Call it what you will. We are talking about the same thing.

    OK, we will agree that you used those terms in a way that had their meaning reversed from their usual meaning.

    When I said “homogenization is an admission”, I am not saying analysis requires perfect data. I am saying that the raw data is such poor quality that it is admitted that it requires additional treatment first, otherwise any analysis done on untreated data is meaningless.

    Well if you want to get into semantics again, calling the data “poor quality” is a poor choice of words. If we have a reading that says the maximum temp reading on a thermometer at Adelaide on January 3rd 1919 was 32 degrees celsius, then how does it make sense to say that is “poor quality” data? That is a piece of empirical observation that we now just need to put into a broader context based on how we know that measurement was made. To say it is “poor quality” is misleading.

    Homogenization injects a new chain into the link. A new point of error for inappropriate assumptions and methodologies to impact the final result.

    Well yeah, but it also aims as far as possible to REMOVE some “chains in the link” by accounting for a range of known variables that affect the data.

    Now you argue that BOM’s processes are rigorous. Maybe you are right. Yet based on their past record with Torok and Della-Marta, why should now be any different? The past two peer-reviewed efforts have been criticised and are now superseded. Why should I trust this new effort?

    Because this is how science works! You develop an approach, you make it public, and then people get to comment on it. It doesn’t mean it is going to be 100% perfect, but if someone finds errors in it you improve on it.

    The BOM’s current technique has undergone two rounds of peer review: a domestic peer review and an international peer review. In the future someone may review those reviews and propose amendments to make the approach better. That’s FINE; that is how science works.

    But again you seem to be implying that if you don’t have 100% certainty about your approach then you are left with complete uncertainty. This makes no sense! There is nothing wrong with saying “based on all the available information we have, this is the best way to approach the task at hand”. You may come across new information sometime down the track and have to rethink your approach, but that just means you are being open to change rather than just ignorantly assuming that you can’t do anything with the information you have.

    That you fail to acknowledge the risk of possible bias in the process is something I realise is futile to discuss with you further.

    Well maybe you are dealing in semantics again. I do reject that the BOM is “biased”, because this implies that they have some final conclusion that they are starting off with and they are just going to manipulate the data in order to achieve that conclusion.

    No one in this entire thread has presented any evidence that this is the case.

    However some people in this thread have proposed the view that the Australian temperature record should NOT show a warming trend and that they think that the homogenisation process should support this conclusion.

    THAT IS BIASED and is not a view I share.

    So I will only say a few further words on a subject I personally find very, very interesting and which is also professionally relevant to me. Bias, subconscious bias, is a reality that tools like double-blinding are designed to deal with. I do time-series signal analysis professionally (database/back-office server performance, business intelligence), and I see bias, even admittedly in my own work, all the time. Why should the homogenizers, actively hunting down evidence of CAGW, be assumed to be above such considerations?

    Sorry mate, but you have just asserted, without the presentation of evidence, that the people at the BOM are “actively hunting down evidence of CAGW”. This absurd statement simply DEMONSTRATES YOUR BIAS!

    What the BOM is trying to do is get the most accurate record it can about temperature across Australia going back to 1910. For you to assert that they are SIMPLY trying to confect evidence for global warming is absurd!

    I mean to start with it is absurd because EVEN IF the data was found to show a warming trend in Australia, this STILL wouldn’t mean that the entire globe is warming!? So why are you so worried that that is the trend they identified!?

    Do their documented processes even acknowledge this risk?

    YES! Read the bloody document! They show a range of statistical approaches, some that they reject for the very reason that it will just increase the error in the data!

    Is the documented process an accurate reflection of what they actually did?

    YES! Because it has been peer reviewed twice. If they did something else to the analysis then this would’ve been revealed during the peer review, because it just wouldn’t have added up in the way they say it does.

    Does the process even demand that they rigorously record all actions and decisions?

    YES! How can something be peer reviewed if you don’t say what you did?

    Is the process of going from Raw -> HQ even repeatable? Skimming through the 92-page document I don’t get a sense that any of my above concerns are even mentioned.

    Rather than SKIMMING THROUGH the document, why not read the whole thing?

    If you want to criticise their processes, shouldn’t you at least start by informing yourself of what those processes are? Wilful ignorance doesn’t count as a coherent critique.

    As for your claim about large reductions in later data: firstly, this certainly wasn’t the case with prior data sets.

    So you are now criticising the homogenisation of the ACORN data by saying it is different to that done on other data.

    How does that make any sense?

    Adjustments down on the old HQ series were mainly in the pre-1950 period. I’ve yet to do my own analysis on the new data set. So I won’t take your word for it that now is any different. I expect you to furnish evidence of your claim.

    I got that claim by reading the document.

    Until you do that it doesn’t make sense for me to listen to your criticism that just demonstrates that you haven’t actually bothered to read the document.
    [You are a guest here, and you are expected to treat other guests with respect. If you are rude again I will snip you -Fly]

    00

  • #
    Andrew Barnham

    Adam

    You are probably the rudest person I have ever interacted with. A poor ambassador for your cause.

    I took the time to write to you dispassionately, impersonally and with considered respect, and you have been anything but.

    No further. I could point out all the stupidities and inanities in your last comment. Why trouble myself?

    Good day. Check in again in 20 years when the lid of your silly cargo cult is blown. We’ll talk then.

    00

    • #
      Tristan

      What is the cargo?

      00

      • #

        What is the cargo?

        (Previously difficult to obtain) grant money leading to lots of work for them and their students.
        Fame by way of regular appearances on T.V. (especially public broadcasters) and radio and in the press.

        Think Flannery, Hoegh-Guldberg, Karoly et al. No household names prior to the AGW scam.
        Invitations to leftard back-slapping gatherings such as arts festivals, film festivals, big-ideas programs etc etc.

        Previously existing on average incomes, now owning double blocks on expensive riverside property, travelling far and wide to exotic locales for all-expenses-paid shindigs and getting gongs on Australia Day and Queen’s Birthday honour rolls.

        I can think of many, many obscure ‘real’ scientists who do something spectacular and become famous for 15 minutes only to go back to being obscure again.
        But climate alarmists spruiking doom and gloom? They go on and on and on, putting Energizer bunnies to shame.

        That there my naive friend is one very attractive cargo.

        00

      • #
        Andrew Barnham

        Cargo candidates:
        * A sense of self worth, and accordant social prestige, in that one is positively contributing to the well being of the species
        * A messianic conviction that you’ll be remembered as the saviour of the human race

        Doomsday cult is a more appropriate moniker for CAGW. Cargo cult is more appropriate for describing some institutional processes that are used to justify CAGW fearfulness. My apologies for conflating the two. Thanks for calling me out on this.

        00

  • #

    So Adam, you’ve just posted copious comments, but still haven’t addressed the issue, or replied to my comment at #75 above. Please reply, briefly, succinctly, and stick to the topic: the evidence shows the BOM have NOT tested ACORN for max > min; if they did, the testing was so poor they stuffed up 954 times.
    Ken

    00
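The internal-consistency check at issue in this thread, that a day’s maximum temperature must never be below its minimum, is trivial to automate, which is Thurstan’s point. A minimal sketch in Python (the record layout, field names and values are illustrative only, not the BOM’s actual ACORN-SAT file format):

```python
# Minimal sketch of the Tmax >= Tmin internal-consistency check.
# The record layout and values below are illustrative, not the
# actual ACORN-SAT file format.

def find_min_exceeds_max(records):
    """Return the records whose daily minimum exceeds the daily maximum."""
    return [r for r in records if r["tmin"] > r["tmax"]]

records = [
    {"date": "1919-01-03", "tmin": 18.2, "tmax": 32.0},  # consistent
    {"date": "1919-01-04", "tmin": 21.5, "tmax": 19.8},  # Tmin > Tmax
]

for r in find_min_exceeds_max(records):
    print(f'{r["date"]}: Tmin {r["tmin"]} > Tmax {r["tmax"]}')
```

Run over full station files, a scan like this would report every inconsistent day; the complaint in the post above is that roughly 950 such records survived the BOM’s published quality-assurance process.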

  • #
    old44

    Are there any examples of the maximum temperature being less than the minimum?

    00

  • #
    Wayne, s. Job

    I am somewhat bemused by this Smith family Robinson; they seem to be marooned on an island of groupthink. Failed and debunked theories are their forte. Someone more sagacious than I once said, “If your experiment needs statistics, you are doing the wrong experiment.”

    Sadly your Smith family faces a few problems; these could have been avoided if scientific principles had been adhered to. The crossroads where deceit meets truth is rapidly approaching as the normal warming cycle turns to the cooling cycle, which is also normal.

    However this time the cooling cycle has a problem and it is not CO2; this time around the cooling cycle has coincided with the sun’s sabbatical, which it takes every few hundred years.
    This could mean a serious downgrading of your job description; the advent of a new L.I.A. is not beyond the realms of possibility.

    Regardless of your gurus and gods who tend to say otherwise, it is the sun, the moon, our planets and the universe that control the cycles of our climate, including collisions with large objects.

    That you believe the follies of your gurus implies much as to your easily manipulated thought processes. Bemused? Maybe not. Pity for your plight, perhaps.

    00

  • #

    Adam, I’ll take your silence on the specific issue of this post, and your lack of response to direct requests to you by myself (#75 and #82) as final evidence of your complete lack of understanding of the science being discussed – namely that the adjustments to the raw data are unreliable as the quality assurance is so poor as to allow 954 instances of Tmax < Tmin even though this is claimed to have been checked.
    Your obfuscation and hair splitting with replies to many others above further demonstrates this, and also demonstrates that you are not here to debate the facts but to push your own agenda.
    You have no hope of convincing anyone here with your wordy, over-blown, self-congratulatory rhetoric. In short, you are a troll, and a pretty poor one at that. Goodbye.
    Ken

    00

  • #

    […] I published a note concerning the quality of the first release of ACORN-SAT data. It appeared in Jo Nova in July 2012 and later in Andrew Bolt. (I think this might be the article Willis Eschenbach […]

    00


    • #
      John Oh

      Er, the “supercomputer” that the BOM is using is not up to the job, it seems.
      BoM now wants Australia’s biggest supercomputer!
      Link http://tinyurl.com/BomAus The Bureau is bidding for a supercomputer with 50 percent more capability than Raijin.
      The agency collects around 1TB of data every day, and expects this to grow by 30 percent every 18 months to two years.
      At a 50 percent increase this hardly seems worth it. I didn’t notice much difference when I increased my computer with only a 50% increase in power…
      But at $50 million plus, it seems exorbitant. Can’t they wait till the new MS Xbox comes out, please? They can still blame the computer if the crunched numbers come out wrong… and save millions!

      10