Tampering at Australian BOM exploded

Devastating criticism from William Kininmonth

This is dynamite. Heartland’s November Environment & Climate News reports that scientist Jennifer Marohasy and environment editor Graham Lloyd, among others, have learned the Australian Bureau of Meteorology (BOM) has been “fudging” historical temperature records to fit a warming narrative. William Kininmonth, a retired meteorologist and former head of the National Climate Centre at BOM, said the agency “has constructed a synthetic climate record whose relevance to climate change is not scientifically defensible.”

Wow! Translated into everyday terms, he means: “No reason lies behind their results. They have fabricated them—they have lied.”

Also from Heartland, H. Sterling Burnett writes:

The Australian Bureau of Meteorology (ABM) was recently forced to admit it alters the temperatures recorded at almost all the official weather stations in Australia. The ABM came clean on its temperature fiddling largely because of the fierce scrutiny of Graham Lloyd, environment editor for The Australian and The Weekend Australian, who published a series of articles on the ABM’s number-fudging.


13 Thoughts on “Tampering at Australian BOM exploded”

  1. Richard C (NZ) on November 9, 2014 at 7:59 am said:

    >”scientist Jennifer Marohasy”

    See Jennifer’s ‘Temperature’ posts here: http://jennifermarohasy.com/tag/temperatures/

    The respective break analyses (NIWA vs NZCSC vs BOM vs BEST vs GISS) are where the investigation should be going. I laid it out at Bishop Hill in reply to Steve McIntyre:

    http://www.bishop-hill.net/blog/2014/10/31/new-zealands-temperature-record.html#comments

    See Nov 4, 2014 at 7:54 PM and subsequent comments.

  2. Richard C (NZ) on November 9, 2014 at 8:18 am said:

    From one of my Bishop Hill comments:

    It is really only necessary to consider a few specific breakpoints in order to compare NIWA vs NZCSC vs BOM vs BEST vs GISS. There is no need to reconstruct each of the entire Australian and New Zealand multi-location series. It is not even necessary to compile location series e.g. case studies of Auckland, Masterton (an easy one) in NZ or Rutherglen, Amberley, Bourke in AU. Just breakpoints within a homogenized location is a start, then a location series, then multiple locations.
    Nov 4, 2014 at 9:07 PM

    But how can NIWA’s method be applied to an ACORN-SAT break?
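As a toy illustration of the comparison proposed above, a minimal sketch might tabulate the step adjustment each group applies at one shared breakpoint and report the spread. The values here are entirely hypothetical placeholders, not any group’s published numbers:

```python
def breakpoint_spread(adjustments):
    """Given the step adjustment (degC) each group applies at the
    same breakpoint, return the range between the largest and
    smallest adjustment. Illustrative only."""
    values = adjustments.values()
    return max(values) - min(values)

# Hypothetical adjustments at one shared breakpoint (made-up numbers)
adj = {"NIWA": -0.4, "NZCSC": -0.1, "BOM": -0.6, "BEST": -0.3, "GISS": -0.5}
spread = breakpoint_spread(adj)
```

If the methods agreed, the spread at a given break would be near zero; a large spread at the same break is exactly the kind of inconsistency the comment is pointing at.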

  3. Richard C (NZ) on November 9, 2014 at 8:23 am said:

    The different approaches on each side of the Tasman.

    For the NZ 7SS the approach has been:

    Site change identification => breakpoint analysis => adjustment criteria

    For the AU ACORN-SAT the BOM approach is:

    Breakpoint identification => adjustment criteria => non-climatic change identification as an afterthought.

    BOM have only now, under pressure, made the adjustments public. They still haven’t released their code as promised.
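The “breakpoint identification” step above can be sketched crudely. Assuming a station-minus-neighbour difference series, the simplest statistical break test scans candidate split points for the largest shift in mean level. The real ACORN-SAT and 7SS tests are far more sophisticated; the series below is synthetic:

```python
import statistics

def detect_breakpoint(diff_series, min_seg=5):
    """Scan a station-minus-neighbour difference series for the
    split point giving the largest shift in mean level.
    Returns (index, shift). A crude stand-in for the statistical
    break tests discussed above, not BOM's actual method."""
    best_idx, best_shift = None, 0.0
    for i in range(min_seg, len(diff_series) - min_seg):
        left = statistics.mean(diff_series[:i])
        right = statistics.mean(diff_series[i:])
        if abs(right - left) > abs(best_shift):
            best_idx, best_shift = i, right - left
    return best_idx, best_shift

# Synthetic difference series with a step of about +0.6 degC at index 10
series = [0.1, 0.0, 0.2, -0.1, 0.1, 0.0, 0.1, -0.1, 0.0, 0.1,
          0.7, 0.6, 0.8, 0.5, 0.7, 0.6, 0.7, 0.5, 0.6, 0.7]
idx, shift = detect_breakpoint(series)
```

Note that a test like this flags a break for purely “statistical” cause; whether the break corresponds to a real site change is a separate question, which is the crux of the NZ-vs-AU contrast drawn above.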

  4. Jennifer Marohasy and Graham Lloyd complain about a single station’s adjustments at Rutherglen where a breakpoint had been detected. Someone goes back through the records and finds some old photos which confirm a previously unidentified station shift. Score one for the homogenisation algorithm. There is no apology or retraction from Jennifer and Graham. Yet again, a Heartland-sponsored attempt to spread uncertainty and doubt simply makes the protagonists look stupid.

  5. Richard C (NZ) on November 9, 2014 at 8:40 am said:

    >Kininmonth noted, “There is no justifiable basis to modify actual observations without evidence of changed instrumentation or environmental factors; where there is evidence of such changes the adjustments can only be considered speculative, especially if the adjustments are made on the basis of statistical links to independent observations from tens of kilometres away.”

    Yes, many of the ACORN adjustments are for “statistical” cause with no local reason identifiable.

    BEST does station break analyses but doesn’t homogenize stations like BOM. But within the stations, BEST gets different results to BOM, see Rutherglen Min 1980 for example. BEST adjusts for missing data, BOM doesn’t.

    It’s when you start comparing the different methods applied to the same datasets that these inconsistencies become all too obvious.

  6. Richard C (NZ) on November 9, 2014 at 8:51 am said:

    >”…finds some old photos which confirm a previously unidentified station shift”

    Actually it wasn’t photos and the documentation found didn’t confirm it. The move had to be inferred.

    But that’s the minor issue. An adjustment was only made to Min, not to Max. And the adjustment was huge. BOM’s “statistical” break adjustment to Min is on the assumption that minimum temperatures on one side of the small rise are about 1.8 C cooler than on the other, but Max is the same. The Min difference could be proved, or not, by installation of an AWS at the previous (assumed) site.

    Proof of the statistical methodology is required – it has not been proved.

    But here’s the thing. After adjustment by neighbour comparison, Rutherglen Research Min has a greater trend than the same neighbours – bogus.

  7. Richard C (NZ) on November 9, 2014 at 8:54 am said:

    >”Jennifer Marohasy and Graham Lloyd complain about a single station’s adjustments at Rutherglen”

    And Amberley, and Bourke, and……and……. Not just Marohasy and Lloyd either, there’s swarms poring over this now. Ken Stewart (Kenskingdom) has been for yonks of course.

    Just getting started Simon, now that BOM have released their adjustments as promised – but only after pressure.

  8. Richard C (NZ) on November 9, 2014 at 9:00 am said:

    >”Score one for the homogenisation algorithm”

    Except different algorithms produce different results. BEST adjusts for missing data Rutherglen Research 1980, BOM doesn’t.

    Who scores one then Simon?

  9. Richard C (NZ) on November 9, 2014 at 9:05 am said:

    BEST don’t use this Rutherglen Research record Simon – why not?

    They go to Rutherglen Post Office.

    BTW, I would point out that the “photos” are of the same site – not different sites. It’s the vague documentation at the bottom that the move has been inferred from (i.e. doesn’t state explicitly).

  10. Richard C (NZ) on November 9, 2014 at 9:17 am said:

    Simon, 2 questions for you:

    1) What was the local reason for this adjustment?

    Rutherglen 82039 Max 01/01/1938 Statistical (Impact -0.59)

    2) What was the local reason for this adjustment?

    Rutherglen 82039 Min 01/01/1928 Statistical (Impact -0.49)

    http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT-Station-adjustment-summary.pdf

    Odd, because on the Rutherglen summary there’s this:

    Rutherglen 82039 Max 1/1/1938 Statistical -0.62

    http://www.bom.gov.au/climate/change/acorn-sat/documents/station-adjustment-summary-Rutherglen.pdf

  11. Richard C (NZ) on November 10, 2014 at 8:49 pm said:

    I see this in the Rutherglen Research Summary:

    2. 1 January 1950—breakpoint detected by statistical methods. A 1949 inspection reported that the site had become overgrown.
    • Daytime temperatures started to appear warmer relative to surrounding stations.
    • Max T changed by +0.62 °C (reversing the 1938 change). No detectable impact on Min T so no adjustments made.

    http://www.bom.gov.au/climate/change/acorn-sat/documents/station-adjustment-summary-Rutherglen.pdf

    So they change Max for the ENTIRE series prior to Jan 1950 by +0.62 °C. NZCSC/de Freitas et al (2014) do not do this. Instead, for example, there is a section in de Freitas et al:

    5.2 Gradual Inhomogeneities

    Comparison of inter-homogenization methods just gets worse and worse.
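The mechanics of the step adjustment described above are simple to sketch: every observation dated before the breakpoint is shifted by the estimated step, and everything on or after it is left alone. The dates and temperatures below are made up for illustration; BOM’s actual procedure is in their documentation:

```python
from datetime import date

def apply_step_adjustment(series, breakpoint, delta):
    """Shift every observation dated before `breakpoint` by `delta`
    degC, leaving later observations untouched -- the ACORN-SAT
    style of whole-series step adjustment described above.
    Illustrative only; values and dates are invented."""
    return {d: (t + delta if d < breakpoint else t)
            for d, t in series.items()}

# Hypothetical monthly Max T observations around the 1950 break
series = {
    date(1949, 6, 1): 14.0,
    date(1949, 12, 1): 20.0,
    date(1950, 6, 1): 13.5,
}
adjusted = apply_step_adjustment(series, date(1950, 1, 1), +0.62)
```

The point of contention is visible in the code: a single estimated step propagates through the entire pre-break record, so any error in the estimate biases every earlier value.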
