Last post for NIWA’s ‘recognised’ methods

I have rediscovered an incomplete skirmish with NIWA’s chief executive, Mr John Morgan; all that remains for me is to concede defeat. This is my last post mourning the passing of good science.

A year ago, Morgan boasted “The methodology applied by Niwa was in accordance with internationally recognised methodology.” He was referring to NIWA’s preparation of the NZ temperature record, the seven-station series (7SS), which the coalition had challenged in an application for judicial review the year before. Those following the story will recall that the challenge concerned the original 7SS, dating from 1999, not the revised 7SS prepared by Dr Brett Mullan in 2010.

I summarised this story last November and in February this year I asked again (rather patiently, I thought) for “a copy of the scientific literature that approves of the measurement technique,” explaining “I believe you have not answered my question.”

After Morgan’s final refusal on 20 March I was angry. I told him:

Confirmation of your statement can only be found in documents describing the international recognition you cite, so nothing else will do; I ask you to produce them because you have not produced them anywhere else.

In the absence of your confirmation, doubts arise as to whether the documents exist. Yet as a prestigious institution steeped in the ways of science, NIWA surely understands the value to credibility of producing evidence.

My readers and I want to see a copy of those documents you cited in which international recognition of NIWA’s methods in the Review is described, or an Internet reference to such documents. As it concerns material whose existence you confirmed in a public forum, our request is an appropriate matter under the Official Information Act 1982. It doesn’t require scientific input. You can answer it easily with some photocopied pages.

But it’s a hopeless case now, because once some answer has been given, however simple-minded, the Ombudsman will not query the good sense of it; officially, whatever his reservations, he need only be satisfied that an answer was given. So Morgan wins the skirmish, but at the considerable cost of exposing himself to distrust.

For nobody could accept his preposterous answer to me (which implies no scientific corroboration whatsoever), so he merely advertises his loss of control over unruly scientists. For if David Wratt—or Brett Mullan, or whomever Morgan might have gone to—had provided a normal scientific reference to even a single instance of international approval of their methods, Morgan would have passed it on to me, for why shouldn’t he?

But he had to go to a complete outsider, Justice Venning, for some comfort that his guys used an internationally recognised method. Even the CEO couldn’t get a name or a reference out of his top climate scientists, from Wratt down. And what does a judge know of science? Well, in his own words, in his own judgment, Geoffrey Venning confessed (paragraph numbers given):

[41] It is well established that the Court, in considering an application for judicial review, will be cautious about interfering with decisions made by a specialist body acting within its own sphere of expertise.

[45] I consider this Court should be cautious about interfering with decisions made and conclusions drawn by a specialist body, such as NIWA, acting within its own sphere of expertise. In such circumstances a less intensive or, to put it another way, a more tolerant review is appropriate.

[47] Unless the decision maker has followed a clearly improper process, the Court will be reluctant to adjudicate on matters of science and substitute its own inexpert view of the science if there is a tenable expert opinion.

[48] I consider that unless the Trust can point to some defect in NIWA’s decision-making process or show that the decision was clearly wrong in principle or in law, this Court will not intervene. This Court should not seek to determine or resolve scientific questions demanding the evaluation of contentious expert opinion.

So there you have it. John Morgan relies on a source that is, by its own admission, non-scientific to warrant international ‘scientific recognition’ of NIWA’s 7SS.

Hoorah, hoorah.

But who believes him?


14 Thoughts on “Last post for NIWA’s ‘recognised’ methods”

  1. Magoo on 04/11/2014 at 4:53 pm said:

    I stand by the previous comment I made about NIWA that you pulled me up on, RT; heaven forbid anyone should call a spade a spade. There’s a difference between an ad hominem attack and stating a fact about certain people’s honesty. People who hide things have a reason to hide them: they’re being less than honest. Hopefully that way of putting it is, dare I say it, ‘intelligent’ enough.

    • Magoo,

      I’m so glad you turned up again. I was missing you! You’re right: it’s just that the comment I complained of was a bit raw. Nice to have you back, mate!

  2. Richard C (NZ) on 04/11/2014 at 5:36 pm said:

    Neither Morgan (science) nor Venning (law) has fact on his side. It was Venning’s duty to address a question of fact, though not a question of science. He did not establish fact with respect to the evidence (the ‘Statistical Audit’), NIWA’s methodology, or RS93 (the established scientific opinion), as discussed previously here:

    https://www.climateconversation.org.nz/2014/11/analysis-of-renowdens-analysis-of-our-reanalysis/#comment-1224251

    No science, just fact – simple.

    If Venning had established fact he would have had cause under [48] to intervene, at least to determine the respective standings of the NIWA 7SS and the NZCSET 7SS according to the literature. It turns out that there was, and is, no literature underpinning NIWA’s 7SS, nor is there international recognition in the literature.

    In this saga, neither Morgan nor Venning has represented the pinnacle of his profession.

    Defeat? No. You have fact and truth.

  3. Magoo on 04/11/2014 at 5:49 pm said:

    RT – Have you mentioned the new de Freitas paper to ACT’s MP David Seymour?

  4. Richard C (NZ) on 05/11/2014 at 8:59 am said:

    I replied to Steve M at Bishop Hill:

    Steve

    >”It is possible to provide turnkey code in R (for example) for analyses like this”

    Yes, it is, but I don’t think there was a code implementation of any sort. I’m not a co-author, but I’m reasonably sure no code was used; I get the impression the method was simply translated into spreadsheet functions, though whether there’s any VBA I don’t know. I doubt NIWA have code for their 7SS version either. They may do, but to my knowledge the question has never been raised until now. I expect they just use a spreadsheet too, but as far as I know NZCSC has never requested code or spreadsheets from NIWA; it isn’t necessary for replication. NIWA produced their adjustments, but NZCSC could not replicate those adjustments using the established method (RS93) – that’s the issue, not code.

    I’m not arguing; I’m not a co-author. I’m just expressing an opinion from the point of view of someone who accessed the raw data from CliFlo and attempted a rudimentary replication myself prior to the ‘Statistical Audit’ – but that was before it was realised that RS93 was the statistical method to use. Not being prepared to commit to the project (I’m not a member of NZCSC) and not able to assist statistically (my skill isn’t up to it), I didn’t participate in the Audit. But with some effort I’m sure I could replicate de Freitas et al’s adjustment for at least one site change from the method supplied, without code.

    The authors may make some statement re code – up to them. I think that rather than code, questions will eventually move to the respective adjustment methodologies e.g. RS93 (NZCSC) vs PM-95 (BOM). BOM do have code for ACORN-SAT implementing PM-95 (Percentile Matching), but they have not released it even though they promised to do so (they’ve only just released their adjustments). The issue then becomes – does PM-95 replicate RS93 and vice versa? Same for BEST’s “scalpel” and whatever method GISS uses (what is that?).

    If the respective methods don’t replicate each other then code is the last thing we’ll be asking about. It might be more productive, if you are interested enough, to acquire BOM’s code (if they’ll release it) and apply that to both ACORN-SAT Max/Min and eventually NZ 7SS Mean and Max/Min in the manner you have laid out. That would be interesting.

    http://www.bishop-hill.net/blog/2014/10/31/new-zealands-temperature-record.html?lastPage=true&postSubmitted=true
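
    For anyone wanting a concrete picture of the kind of calculation involved, here is a minimal sketch (Python rather than R, and purely illustrative) of an RS93-style before/after comparison at a single documented site change. The data, the 24-month window and the function name are all invented for illustration; this is not NIWA’s or de Freitas et al’s actual implementation.

      # Illustrative sketch only: estimate a site-change step from the shift in the
      # candidate-minus-neighbour difference series around a known breakpoint.
      # Data, window length and names are hypothetical, not the published method's code.
      from statistics import mean

      def rs93_style_adjustment(candidate, neighbour, break_index, window=24):
          """candidate, neighbour: aligned monthly temperature lists;
          break_index: first month after the documented site change;
          window: number of months compared on each side of the break."""
          diff = [c - n for c, n in zip(candidate, neighbour)]
          before = diff[max(0, break_index - window):break_index]
          after = diff[break_index:break_index + window]
          # A positive result means the candidate reads warmer relative to its
          # neighbour after the change; the earlier data would be adjusted by
          # this step (or the later data, depending on the convention adopted).
          return mean(after) - mean(before)

    In practice several neighbour stations are combined and weighted, and the operator still has to choose the comparison window, the neighbours and the treatment of missing months, so even this toy version involves judgement calls at every step.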

    • You’re right, RC. This is too hard.

      It isn’t something one can do with R code; there’s not enough to automate. The calculations and checks depend on operator decisions in every case and are time-consuming. Our team mainly used spreadsheets and wrote a little code, but would never publish it. They have no plans to build a commercial-quality RS93 app.

      The journal didn’t expect any code. Maybe McIntyre doesn’t appreciate the lack of utility from a programmed algorithm. It would require a great effort for little return.

      A related question might be: where is the SI for Mullan (2012), M10, RS93, S81, etc.?

    • Richard C (NZ) on 05/11/2014 at 1:53 pm said:

      >”They have no plans to build a commercial-quality RS93 app.”

      Nice to have, overkill to my mind, not necessary for 7SS, but it’s what Steve is used to and there’s the Mann thing that Steve’s been immersed in. I don’t think the Mann thing is relevant in this case because the RS93/de Freitas et al method is transparent, but I’m not at all sure this is a proper distinction.

      I doubt BOM built their code from scratch. They may have, but I’m inclined to think they’ve picked up an off-the-shelf package and modified it, possibly RHtestsV4, which uses Quantile Matching. BOM haven’t released their code (see below). I’m still looking into this over time, little by little, but it seems to me that RHtestsV4 could be adapted to Percentile Matching. I suspect we’ll find this is what BOM has done, but I could be wrong.

      >”A related question might be: where is the SI for Mullan (2012), M10, RS93, S81, etc.?”

      Yes I put that to Steve but his response was:

      “Again, the fact that Rhoades and Salinger didn’t show code is no reason for you not to show your code. I don’t understand why you are arguing about this. It is possible to provide turnkey code in R (for example) for analyses like this so that readers can use the script to access the data, watch the analysis and produce the graphs and stop and do their own variations if they want. I’ve never heard a good reason why people can’t do this. I think that you should as well.”

      My reply was as per previous comment upthread. I’ve since made a more detailed reply, the synopsis of which is this:

      Steve, re NIWA vs NZCSC vs BOM vs BEST vs GISS

      Apart from the statistical breakpoint-analysis techniques, the respective homogenisation methods are vastly different too. But there are common breakpoints. Then there are break adjustments by the others at points not pre-identified as site changes, extra to those in NIWA/NZCSC, which NIWA/NZCSC do not adjust for.

      […]

      It is really only necessary to consider a few specific breakpoints in order to compare NIWA vs NZCSC vs BOM vs BEST vs GISS. There is no need to reconstruct each of the entire Australian and New Zealand multi-location series. It is not even necessary to compile location series, e.g. case studies of Auckland or Masterton (an easy one) in NZ, or Rutherglen, Amberley and Bourke in AU. Breakpoints within a single homogenised location are a start, then a location series, then multiple locations.

      Along with,

      Steve, FYI: BOM do not adjust for breaks of less than 0.3 °C, as per Trewin TR-049. Of the 20 adjustments de Freitas et al make to the NZ 7SS, 8 are for less than 0.3; the smallest is +0.02 and the largest is −1.00.

      # # #

      I’ll be interested to see whether Steve sticks to his code requirement or whether I’ve piqued his interest enough to take up my suggestion of applying different statistical breakpoint-analysis techniques (RS93 vs PM-95 vs Scalpel vs whatever GISS uses) to a few selected breakpoints in both the NZ 7SS and ACORN-SAT.
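
      To make the Trewin TR-049 point concrete, a trivial filter like the sketch below shows how a BOM-style 0.3 °C minimum would simply discard the smaller site-change adjustments that RS93 retains. It’s Python, and apart from the +0.02 and −1.00 quoted above the values are placeholders, not the actual de Freitas et al adjustments.

        # Illustrative only: how a minimum-adjustment threshold of 0.3 degC
        # (per Trewin TR-049) treats a list of site-change adjustments.
        # Apart from +0.02 and -1.00, quoted above, the values are placeholders.

        def apply_minimum_threshold(adjustments, threshold=0.3):
            kept = [a for a in adjustments if abs(a) >= threshold]
            dropped = [a for a in adjustments if abs(a) < threshold]
            return kept, dropped

        example = [0.02, -0.15, 0.28, -0.45, -1.00, 0.31]   # hypothetical mix
        kept, dropped = apply_minimum_threshold(example)
        print("adjusted:", kept)     # [-0.45, -1.0, 0.31]
        print("ignored:", dropped)   # [0.02, -0.15, 0.28]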

    • Richard C (NZ) on 05/11/2014 at 2:10 pm said:

      >”The calculations and checks depend on operator decisions in every case”

      Exactly, RT. This has been picked up on by myself and others in the AU discussions: BOM have automated the process, made adjustments for “statistical” reasons only (no recourse to station histories), and human input has been neglected.

      It has only been since the release of the list of adjustments, when sceptics pointed out the glaring problems, that BOM have had to rush around retroactively trying to find local reasons for the automated statistical adjustments they’ve made.

      I’ve briefly outlined the respective approaches for Steve, viz:

      NIWA/NZCSC: site change identification => breakpoint analysis => adjustment criteria
      BOM: breakpoint analysis => adjustment criteria => site change identification as an afterthought.

      I get the impression that Steve is oriented only towards an automated process. I think it will take some time to get through to everyone, including the likes of Steve, just what the real issues are.
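
      The contrast is easier to see written out. The sketch below (Python, with a toy station, a crude statistical scan and a 0.3 threshold borrowed from TR-049) is only my paraphrase of those two orderings, not either organisation’s actual workflow.

        # Illustrative only: the two orderings above, written out so the point at
        # which station history enters is explicit. Data and thresholds are toys.

        TOY_STATION = {
            "history": [{"month": 120, "event": "site moved"}],          # documented change
            "series": [14.0 + (0.5 if m >= 120 else 0.0) for m in range(240)],
        }

        def step_size(series, month, window=24):
            before = series[max(0, month - window):month]
            after = series[month:month + window]
            return sum(after) / len(after) - sum(before) / len(before)

        def site_changes_first(station):
            """NIWA/NZCSC ordering: history -> breakpoint analysis -> criteria."""
            months = [h["month"] for h in station["history"]]
            return {m: step_size(station["series"], m) for m in months}

        def statistics_first(station, min_step=0.3):
            """BOM ordering: automated detection -> criteria -> history afterwards."""
            candidates = range(24, len(station["series"]) - 24, 12)      # no history used
            steps = {m: step_size(station["series"], m) for m in candidates}
            kept = {m: s for m, s in steps.items() if abs(s) >= min_step}
            documented = {h["month"] for h in station["history"]}
            return {m: (s, m in documented) for m, s in kept.items()}

        print(site_changes_first(TOY_STATION))   # {120: 0.5}
        print(statistics_first(TOY_STATION))     # {120: (0.5, True)}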

    • Richard C (NZ) on 05/11/2014 at 2:15 pm said:

      >”BOM haven’t released their code (see below)”

      Forgot the main “see below” bit. In my initial comment upthread I said:

      “It might be more productive, if you are interested enough, to acquire BOM’s code (if they’ll release it) and apply that to both ACORN-SAT Max/Min and eventually NZ 7SS Mean and Max/Min in the manner you have laid out. That would be interesting.”

      If Steve’s chasing code, that’s the code to chase.

    • Richard C (NZ) on 05/11/2014 at 3:20 pm said:

      >”BOM do have code for ACORN-SAT implementing PM-95 (Percentile Matching), but they have not released it even though they promised to do so (they’ve only just released their adjustments).”

      The agreement is here (page 7 of the PDF):

      Bureau of Meteorology response to recommendations of the Independent Peer Review Panel, 15 February 2012

      C2. The computer codes underpinning the ACORN-SAT data-set, including the algorithms and protocols used by the Bureau for data quality control, homogeneity testing and calculating adjustments to homogenise the ACORN-SAT data, should be made publicly available. An important preparatory step could be for key personnel to conduct code walkthroughs to members of the ACORN-SAT team.

      Agreed. The computer codes underpinning the ACORN-SAT data-sets will be made publicly available once they are adequately documented. The Bureau will invest effort in improving documentation on the code so that others can more readily understand it.

      http://www.bom.gov.au/climate/change/acorn-sat/documents/ACORN-SAT_Bureau_Response_WEB.pdf

      [For some reason this got stuck in moderation. – RT]

  5. RC,

    You quote Steve McIntyre: “Again, the fact that Rhoades and Salinger didn’t show code is no reason for you not to show your code. I don’t understand why you are arguing about this.”

    I’d like to see the whole conversation; did it take place online or privately?
