Last post for NIWA’s ‘recognised’ methods

I have rediscovered an incomplete skirmish with NIWA’s chief executive, Mr John Morgan; all that remains for me is to concede defeat. This is my last post mourning the passing of good science.

A year ago, Morgan boasted “The methodology applied by Niwa was in accordance with internationally recognised methodology.” He was referring to NIWA’s preparation of the NZ temperature record, the seven-station series (7SS) which the coalition challenged in an application for judicial review the year before. Those following the story will recall that the challenge concerned the original 7SS, dating from 1999, not the revised 7SS prepared by Dr Brett Mullan in 2010.

I summarised this story last November and in February this year I asked again (rather patiently, I thought) for “a copy of the scientific literature that approves of the measurement technique,” explaining “I believe you have not answered my question.”

After Morgan’s final refusal on 20 March I was angry. I told him:

Confirmation of your statement can only be found in documents describing the international recognition you cite, so nothing else will do; I ask you to produce them because you have not produced them anywhere else.

In the absence of your confirmation, doubts arise as to whether the documents exist. Yet as a prestigious institution steeped in the ways of science, NIWA surely understands the value to credibility of producing evidence.

My readers and I want to see a copy of those documents you cited in which international recognition of NIWA’s methods in the Review is described, or an Internet reference to such documents. As it concerns material whose existence you confirmed in a public forum, our request is an appropriate matter under the Official Information Act 1982. It doesn’t require scientific input. You can answer it easily with some photocopied pages.

But it’s a hopeless case now, because once some answer has been given, however simple-minded, the Ombudsman will not query the good sense of it; officially, whatever his private reservations, he need only be satisfied that an answer was given. So Morgan wins the skirmish, but at the considerable cost of exposing himself to distrust.

Nobody could accept his preposterous answer to me (which implies no scientific corroboration whatsoever), so he merely advertises his loss of control over unruly scientists. For if David Wratt—or Brett Mullan, or whomever Morgan might have gone to—had provided a normal scientific reference to even a single instance of international approval of their methods, Morgan would have passed it on to me; why shouldn’t he?

But he had to go to a complete outsider, Justice Venning, for some comfort that his guys used an internationally recognised method. Even the CEO couldn’t get a name or a reference out of his top climate scientists, from Wratt down. And what does a judge know of science? Well, in his own words, in his own judgment, Geoffrey Venning confessed (paragraph numbers given):

[41] It is well established that the Court, in considering an application for judicial review, will be cautious about interfering with decisions made by a specialist body acting within its own sphere of expertise.

[45] I consider this Court should be cautious about interfering with decisions made and conclusions drawn by a specialist body, such as NIWA, acting within its own sphere of expertise. In such circumstances a less intensive or, to put it another way, a more tolerant review is appropriate.

[47] Unless the decision maker has followed a clearly improper process, the Court will be reluctant to adjudicate on matters of science and substitute its own inexpert view of the science if there is a tenable expert opinion.

[48] I consider that unless the Trust can point to some defect in NIWA’s decision-making process or show that the decision was clearly wrong in principle or in law, this Court will not intervene. This Court should not seek to determine or resolve scientific questions demanding the evaluation of contentious expert opinion.

So there you have it. John Morgan relies on a self-adjudged, non-scientific source to warrant international ‘scientific recognition’ of NIWA’s 7SS.

Hoorah, hoorah.

But who believes him?


I stand by my previous comment that I made about NIWA that you pulled me up on RT, heaven forbid anyone should call a spade a spade. There’s a difference between an ad hominem attack and stating a fact about certain people’s honesty. People who hide things have a reason to hide them, they’re being less than honest – hopefully that way of putting it is, dare I say it, ‘intelligent?’ enough.


I’m so glad you turned up again. I was missing you! You’re right: it’s just the comment I complained of was a bit raw. Nice to have you back, mate!

Richard C (NZ)

Neither Morgan (science) nor Venning (law) has fact on his side. It was Venning’s duty to address a question of fact, not a question of science. He did not establish fact with respect to the evidence (the ‘Statistical Audit’), NIWA’s methodology, and RS93 (the established scientific opinion), as discussed previously here:

No science, just fact – simple.

If Venning had established fact he would have had cause in [48] to intervene. At least to determine the respective standings of NIWA 7SS and NZCSET 7SS according to literature. Turns out that there was/is no literature underpinning NIWA’s 7SS, neither is there international recognition in literature.

In this saga, neither Morgan nor Venning have represented the pinnacles of their professions.

Defeat? No. You have fact and truth.


RT – Have you mentioned the new de Freitas paper to ACT’s mp David Seymour?

No, didn’t think of it. Thanks, I’ll look for an address.

Richard C (NZ)

I replied to Steve M at Bishop Hill:

Steve >”It is possible to provide turnkey code in R (for example) for analyses like this”

Yes it is, but I don’t think there was a code implementation of any sort. I’m not a co-author, but I’m reasonably sure no code was used; I get the impression the method was simply translated to spreadsheet functions, though whether there’s VBA behind them I don’t know. I doubt NIWA have code for their 7SS version either. They may, but to my knowledge the question has never been raised until now. I expect they just use a spreadsheet too, and as far as I know NZCSC has never requested code or spreadsheets from NIWA; it’s not necessary for replication. NIWA produced their adjustments, but NZCSC could not replicate those adjustments using the established method (RS93) – that’s the issue, not code. I’m not arguing; I’m just expressing an opinion from the point of view of someone who has accessed the raw data from CliFlo and attempted a rudimentary replication myself prior to the ‘Statistical Audit’, but that was before it was… Read more »

You’re right, RC. This is too hard.

It isn’t something one can do with R code, there’s not enough to automate. The calculations and checks depend on operator decisions in every case and are time-consuming. Our team mainly used spreadsheets and wrote a little code but would never publish it. They have no plans to build a commercial-quality RS93 app.

The journal didn’t expect any code. Maybe McIntyre doesn’t appreciate the lack of utility from a programmed algorithm. It would require a great effort for little return.
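As an aside, for anyone wondering what a comparison-series step estimate of the RS93 flavour looks like in code: below is a minimal sketch only (the station values, the neighbour, and the four-year window are all invented for illustration; this is not NIWA’s or the coalition’s actual calculation).

```python
# Minimal sketch in the spirit of Rhoades & Salinger (1993): estimate the
# step at a documented site change from the candidate-minus-neighbour
# difference series, comparing mean differences in symmetric windows
# either side of the change. All numbers below are invented.
from statistics import mean

def rs93_style_adjustment(candidate, neighbour, change_idx, window=4):
    """Mean difference after the site change minus mean difference before."""
    diffs = [c - n for c, n in zip(candidate, neighbour)]
    before = diffs[max(0, change_idx - window):change_idx]
    after = diffs[change_idx:change_idx + window]
    return mean(after) - mean(before)

# Invented annual means (°C): the candidate station steps up ~0.5 °C at
# index 4 relative to its neighbour.
cand  = [12.0, 12.1, 11.9, 12.0, 12.6, 12.5, 12.7, 12.6]
neigh = [11.5, 11.6, 11.4, 11.5, 11.6, 11.5, 11.7, 11.6]

adj = rs93_style_adjustment(cand, neigh, change_idx=4)  # ≈ 0.5 °C
```

In practice the operator decisions mentioned above (which neighbours, which window, how to treat missing data) dominate the result, which is exactly why a spreadsheet with a human in the loop was workable and a turnkey script less so.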

A related question might be: where is the SI for Mullan (2012), M10, RS93, S81, etc.?

Richard C (NZ)

>”They have no plans to build a commercial-quality RS93 app.”

Nice to have, but overkill to my mind and not necessary for the 7SS; it’s what Steve is used to, though, and there’s the Mann affair that Steve’s been immersed in. I don’t think the Mann affair is relevant in this case, because the RS93/de Freitas et al method is transparent, but I’m not at all sure this is a proper distinction.

I doubt BOM built their code from scratch. They may have, but I’m inclined to think they’ve picked up a package off the shelf and modified it – possibly RHtestsV4, which uses Quantile Matching. BOM haven’t released their code (see below). Still looking into this over time, little by little, but it seems to me that RHtestsV4 could be adapted to Percentile Matching. I suspect we’ll find this is what BOM has done, but I could be wrong.

>”A related question might be: where is the SI for Mullan (2012), M10, RS93, S81, etc.?”

Yes, I put that to Steve, but his response was: “Again, the fact that Rhoades and Salinger didn’t show code is no reason for you not to show your code. I… Read more »

Richard C (NZ)

>”The calculations and checks depend on operator decisions in every case”

Exactly, RT. This has been picked up by me and others in the AU discussions: BOM have automated the process, made adjustments for “statistical” reasons only (with no recourse to station histories), and human input has been neglected.

It has only been after the release of the list of adjustments that BOM have had to rush around retroactively to try to find local reasons for the automated statistical adjustments they’ve made when sceptics pointed out the glaring problems.

I’ve briefly outlined the respective approaches for Steve, viz:

NIWA/NZCSC: site change identification => breakpoint analysis => adjustment criteria
BOM: breakpoint analysis => adjustment criteria => site change identification as an afterthought.
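To make the contrast concrete, here is a rough Python sketch of the two orderings (the series, the crude breakpoint test, and the documented change date are all invented; neither NIWA’s nor BOM’s real code is represented):

```python
def detect_breakpoint(series):
    """Crude statistical breakpoint: the split index where the means of the
    two halves differ most (a stand-in for a proper homogeneity test).
    Requires at least two points on each side of the split."""
    best_idx, best_gap = None, 0.0
    for i in range(2, len(series) - 1):
        gap = abs(sum(series[i:]) / len(series[i:])
                  - sum(series[:i]) / len(series[:i]))
        if gap > best_gap:
            best_idx, best_gap = i, gap
    return best_idx

series = [12.0, 12.1, 11.9, 12.0, 12.6, 12.5, 12.7, 12.6]  # invented data
documented_changes = {4}  # station history says the site moved at index 4

# NIWA/NZCSC order: start from the station history, then confirm each
# documented change statistically before adjusting.
confirmed = [i for i in documented_changes if detect_breakpoint(series) == i]

# BOM order: let the statistics fire first, adjust, and only afterwards
# look for a local reason in the station history.
bp = detect_breakpoint(series)
found_reason_later = bp in documented_changes
```

The two orderings agree here by construction; the complaint above concerns the cases where the statistics fire and no documented site change exists to justify the adjustment.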

I get the impression that Steve is oriented only towards an automated process. I think it will take some time to get through to everyone, including the likes of Steve, just what the real issues are.

Richard C (NZ)

>”BOM haven’t released their code (see below)”

Forgot the main “see below” bit. In my initial comment upthread I said:

“It might be more productive, if you are interested enough, to acquire BOM’s code (if they’ll release it) and apply that to both ACORN-SAT Max/Min and eventually NZ 7SS Mean and Max/Min in the manner you have laid out. That would be interesting.”

If Steve’s chasing code, that’s the code to chase.


>”BOM do have code for ACORN-SAT implementing PM-95 (Percentile Matching), but they have not released it even though they promised to do so (they’ve only just released their adjustments).”

The agreement is here (page 7 pdf):

Bureau of Meteorology response to recommendations of the Independent Peer Review Panel, 15 February 2012

C2. The computer codes underpinning the ACORN-SAT data-set, including the algorithms and protocols used by the Bureau for data quality control, homogeneity testing and calculating adjustments to homogenise the ACORN-SAT data, should be made publicly available. An important preparatory step could be for key personnel to conduct code walkthroughs to members of the ACORN-SAT team.

Agreed. The computer codes underpinning the ACORN-SAT data-sets will be made publicly available once they are adequately documented. The Bureau will invest effort in improving documentation on the code so that others can more readily understand it.

[For some reason this got stuck in moderation. – RT]


You quote Steve McIntyre: “Again, the fact that Rhoades and Salinger didn’t show code is no reason for you not to show your code. I don’t understand why you are arguing about this.”

I’d like to see the whole conversation; did it take place online or privately?

Richard C (NZ)

>”I’d like to see the whole conversation; did it take place online or privately?”

In the BH thread. Unfortunately I can’t link to the comment directly but it’s at Nov 4, 2014 at 5:21 PM about 4 up from the bottom:

Thanks, RC.
