• Guest post •
— by Bob Dedekind
I chuckled at Gareth Renowden’s attempt to rebut our paper, for two reasons: he makes many mistakes, and whoever is feeding him bits of information seems to be letting him down.
I printed out his article and highlighted his mistakes so I could deal with them individually. However, when I had finished, the whole article was one big highlighted blob, so I’ll focus on just the most glaring mistakes.
Gareth makes the remarkable and ignoble assertion that a respected scientific journal could be influenced by a local New Zealand professor into bypassing its established peer-review processes in order to publish a favoured paper. He says this even though the paper in question is sceptical of established AGW claims, and such papers are always subjected to much greater scrutiny.
Did he learn nothing from Climategate, where NIWA employees colluded with the AGW cabal to stop the publication of papers, and in fact hounded editors when they dared to allow free scientific debate in their journals?
Even a cursory glance at the paper metadata shows that our paper went through extensive peer review. It was received December 2013 and not accepted until October 2014. Contrast this with Mullan’s 2012 paper in a local journal (received May 2012, revised June 2012, published July 2012).
I can assure you, the peer review was extensive. Gareth and his mates had better come up with some good arguments, because those reviewers know what’s what, and they are probably already less than amused by Gareth’s allegations.
He calls them ‘gullible’, he asserts they are not ‘self-respecting’, that they have no ‘credible peer-review process’, that they are not as ‘relevant’ as other journals, and more.
I wonder if a sudden chill may descend on NIWA from certain quarters. After all, we don’t know who the peer-reviewers were, do we? But they know who NIWA is.
Source of 7SS
Now, moving on to specific errors made by Gareth in his rush into print. The funniest is one of the first.
“dFDB 2014 repeats the old canard that NIWA’s Seven Station Series (7SS) before the 2010 review was based on the adjustments made in Jim Salinger’s 1981 thesis. This was a key claim in the NZ Climate Science Education Trust’s evidence to the High Court and so transparently at odds with written reports and papers from 1992 onwards that it was easy for NIWA to refute.”
Really? How about the Parliamentary Questions, where the head of NIWA told us that the 7SS adjustments were based on Salinger’s thesis? Wow. Or NIWA’s Review itself, where the techniques are shown to match Salinger’s thesis?
Richard has already dealt with this in detail, so I won’t go further into it here. Suffice it to say that there is zero evidence to show that the pre-2010 7SS was ever based on a correct application of RS93, apart from the assertions of some at NIWA. When questioned, they claimed all the evidence was lost. Uh huh.
Ignoring NIWA’s work
“dFDB 2014 derives a warming rate of +0.28ºC per century, by claiming to apply a method published by Rhoades and Salinger in 1993 (RS93). It claims to create a new benchmark record by reapplying an old technique — essentially ignoring all the work done by NIWA in deriving the current 7SS.”
It is difficult to untangle the confusion on this one. Firstly, the current 7SS uses the old technique, based on Salinger’s 1981 thesis. We applied a newer technique (RS93) to it for the first time.
Secondly, as for ignoring NIWA’s previous work, Gareth has clearly not even read our paper. In section 3, at the end, we clearly state:
“…we have used the same comparison stations as M10 for all adjustments. In the result, our methodology and data inputs wholly coincide with M10 except in two respects:
(a) the use of RS93 statistical techniques to measure differences, as opposed to S81/M1 measurement techniques (Table 1), and
(b) acceptance of the findings of Hessell’s paper regarding the contamination of Auckland and Wellington raw data.”
M10 is the NIWA Review of the current 7SS. So in Gareth’s mind, basing our paper on all the work done by NIWA in deriving the current 7SS is the same as ignoring all the work done by NIWA in deriving the current 7SS! One weeps.
Workings or SI
According to Gareth:
“The paper as published contains no workings or supplemental material that would allow reproduction of their results…”
Again, surely Gareth hasn’t even read the paper. It contains extensive workings and references in Section 5 and even a worked example in Section 6 to step the reader through the process. All interim results are recorded in detail in Table 3. How is that not reproducible?
Periods for comparison
“dFDB 2014 claims that RS93 mandates the use of one year and two year periods of comparison data when making adjustments for a station change, but RS93 makes no such claim. RS93 uses four year periods for comparison, in order to ensure statistical significance for changes — and no professional working in the field would use a shorter period.”
Now at last we’re getting some criticism of the actual science in the paper as opposed to ad hominem attacks.
There is little doubt that RS93 recommends the use of short time periods before and after a site change. The authors specifically mention one- and two-year periods either side, and in their worked example in section 2.4 they use two years. RS93 does not use four-year periods for comparison, except in another part of the paper dealing with isolated stations (not relevant here).
Any claim that RS93 does not use one- or two-year periods is false. Any claim that RS93 uses four-year periods is equally false.
Of course, it’s more than likely that Gareth’s vision is somewhat blurry on this point. Perhaps he is confused about whether it means two years before and after a change, or four years in total? Who knows? But if he wants to wriggle out via that tunnel, he should be aware that he would be confirming the two-year approach.
As for the claim that no professional working in the field would use a shorter period: is Gareth now claiming that Dr Jim Salinger (a co-author of RS93) is not a professional, since he clearly uses a shorter period in section 2.4 of RS93? What about Dr David Rhoades? Should we write and tell them that?
One last point: we also used three-year periods in our paper on those rare occasions when the results from the one- and two-year periods were contradictory. I felt it was the best way to break the deadlock. Nobody to date has criticised that approach.
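The deadlock-breaking rule just described can be sketched as follows. This is my own illustrative reading, not code from the paper: each argument is the adjustment estimated from that comparison window (or None where the shift failed the significance test), and the choice to average agreeing estimates is an assumption made purely for illustration.

```python
def choose_adjustment(adj_k1, adj_k2, adj_k3):
    """Pick a site-change adjustment from one-, two- and three-year
    comparison windows.

    adj_k1, adj_k2, adj_k3: the adjustment estimated from the k=1, k=2
    and k=3 year windows, or None if the shift in that window was not
    statistically significant.  When k=1 and k=2 agree on significance,
    the k=3 result is never consulted; it only breaks a contradiction.
    """
    significant_k1 = adj_k1 is not None
    significant_k2 = adj_k2 is not None
    if significant_k1 == significant_k2:
        # k=1 and k=2 agree: apply an adjustment (averaged here, as an
        # illustrative assumption) or apply none at all.
        return (adj_k1 + adj_k2) / 2 if significant_k1 else None
    # k=1 and k=2 contradict each other: the three-year window decides.
    return adj_k3

print(choose_adjustment(0.4, 0.6, None))   # both significant -> 0.5
print(choose_adjustment(None, 0.6, 0.3))   # contradictory -> 0.3
print(choose_adjustment(None, None, 0.2))  # both insignificant -> None
```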
Gareth also makes this assertion, based on, well, nothing at all:
“The choice to limit themselves to one and two year comparisons seems to have been deliberately made in order to limit the number of adjustments made in the reconstructed series.”
No Gareth, the choice of one and two year comparisons was made by RS93. It’s a good paper—you should read it some day.
He (or his minder) claims:
“Limiting the comparison periods makes it harder for adjustments to reach statistical significance, leading dFDB 2014 to reject adjustments even in cases where the station records show site moves or changes!”
This is just confused: “…reject adjustments even in cases where the station records show site moves or changes”? I suspect Gareth hasn’t yet cottoned on to what we’re doing here. All our checks are made precisely because station records show site moves or changes! What we’re doing is checking the magnitude of the temperature shift and whether it is a statistically significant shift.
It’s not really hard to grasp.
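The magnitude-and-significance check described above can be sketched with a simple Welch t statistic computed on neighbour-difference series before and after a documented site change. This is a hedged illustration of the general idea only, not the paper’s actual procedure, and the numbers are invented:

```python
from math import sqrt
from statistics import mean, stdev

def shift_and_t(diff_before, diff_after):
    """Given monthly difference series (candidate minus neighbour) for
    the periods before and after a documented site change, return the
    apparent temperature shift and a Welch t statistic.  A small |t|
    means the shift is not statistically significant, so no adjustment
    would be applied for that change.
    """
    m1, m2 = mean(diff_before), mean(diff_after)
    s1, s2 = stdev(diff_before), stdev(diff_after)
    n1, n2 = len(diff_before), len(diff_after)
    shift = m2 - m1
    t = shift / sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return shift, t

# Toy illustration with made-up monthly differences:
shift, t = shift_and_t([0.1, 0.0, 0.2, 0.1], [0.6, 0.5, 0.7, 0.6])
print(round(shift, 2), round(t, 2))  # 0.5 8.66 — clearly significant here
```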
Minimum and maximum temperatures
Right, moving on.
“But perhaps the most critical flaw in dFDB 2014 — one that should have been sufficient to prevent publication in any self-respecting journal operating a credible peer review process — is that their method ignores any assessment of maximum and minimum temperatures in the adjustment process. This was pointed out to the authors in NIWA’s evidence in the High Court. One of these adjustments will almost always be larger than that for the mean, and if that change is significant, then the temperature record will need to be adjusted at that point – it doesn’t matter if the mean temperature adjustment is statistically significant or not.”
If this is the most critical flaw in our analysis, then why, in NIWA’s Review of the 7SS, did they not do this? Why did they use the mean, as we did? We followed their lead, after all.
By the way, nothing in anything we’ve done precludes NIWA doing their own RS93 analysis. Why have they not done this yet?
“For example, in the “audit”, they infill a month of missing data (May 1920 in the Masterton series) by choosing an unrealistically warm temperature based on an average of years around the adjustment date. This ignores the fact that May 1920 was one of the coldest Mays on record, at all sites involved in the adjustment calculation.”
It was pointed out to NIWA that Mullan had misunderstood how we derived the estimate for that missing month. Regardless, and in response to NIWA’s criticism, I changed the method to follow NIWA’s own estimation technique exactly: we use the average anomaly from the surrounding reference sites to calculate the missing anomaly. So if Gareth wants to criticise our paper’s technique, he criticises NIWA at the same time.
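The anomaly-based infill technique described above might look like the following sketch. The function name and the numbers are mine, for illustration only; they are not NIWA’s or the paper’s actual values:

```python
from statistics import mean

def infill_missing_month(target_normal, reference_anomalies):
    """Estimate a missing monthly value as the target site's long-term
    mean for that calendar month plus the average anomaly
    (observed minus normal) at the surrounding reference sites.
    """
    return target_normal + mean(reference_anomalies)

# Hypothetical illustration: a cold month shows up as negative anomalies
# at every reference site, so the infilled value sits below the normal
# rather than near the average of surrounding years.
estimate = infill_missing_month(
    target_normal=11.8,
    reference_anomalies=[-1.6, -1.2, -1.9],  # all references ran cold
)
print(round(estimate, 2))  # 11.8 - 1.5666... ≈ 10.23
```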
The 11SS was thrown together hastily to try to lend support to the original 7SS and has been thoroughly debunked elsewhere. It has never been published as an ‘official’ series, unlike the 7SS [nor has it been peer reviewed – ed.].
“dFDB 2014 fails to acknowledge the existence of or address the issues raised by NIWA scientist Brett Mullan’s 2012 paper in Weather & Climate (the journal of the Meteorological Society of NZ), Applying the Rhoades and Salinger Method to New Zealand’s “Seven Stations” Temperature series (Weather & Climate, 32(1), 24-38), despite it dealing in detail with the method they claim to apply.”
I’ll repeat a comment I made earlier on this:
“Mullan (2012) is far from a refutation of RS93. In order to show that RS93’s two-year method is incorrect, Mullan would have to prove statistically, and therefore mathematically, that k=2 is insufficient. This he has failed to do – all he has done is provide examples where the results change with longer time periods.
But this reinforces the valid point made in RS93 that gradual effects in other stations introduce inaccuracies with longer time periods. So all he’s done is strengthen the case for shorter periods.”
Sea surface temperatures (SST)
According to Gareth:
“dFDB 2014 also fails to make any reference to sea surface temperature records around the country and station records from offshore islands which also support warming at the expected level…”
Really, does it? You’re sure of that, Gareth? How about in Section 7 of the paper you clearly haven’t read:
“Folland and Salinger estimated 1871–1993 SST variations for an area including New Zealand at about 0.6 °C/century but acknowledged that there is low confidence in the data in the crucial pre-1949 period.”
As for station records offshore, why would we include those? NIWA didn’t in M10, and we followed M10.
How many scientific papers have you published, Gareth?