Analysis of Renowden’s analysis of our reanalysis

• Guest post •

— by Bob Dedekind

Introduction

I chuckled at Gareth Renowden’s attempt to rebut our paper, for two reasons: he makes many mistakes, and whoever is feeding him bits of information seems to be letting him down.

I printed out his article and highlighted his mistakes so I could deal with them individually. However, when I had finished, the whole article was one big highlighted blob, so I’ll focus on just the most glaring mistakes.

Pal review

Gareth makes the remarkable and ignoble assertion that a respected scientific journal could be influenced by a local New Zealand professor into bypassing its established peer-review processes in order to publish a favoured paper. And he says that even though the paper in question is sceptical of established AGW claims, and such papers are always subjected to much greater scrutiny.

Did he learn nothing from Climategate, which showed how NIWA employees colluded with the AGW cabal to stop the publication of papers, and in fact hounded editors who dared to allow free scientific debate in their journals?

Even a cursory glance at the paper metadata shows that our paper went through extensive peer review. It was received in December 2013 and not accepted until October 2014. Contrast this with Mullan’s 2012 paper in a local journal (received May 2012, revised June 2012, published July 2012).

I can assure you, the peer review was extensive. Gareth and his mates had better come up with some good arguments, because those reviewers know what’s what, and they are probably already less than amused by Gareth’s allegations.

He calls them ‘gullible’, he asserts they are not ‘self-respecting’, that they have no ‘credible peer-review process’, that they are not as ‘relevant’ as other journals, and more.

I wonder if a sudden chill may descend on NIWA from certain quarters. After all, we don’t know who the peer-reviewers were, do we? But they know who NIWA is.

Source of 7SS

Now, moving on to specific errors made by Gareth in his rush into print. The funniest is one of the first.

“dFDB 2014 repeats the old canard that NIWA’s Seven Station Series (7SS) before the 2010 review was based on the adjustments made in Jim Salinger’s 1981 thesis. This was a key claim in the NZ Climate Science Education Trust’s evidence to the High Court and so transparently at odds with written reports and papers from 1992 onwards that it was easy for NIWA to refute.”

Really? How about the Parliamentary Questions, where the head of NIWA told us that the 7SS adjustments were based on Salinger’s thesis? Wow. Or NIWA’s Review itself, where the techniques are shown to match Salinger’s thesis?

Richard has already dealt with this in detail, so I won’t go further into it here. Suffice it to say that there is zero evidence to show that the pre-2010 7SS was ever based on a correct application of RS93, apart from the assertions of some at NIWA. When questioned, they claimed all the evidence was lost. Uh huh.

Ignoring NIWA’s work

“dFDB 2014 derives a warming rate of +0.28ºC per century, by claiming to apply a method published by Rhoades and Salinger in 1993 (RS93). It claims to create a new benchmark record by reapplying an old technique — essentially ignoring all the work done by NIWA in deriving the current 7SS.”

It’s difficult to untangle the confusion apparent in this one. Firstly, the current 7SS uses the old technique, based on Salinger’s 1981 thesis; we applied a new technique (RS93) to it for the first time.

Secondly, as for ignoring NIWA’s previous work, Gareth has clearly not even read our paper. In section 3, at the end, we clearly state:

“…we have used the same comparison stations as M10 for all adjustments. In the result, our methodology and data inputs wholly coincide with M10 except in two respects:

(a) the use of RS93 statistical techniques to measure differences, as opposed to S81/M1 measurement techniques (Table 1), and

(b) acceptance of the findings of Hessell’s[10] paper regarding the contamination of Auckland and Wellington raw data.”

M10 is the NIWA Review of the current 7SS. So in Gareth’s mind, basing our paper on all the work done by NIWA in deriving the current 7SS is the same as ignoring all the work done by NIWA in deriving the current 7SS! One weeps.

Workings or SI

According to Gareth:

“The paper as published contains no workings or supplemental material that would allow reproduction of their results…”

Again, surely Gareth hasn’t even read the paper. It contains extensive workings and references in Section 5, and even a worked example in Section 6 to step the reader through the process. All interim results are recorded in detail in Table 3. How is that not reproducible?

Periods for comparison

Gareth claims:

“dFDB 2014 claims that RS93 mandates the use of one year and two year periods of comparison data when making adjustments for a station change, but RS93 makes no such claim. RS93 uses four year periods for comparison, in order to ensure statistical significance for changes — and no professional working in the field would use a shorter period.”

Now at last we’re getting some criticism of the actual science in the paper as opposed to ad hominem attacks.

There is little doubt that RS93 recommends the use of short time periods before and after a site change. They specifically mention one- and two-year periods either side, and in their worked example in section 2.4 they use two years. RS93 do not use four-year periods for comparison, except in another part of the paper dealing with isolated stations (not relevant here).

Any claim that RS93 does not use one- and two-year periods is false. Any claim that RS93 uses four-year periods is equally false.

Of course, it’s more than likely that Gareth’s vision is somewhat blurry on this point. Perhaps he is confused about whether it’s two years before and after a change, or four years in total? Who knows? But if he wants to wriggle out through that tunnel, he should be aware that he would be confirming the two-year approach.

As for the claim that no professional working in the field would use a shorter period: is Gareth now claiming that Dr Jim Salinger (the co-author of RS93) is not a professional, since he clearly uses one in section 2.4 of RS93? What about Dr David Rhoades? Should we write and tell them that?

One last point: we also used three-year periods in our paper on those rare occasions when the results from one- and two-year periods were contradictory. I felt it was the best way to break the deadlock. Nobody to date has criticised that approach.
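
Since so much turns on this point, here is a minimal sketch of the kind of before-and-after comparison involved. To be clear, this is not the paper’s code: the difference series is synthetic, and a simple Welch t-test stands in for RS93’s full significance machinery, which weights several neighbouring stations at once.

```python
# Minimal sketch (not the paper's code): test for a shift in the monthly
# candidate-minus-reference difference series across a known site change,
# using k years of data either side (RS93 discuss k = 1 and k = 2).
import numpy as np
from scipy import stats

def shift_estimate(diff, change_idx, k):
    """Return the estimated step and a p-value for its significance."""
    before = diff[change_idx - 12 * k : change_idx]
    after = diff[change_idx : change_idx + 12 * k]
    shift = after.mean() - before.mean()
    # Welch's t-test as a stand-in for RS93's significance test
    _, p = stats.ttest_ind(after, before, equal_var=False)
    return shift, p

# Synthetic example: 4 years of differences with a 0.5 degC step at month 24
rng = np.random.default_rng(42)
diff = rng.normal(0.0, 0.3, 48)
diff[24:] += 0.5

for k in (1, 2):
    shift, p = shift_estimate(diff, change_idx=24, k=k)
    print(f"k={k}: shift = {shift:+.2f} degC, p = {p:.3f}")
```

Here the step registers as significant at both window lengths; where the one- and two-year results disagree, the three-year fallback described above breaks the deadlock.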

Gareth also makes this assertion, based on, well, nothing at all:

“The choice to limit themselves to one and two year comparisons seems to have been deliberately made in order to limit the number of adjustments made in the reconstructed series.”

No, Gareth: the choice of one- and two-year comparisons was made by RS93. It’s a good paper; you should read it some day.

He (or his minder) claims:

“Limiting the comparison periods makes it harder for adjustments to reach statistical significance, leading dFDB 2014 to reject adjustments even in cases where the station records show site moves or changes!”

This is just confused: “…reject adjustments even in cases where the station records show site moves or changes!”? I suspect Gareth hasn’t yet cottoned on to what we’re doing here. All our checks happen because station records show site moves or changes! What we’re doing is checking the magnitude of the temperature shift, and whether that shift is statistically significant.

It’s not really hard to grasp.

Minimum and maximum temperatures

Right, moving on.

“But perhaps the most critical flaw in dFDB 2014 — one that should have been sufficient to prevent publication in any self-respecting journal operating a credible peer review process — is that their method ignores any assessment of maximum and minimum temperatures in the adjustment process. This was pointed out to the authors in NIWA’s evidence in the High Court. One of these adjustments will almost always be larger than that for the mean, and if that change is significant, then the temperature record will need to be adjusted at that point – it doesn’t matter if the mean temperature adjustment is statistically significant or not.”

If this is the most critical flaw in our analysis, then why, in NIWA’s Review of the 7SS, did they not do this? Why did they use the mean, as we did? We followed their lead, after all.

By the way, nothing we’ve done precludes NIWA from doing their own RS93 analysis. Why have they not done so yet?

Missing data

Gareth states:

“For example, in the “audit”, they infill a month of missing data (May 1920 in the Masterton series) by choosing an unrealistically warm temperature based on an average of years around the adjustment date. This ignores the fact that May 1920 was one of the coldest Mays on record, at all sites involved in the adjustment calculation.”

It was pointed out to NIWA that Mullan had misunderstood how we derived the estimate for that missing month. Regardless, and because of NIWA’s criticism, I changed the method to follow NIWA’s estimation technique exactly: we use the average anomaly from the surrounding reference sites to calculate the missing anomaly. So if Gareth wants to criticise our paper’s technique, he criticises NIWA at the same time.
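
To make the estimation technique concrete, here is a minimal sketch. The station values and numbers are made up for illustration; they are not the real Masterton or reference-site data.

```python
# Minimal sketch (made-up numbers): infill a missing monthly mean at a target
# site from the average anomaly at surrounding reference sites.
import numpy as np

def infill_month(target_normal, ref_values, ref_normals):
    """Estimate a missing monthly mean as the target site's long-term mean
    for that calendar month plus the reference sites' average anomaly."""
    anomalies = np.asarray(ref_values) - np.asarray(ref_normals)
    return target_normal + anomalies.mean()

# Toy values for May 1920 (illustrative only, not the actual station data)
masterton_may_normal = 9.8              # degC, long-term May mean at target
ref_may_1920 = [8.1, 7.9, 8.5]          # a cold May at the reference sites
ref_may_normals = [9.6, 9.2, 10.1]      # their long-term May means

estimate = infill_month(masterton_may_normal, ref_may_1920, ref_may_normals)
print(f"Infilled May 1920 estimate: {estimate:.1f} degC")  # about 8.3
```

Because the infill inherits the reference sites’ average anomaly, a cold May at the neighbours produces a correspondingly cold estimate, which is precisely the property Gareth claims the method lacks.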

The 11SS

The 11SS was thrown together hastily to try to lend support to the original 7SS and has been thoroughly debunked elsewhere. It has never been published as an ‘official’ series, unlike the 7SS [nor has it been peer reviewed – ed.].

Mullan 2012

“dFDB 2014 fails to acknowledge the existence of or address the issues raised by NIWA scientist Brett Mullan’s 2012 paper in Weather & Climate (the journal of the Meteorological Society of NZ), Applying the Rhoades and Salinger Method to New Zealand’s “Seven Stations” Temperature series (Weather & Climate, 32(1), 24-38), despite it dealing in detail with the method they claim to apply.”

I’ll repeat a comment I made earlier on this:

“Mullan (2012) is far from a refutation of RS93. In order to show that RS93’s two-year method is incorrect, Mullan would have to prove statistically, and therefore mathematically, that k=2 is insufficient. This he has failed to do – all he has done is provide examples where the results change with longer time periods.

But this reinforces the valid point made in RS93 that gradual effects at other stations introduce inaccuracies over longer time periods. So all he’s done is strengthen the case for shorter periods.”
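
That point can be demonstrated directly. In the toy sketch below (synthetic numbers, not data from either paper), the difference series contains a known 0.5 °C step plus a gradual drift of 0.01 °C per month, and the estimated step inflates steadily as the comparison window lengthens:

```python
# Minimal sketch (synthetic data): a gradual drift in the difference series
# biases the shift estimate more and more as the comparison window grows.
import numpy as np

months = np.arange(96)                  # 8 years of monthly differences
true_step = 0.5                         # degC step at month 48
drift = 0.01 * months                   # gradual effect: 0.01 degC/month
diff = drift + np.where(months >= 48, true_step, 0.0)

for k in (1, 2, 4):                     # years either side of the change
    before = diff[48 - 12 * k : 48].mean()
    after = diff[48 : 48 + 12 * k].mean()
    print(f"k={k}: estimated step = {after - before:.2f} degC (true 0.50)")
```

With that drift, the one-year window overshoots the true step by 0.12 °C and the four-year window by 0.48 °C, which is exactly the kind of inaccuracy RS93 warn about.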

Sea surface temperatures (SST)

According to Gareth:

“dFDB 2014 also fails to make any reference to sea surface temperature records around the country and station records from offshore islands which also support warming at the expected level…”

Really, does it? You’re sure of that, Gareth? How about in Section 7 of the paper you clearly haven’t read:

“Folland and Salinger [8] estimated 1871–1993 SST variations for an area including New Zealand at about 0.6 °C/century but acknowledged that there is low confidence in the data in the crucial pre-1949 period.”

As for station records offshore, why would we include those? NIWA didn’t in M10, and we followed M10.

Parting shot

According to Gareth, I lack “any publication track record.” Really? A quick check on Google Scholar reveals a few, at least. How about this one, Gareth, or this one? There are others.

How many scientific papers have you published, Gareth?

 


19 Thoughts on “Analysis of Renowden’s analysis of our reanalysis”

  1. Richard C (NZ) on 02/11/2014 at 10:11 am said:

    >”whoever is feeding him bits of information seems to let him down”

    First task for David Wratt on Monday – memo to NIWA’s camp followers:

    Please stop helping.

  2. Richard C (NZ) on 02/11/2014 at 11:06 am said:

    Bob, take a look at Robin’s relative emissivity update graph:
    http://www.kiwithinker.com/2014/10/an-empirical-look-at-recent-trends-in-the-greenhouse-effect/

    Not what I expected. I thought that with measurement error it would be random, but the derivation really does track OLR. There’s a subtle difference, S-B RE vs OLR, that I’ve commented on (it will appear after the moderation delay). I’m hoping you might be able to add some insight at Kiwi Thinker, please.

    • BobD on 02/11/2014 at 11:33 am said:

      Definite track, yes, but is it an artifact of the method used to calculate emissivity?

      Just a thought – I haven’t looked into it in detail.

  3. Richard C (NZ) on 02/11/2014 at 11:37 am said:

    [Gareth] – “….essentially ignoring all the work done by NIWA in deriving the current 7SS.”

    >”Difficult to untangle the confusion apparent on this one”

    Understandable under the circumstances. I would point out that Gareth has inadvertently touched on a key issue identified by J Venning, one which he (Venning) could have pursued but chose to avoid. I highlighted this in the previous post here:
    https://www.climateconversation.org.nz/2014/10/renowden-on-the-reanalysis/#comment-1217463

    The issue centres on the word “derived”. Except I’m sure Gareth was oblivious to this when he chose the word above:

    JUDGMENT OF VENNING J
    [78] As noted, the Trust contends that, rather than apply the best recognised scientific opinion to produce the 7SS, NIWA applied the thesis. NIWA’s position however, is that the methodology relied on to produce the 7SS was in fact derived from the same methodology found in RS93. There is a stark conflict between the parties on this point. It is essentially a factual dispute which does not require the Court to decide which of two tenable scientific opinions should be preferred.

    My annotation from the linked comment – derived, yes, but arbitrarily. Made clear in the ‘Statistical Audit’, but oh, that “evidence is of little assistance to the Court” [according to J Venning in [54]].

    You either adhere to the methodology or you don’t. de Freitas et al (2014) does, NIWA doesn’t.

    • Richard C (NZ) on 02/11/2014 at 10:23 pm said:

      [Venning] – “There is a stark conflict between the parties on this point. It is essentially a factual dispute which does not require the Court to decide which of two tenable scientific opinions should be preferred.”

      The preferred scientific opinion (question of science) is out of the Judge’s domain but the question of fact is not.

      Question of Fact

      An issue that involves the resolution of a factual dispute or controversy and is within the sphere of the decisions to be made by a jury.

      A question of fact receives the same treatment in a bench (non-jury) trial as it does in a jury trial. The only difference is that in a bench trial the same person resolves both questions of law and fact because the fact finder is the judge. Nevertheless, in a bench trial, a judge may not decide material questions of fact without first affording the parties the process of a trial.

      http://legal-dictionary.thefreedictionary.com/Question+of+Fact

      “the fact finder [was] the judge” in NZCSET v NIWA but J Venning didn’t establish fact wrt the evidence (‘Statistical Audit’), NIWA’s methodology, and RS93 (the established scientific opinion). There is no scientific judgement in this, just fact.

      de Freitas et al (2014) resolves the question of fact and the question of science.

  4. Richard C (NZ) on 02/11/2014 at 11:51 am said:

    [Gareth] – “their method ignores any assessment of maximum and minimum temperatures in the adjustment process”

    Huh? So apparently the entire 7SS rationale is wrong. NIWA is wrong. De Freitas et al (2014) is wrong. Wrong, wrong, wrong.

    The right and only acceptable approach is Max/Min, as per BOM’s HQ or ACORN, then. But I don’t recall BOM objecting to NIWA’s Mean approach in their review of NIWA’s 7SS. They had the opportunity to overturn the paradigm then, so why didn’t they?

    Perhaps Gareth should alert BOM to their oversight.

  5. Richard C (NZ) on 02/11/2014 at 11:59 am said:

    >”The 11SS”

    Suffers from the same methodological aberration prior to 1970 as the 7SS does. To say it corroborates the 7SS is circular reasoning.

  6. Richard C (NZ) on 02/11/2014 at 12:15 pm said:

    [Gareth] – “dFDB 2014 also fails to make any reference to sea surface temperature records around the country and station records from offshore islands which also support warming at the expected level…”

    What is the “expected level” for sea surface vs air over land that would confirm 0.96 C/century air over land?

    CRUTEM4 SH vs HadSST3 SH, 1970 – present
    http://www.woodfortrees.org/plot/crutem4vsh/from:1970/plot/hadsst3sh/from:1970

  7. Richard C (NZ) on 02/11/2014 at 12:28 pm said:

    >”How about this one, Gareth, or this one?”

    Do I hear Touché?

  8. Richard C (NZ) on 02/11/2014 at 2:11 pm said:

    >”All our checks are because station records show site moves or changes!”

    This highlights the different approaches on each side of the Tasman.

    For the NZ 7SS the approach has been:

    Site change identification => breakpoint analysis => adjustment criteria

    For the AU ACORN the BOM approach is:

    Breakpoint identification => adjustment criteria => non-climatic change identification as an afterthought.

    Only now, and under pressure, have BOM made the adjustments public.

    So for ACORN this has led to adjustments for undocumented site changes, Rutherglen Research being the case study. BOM makes an adjustment to Min but none to Max. Apparently the Min temperature on one side of a rise (about 20 m) was/is 1.8 C cooler than on the other (BOM adjustments are huge, none less than 0.3 C), but the Max temperature was/is the same (think about that). As I see it, the only way to prove this and resolve the contention is to install an AWS in the vicinity of the previous site to confirm, or otherwise, their PM-95 adjustment method. The move from one side of the rise to the other can only be inferred from the records; there is no actual documentation of the site move.

    Here’s the thing. The adjustment drags down the entire Min series prior to the breakpoint by 1.8 C. But the adjustment is made without consideration of anything local, only by analysis of neighbours. Then, when the adjusted series is graphed against its neighbours, it has the greater trend, i.e. it’s an outlier (think about that too).

  9. Andy on 03/11/2014 at 8:51 am said:

    The media seem to have taken no notice of the paper. Maybe they are too busy repeating the “stark” and “grim” IPCC synthesis report that has just been published. We don’t want to confuse people, after all.

  10. Richard C (NZ) on 03/11/2014 at 11:51 am said:

    The WUWT link for the record:

    http://wattsupwiththat.com/2014/11/01/new-zealnds-temperature-record-challenged-by-new-skeptical-paper/

    Not sure why Rutherglen graphs featured as RT comments.

  11. Richard C (NZ) on 04/11/2014 at 12:44 pm said:

    How does this work?

    Gareth November 4, 2014 at 10:26 am
    http://hot-topic.co.nz/nz-cranks-finally-publish-an-nz-temperature-series-but-their-papers-stuffed-with-errors/#comment-45072

    “And you are — as in so many things, Manfred — presenting a completely misleading picture of global land ice and its response to current warming.”

    See graph from comment: World Glacier Monitoring Service
    http://www.wgms.ch/mbb/mbb13/Fig2_2012.jpg

    The glacier “response to current warming” seems to show an inflexion at 2000: more warming, less glacier MB (mass balance) reduction; less warming, more MB reduction.

    What am I missing?

  12. Robin Edwards on 06/11/2014 at 12:06 pm said:

    Has anyone else downloaded the Excel file of the NZ7 data – a 17-column by 111 (effective) row data set? I recommend that you do. It is enlightening. I’ll not discuss the minutiae of how the data have been gathered, adjusted, modified etc., which seems to exercise many people, but merely the content of the (final?) file. To be very brief, the final temperature data column, labeled “Composite”, represents the seven locations fairly well. The data have a very clear and highly significant slope (warming, from whatever cause). But naive fitting of a least-squares line misses the most interesting and perhaps important message that lies slightly hidden in the numbers.

    This is that in 1954 there was a sudden increase in temperature, and it occurs in all the individual site data columns. There’s no need to bother with “anomalies”, just look at the temperature data. The step change was about 0.4 deg C, and occurred immediately after the 1953 data. Before 1954 the data are effectively constant. It’s easy to compute the slopes of the lines and their standard errors, and thus the confidence intervals. Do this and you will find that, statistically, the existence of significant slopes is very doubtful. Now repeat this exercise for the 1954-onward data. Again the slopes have low probability levels – the data are effectively constant. But the level of the line is about 0.4 C higher for the post-1954 data.

    This step change is most readily found by forming the cumulative sums of the time series. The results are striking. All the locations exhibit the same cusum pattern, with a break point at 1954. This cannot be chance. It is a fundamental property of the scientific observations, and it should be actively considered and contemplated by the professionals who pronounce on the interpretation of the data. If they ignore this very obvious finding they are not doing a proper job!

    I could set out the numerical analyses, but it would be a very long post. I can demonstrate all this in graphics, but do not know how to post them here. Help, please!
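
    [A minimal sketch of the cusum check Robin describes, using synthetic numbers rather than the NZ7 file – ed.]

    ```python
    # Minimal sketch (synthetic data): locate a step change via cumulative sums.
    # The cusum of deviations from the overall mean changes direction at a step,
    # so its extremum marks the candidate breakpoint.
    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1909, 2010)
    # Flat at 12.3 degC before 1954, flat at 12.7 degC after, plus noise
    temps = np.where(years < 1954, 12.3, 12.7) + rng.normal(0.0, 0.25, years.size)

    cusum = np.cumsum(temps - temps.mean())
    break_year = years[np.argmax(np.abs(cusum))]
    print(f"Cusum extremum (candidate breakpoint): {break_year}")  # ~1953
    ```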

    • Hi Robin, you present interesting observations and your statistical comments seem significant, but I have some questions. What is the duration of the step? January? January to March? January to December? Do you assess the cause of the step increase? Is it the only significant step you observe? For example, is it different from the steps I can see at 1915/16, 1923/24, 1927/28, 1937/38, 1953/54, 1961/62, 1962/63 (downwards), 1969/70, 1971/72 (downwards), 1975/76, 1977/78, 1990/91, 1991/92, 1997/98 and 2004/05 at http://www.niwa.co.nz/climate/information-and-resources/nz-temperature-record? They vary from 0.5°C to about 1.2°C. Please forgive me if my questions ignore your statistical remarks; I’m a beginner statistician.

    • Richard C (NZ) on 06/11/2014 at 1:46 pm said:

      Robin,

      >”the interpretation of the data”

      The linear trend over the entire dataset is completely misleading. Applying a moving average or polynomial curve represents the fluctuating data better statistically than a straight line. The nature of the series is not linear.

      Basically, the temperature regime change has simply been warm => cool => warm. And yes, there was an abrupt regime change, as you observe; Salinger addresses this in his earlier papers (I can dig out a cite if you need it). You can see both the moving average and the linear trend on the ‘Seven-station’ series temperature data (archive) graph:

      http://www.niwa.co.nz/sites/niwa.co.nz/files/styles/large/public/sites/default/files/imported/attachments/105273/nztemp7_annual_smoothed_2009.png?itok=wcb48nPF

      ‘Seven-station’ series temperature data (archive) page:
      http://www.niwa.co.nz/our-science/climate/information-and-resources/nz-temp-record/review/changes/seven-station-series-temperature-data

      Note this is the old 7SS, which starts well before the current one at 1909. Of course, NIWA insist on placing a linear trend on the portion of the graph after 1909. But the adjusted series suffers from the same issues as the new series, i.e. the linear trend portion is one third less than on that graph.

    • Richard C (NZ) on 06/11/2014 at 2:07 pm said:

      >”the linear trend portion is one third less than on that graph”

      What I’m getting at is this: when the post-1909 data are adjusted to a 0.3 C/century linear trend instead of NIWA’s overblown 0.9 C/century, the entire pre-1909 series is pulled up by about 0.6 C (the 0.6 C/century difference in trend, accumulated over the century since 1909, lifts the early end of the series by that amount). This puts the moving average at 1860 on a par with the 2000s. This is consistent with other archives:

      de Freitas et al (2014):

      “Extant 1868 archives record the national normal mean surface temperature at 13.1 °C (when converted from degrees Fahrenheit) being the average of 10+ years read at six representative weather stations.”

      NIWA NZT7
      2010, 13.1
      2011, 12.8
      2012, 12.5
      2013, 13.4

      It’s the same in Australia: the archives show 1800s temperatures that were much the same as the present.
