Perrott puts his foot in his mouth

An ancient foot in the mouth

Then Renowden joins him

Our most vocal critic, Ken Perrott, has chanced upon a file I just posted, containing the unadjusted temperature data which was the subject of our paper, Are We Feeling Warmer Yet? (AWFWY), published here in November, 2009.

His response is to claim we made a big error. However, without realising it, Perrott actually accuses Dr Jim Salinger and NIWA itself of that error, because we just copied what Salinger did; what NIWA still does.

Most of Ken’s article at Open Parachute is pious ad hominem nonsense. There’s no reason to respond to all the arm-waving, so the sole point at issue is how to present an annual series with missing data.

The purpose of AWFWY was to compare the NIWA-adjusted Seven-station Series (7SS) with the unadjusted data. It was therefore necessary to use the same techniques as NIWA, insofar as they had been disclosed or were discernible. We had Salinger’s spreadsheet of adjusted readings, and we just did what Salinger did — he averaged years with missing data according to the number of available stations. Exactly what Perrott complains about.
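To make the method concrete, here is a minimal sketch of that rule: average each year over whichever stations actually report. The numbers are invented for illustration, and this is not NIWA's code or ours.

```python
# Average each year's anomaly over whichever stations actually report,
# treating missing values as None. Invented numbers, illustration only.
def yearly_average(anomalies):
    present = [a for a in anomalies if a is not None]
    if not present:
        return None  # no station reported at all that year
    return sum(present) / len(present)

# A hypothetical early year with only 3 of the 7 stations reporting:
year_1909 = [0.2, None, -0.1, None, None, 0.3, None]
average_1909 = yearly_average(year_1909)  # mean of 0.2, -0.1 and 0.3
```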

You can see that from the spreadsheet. If you doubt that fact, just ask NIWA.

So, contrary to Perrott’s assertion, NIWA regularly calculates averages over years of missing data. He claims they “infilled” the missing data, but, in fact, they did not. In response to AWFWY, NIWA published its Eleven-station Series (11SS), which commences in 1931 with eight stations missing and has full data for only one year out of the first twenty-four. Shocking!

The NZ Climate Science Coalition was very critical of the 11SS for this very reason: how dare NIWA claim a “series” for a dataset that was blatantly incomplete? See NZ climate crisis gets worse.

It was not until they asked the Australian Bureau of Meteorology to review its replacement for the 7SS (the NZT7) that NIWA realised the enormity of its mistake: all mention of the 11SS has been omitted from the Review Report, and the NZT7 itself has been recalculated to apply to a “composite series”. See BoM the Terminator.

Ken cements his opinion of our “mistake” with these comments:

A notable feature [of the spreadsheet Treadgold just released] is the large amount of missing data – especially in early times. Even in later years there are data gaps. Notice how Treadgold has calculated the average anomaly value? Simply by taking the average of the anomalies [sic] all 7 stations! Even when some data is missing. Even when he has values for only 1 station! This completely invalidates Treadgold’s analysis.

So now we know that NIWA, by using, say, the average of three stations, “completely invalidates” the New Zealand temperature record. Is that right, Ken? Does James Renwick agree with that?

Then, by way of contrast, Perrott provides his opinion of what his scientific idols at NIWA have done:

How did NIWA handle the problem of missing values? In their original presentation of this data (the one attacked by Treadgold in his report) they did not make this mistake. Instead they used a “7-station composite” – effectively a reconstruction based on estimating missing data. In their most recent presentation (see Painted into a corner?) they removed the early data (where a lot of values are missing) because it’s reliability was questionable. They also did not have the problem of missing data in more recent times which Treadgold had.

Really? How do you know that, Ken? Considering we were working from NIWA’s original data, the set produced by Jim Salinger, then they certainly did have the problem of missing data, even in “more recent times.” They absolutely did not “estimate missing data.” They solved the problem by averaging whatever data they had. So did we. Why on earth do you criticise it?

It’s quite funny that Gareth Renowden, at Hot Topic, obviously without even looking, has accepted Perrott’s analysis holus-bolus. For our entertainment, he enumerates our errors:

Treadgold makes no allowances for missing data, makes no attempt to create a valid composite series, simply averages the numbers and plots them on a graph. There are a lot of gaps in the data — especially in the early years — so the “NZ” temperature is in some years just Dunedin, or Dunedin plus Wellington, or Wellington plus Auckland, and so on. Treadgold’s incredible statistical naivete allows him to not just compare apples to oranges, but to feijoas and konini berries as well.

He’s amusing when he gets going, don’t you think? It just happens that he’s barking up the wrong tree. He goes on, all oblivious to his mistake:

A whole political campaign has been constructed on the back of this statistical idiocy. Variations of Treadgold’s claim have been used in questions in Parliament. Valuable scientists’ time and tax payer money has been wasted pursuing his folly. The Climate “Science” Coalition are still desperately trying to keep the issue alive, hoping that if they can create enough smoke everyone will assume there’s a fire somewhere. Unfortunately for Barry Brill and his colleagues, Treadgold’s statistical incompetence undercuts their whole campaign. Do they really think the NZ public and politicians will take the word of a bunch that sling mud and smear scientists, when they are incapable of doing their own simple sums?

So who is not doing their sums? And we’re not slinging mud, we’re pointing out the simple truth: this is how the New Zealand temperature record has been calculated.

What Gareth says is a good criticism of the NZT7, but it should be directed, of course, at NIWA, not at me or the Coalition.

Delicious. Hoist with their own petard, indeed! Apologies from Gareth and Ken should not be long in coming. But is anyone holding their breath?

And how will NIWA respond to their supporters’ stern criticism of them? Justify their unscientific methods, or apologise, or fix things, or deny everything, or what?

David Wratt, James Renwick: it’s time to be accountable! It’s not just us: your own supporters disagree strongly with what you have done.


I should acknowledge Gareth Renowden’s apology. I appreciate that, Gareth, thanks. However, I must say that when you go straight on with your criticism of AWFWY (which, quite apart from anything else, thoroughly weakens the fabric of your apology), you overlook yet again that its purpose was simply to compare, for each of the stations, the unadjusted with the adjusted data.

Here’s Auckland:

See the effect of the adjustments? From blue to red, the warming greatly increases. “The oldest readings have been cranked way down and later readings artificially lifted to give a false impression of warming … There is nothing in the station histories to warrant these adjustments and to date … NIWA have not revealed why they did this.” NIWA still haven’t.

Six out of seven stations had their warming increased over the evidence of the thermometer readings. An analysis by Barry Brill showed that 80% of the adjustments favoured a warming trend. Just unlucky, was it?

Remember that, at the time, NIWA hadn’t told anyone there were any adjustments. Looking at the official graph on the web site with the supporting information, you’d think you were looking at raw temperature readings. It wasn’t until we compared their graph with the data they made available that the truth became obvious.

So our simple exercise discovered that all of the warming was produced by these hitherto-unknown adjustments.

No bad thing, you might say, if the adjustments were properly done, and I agree. But that discovery made it crucial that every adjustment be rigorously justified, yet NIWA has never answered our crucial question: “Why did you make those adjustments and how big were they?” Which means that nobody knows whether they were properly done. Not even NIWA—because if they could have justified them, they wouldn’t have gone to any further expense.

Instead, they announced a brand-new “reconstruction” worth $70,000! By doing this, they as good as admitted that they couldn’t answer our crucial question. Now, presumably, they can answer it, having just done the exercise anew. New Zealand now has (we hope) a properly scientific temperature record.

Of course, we can’t judge it yet, we’re waiting for publication and NIWA’s estimate of error margins.

But well done, them; well done, too, to the Coalition and the CCG.

It would have been nice to have your support, Gareth, for this public-spirited venture, but we managed without it. I can’t begin to imagine why you were ever against it. I hope you aren’t too disappointed that NIWA gave in to the logic of our criticisms. It might have been the right thing to do, but it must have been a bitter disappointment to their supporters who thought NIWA could do no wrong! Much like a kick in the teeth, I guess.

38 Thoughts on “Perrott puts his foot in his mouth”

  1. Richard C (NZ) on February 10, 2011 at 8:34 pm said:

    Where’s the Climate Science Rapid Response Team when you need them?


  2. Australis had the last word at HT

    Perrott has shot himself in both feet. He supports the 7SS and 11SS, in which NIWA average data over multiple years including those that have missing data. He has been defending them with total myopia and dedication for over 12 months. Now, he has belatedly decided that he doesn’t like NIWA’s approach.

The 11SS is the most egregious example ever of a series whose trend is WHOLLY driven by missing data. Seven stations missing in 1931, three missing in 1941, one missing in 1951 and 1991. There is no trend at all shown by the years which have no missing data.

    Compare this with Perrott’s criticism of Treadgold’s paper – where the missing data moved the trend by less than 0.1°C.

    Seemed to go very quiet after that.

  3. Richard, have a look at the original spreadsheet and do a few calculations. These will show you that the composite temperature is not the average temperature when data is missing. It is constructed by calculating an “average” anomaly (yes, not really very good) and adding this to the mean temperature calculated from the average temperatures over the years 1971-2000. In this way they tried to compensate for missing information by using information in the years 1971-2000.

    I did not describe the details in my post because its complexity would have been a distraction.

    Their spreadsheet doesn’t include the actual graph so it’s not possible to check how they used the composite temperature to get an anomaly used in the graph. If they did it just by subtracting the value they already added then they were simply going around in circles and should be criticized for that. It is silly and naive.

    Now clearly that method of data correction for missing values may be OK when it is occasional but it is hopeless for large blocks of missing data. This would be one of the reasons NIWA took the decision to exclude such poor data in the latest independent reconstruction. Wise I think.

    Any trend calculated including such large blocks of shonky data would be suspect. This is what you did in your recent comment to get a value of 0.06 per century. Deleting the shonky data produced a trend of 0.23.

    Your only defence for this mistake seems to be your claim that NIWA did it! Well, if they did they were also wrong.

    The method is faulty. I have shown that by examples on my blog discussion and this seems to have silenced your mates.

    A mistake is a mistake whoever makes it and I suspect you agree that this data with missing stations should not be included. (Do you, by the way?)

    Whatever the method NIWA used on the old spreadsheet they certainly are not repeating that mistake in the new.

    Perhaps you could explain how you understand NIWA obtained their original graph. I know they took averages of the anomalies where station data was missing. The spreadsheet shows it was used to obtain composite temperatures. Can you provide evidence of how these were used to calculate data for the graph?
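For anyone wanting to reproduce trend figures like the 0.06 and 0.23 per century quoted above, a least-squares slope over an annual series (skipping missing years) can be computed along these lines. This is a generic sketch with a made-up series, not the actual data or method either side used.

```python
def trend_per_century(years, temps):
    """Ordinary least-squares slope of temps against years,
    scaled to degrees per century; None values are skipped."""
    pairs = [(y, t) for y, t in zip(years, temps) if t is not None]
    n = len(pairs)
    mean_y = sum(y for y, _ in pairs) / n
    mean_t = sum(t for _, t in pairs) / n
    num = sum((y - mean_y) * (t - mean_t) for y, t in pairs)
    den = sum((y - mean_y) ** 2 for y, _ in pairs)
    return 100.0 * num / den

# A made-up series warming at 0.005 degrees per year, 1908-2008,
# gives a trend of 0.5 degrees per century:
years = list(range(1908, 2009))
temps = [12.0 + 0.005 * (y - 1908) for y in years]
trend = trend_per_century(years, temps)
```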

  4. Huub Bakker on February 10, 2011 at 9:03 pm said:

    And so we see that when people really start to look at the science they find it lacking. Had Ken thought that this mistake was NIWA’s though, I very much doubt he would have mentioned it at all. Just left it quietly and continued believing in NIWA’s infallibility.

    It is in the nature of true scientific endeavor to be sceptical and not simply accept. Of course the irony is that warmists are sceptical of our claims but not the establishment’s.

  5. Good summary Richard. An amusing episode all round.

    Ken, why are you so far behind what’s going on? Barry wrote this ages ago:

    And once again, you still completely miss the point of AWFWY. It was about looking at the individual station histories, to see how each had been adjusted. The composite graph was a summary at the end, not the whole document.

    “Deleting the shonky data produced a trend of 0.23.”
    Uhh, no. The unadjusted trend of 0.23C/century was always there for 1908-2008. Why is this suddenly a huge discovery? We’ve mentioned it numerous times here in comments.

  6. These will show you that the composite temperature is not the average temperature when data is missing. It is constructed by calculating an “average” anomaly (yes, not really very good) and adding this to the mean temperature calculated from the average temperatures over the years 1971-2000. In this way they tried to compensate for missing information by using information in the years 1971-2000.

    Umm, Earth to Ken? That’s precisely what point 2) in Richard C’s post was talking about. We always knew that, what are you going on about?

    See here, and try to actually read it this time instead of thrash-typing before you think.

    I’ll even reproduce it:
    “2) 7-Station Composite Temperature = 7-Station Anomaly + Average of 7-Station 1971-2000 climatologies
    E.g., for 1909 when there are 4 sites, the 7-Station Composite Temp is NOT the average of the Wellington, Nelson, Lincoln and Dunedin values.”
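That quoted rule (mean of the available anomalies plus the mean of all seven 1971-2000 station climatologies, not just those of the reporting sites) can be written out in a few lines. The climatology and anomaly values below are invented purely for illustration.

```python
def composite_temperature(anomalies, climatologies):
    """7-station composite per the quoted rule: mean of the AVAILABLE
    anomalies, plus the mean of ALL seven 1971-2000 climatologies
    (not just those of the reporting stations)."""
    present = [a for a in anomalies if a is not None]
    mean_anomaly = sum(present) / len(present)
    mean_climatology = sum(climatologies) / len(climatologies)
    return mean_anomaly + mean_climatology

# E.g. a year with only 4 of 7 sites reporting (invented values):
clims = [15.1, 12.8, 12.6, 11.9, 12.2, 11.6, 11.0]
anoms_1909 = [None, 0.1, None, -0.3, None, 0.2, 0.4]
temp_1909 = composite_temperature(anoms_1909, clims)
```

Note that the result is not the plain average of the four reporting stations' temperatures, which is exactly the distinction the quote draws.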

  7. Alexander K on February 10, 2011 at 10:41 pm said:

    Obviously, the sycophants of NIWA don’t do irony – they probably don’t do apologies either.
    I didn’t realise that following this thread would give me such a good chuckle. Which reminds me, and I know it’s wickedly off-topic, but prince chuckles has appeared in the EU parliament to plead for the lowering of the standard of living for all of his would-be subjects! Check out today’s Telegraph (London).

  8. Anthropogenic Global Cooling on February 10, 2011 at 11:26 pm said:

    Ken’s incompetent screwup doesn’t surprise me at all. He’s too interested in religiously waving the AGW flag despite the abundant evidence refuting it. The hypocritical double standards of those promoting this AGW rubbish are an embarrassment to science, but it’s very funny watching them shooting themselves in the foot repeatedly.

  9. Australis on February 11, 2011 at 12:57 am said:

    “Your only defence for this mistake seems to be your claim that NIWA did it! Well, if they did they were also wrong.”

    Yes, Ken, they certainly were. And you thought it would be impolite to point it out – especially as you seem to be having major problems getting your head around the rights and wrongs of the matter. So get this:

    1. NIWA’s 7SS methodology was wrong in innumerable ways.

    2. AWFWY picked up many of the mistakes. The response to missing data was a relatively small error so the paper simply accommodated it.

    3. NIWA immediately hit back with the execrable 11SS. This one is dominated by missing data, but NIWA treated it as if every year had 11 stations. It is so bad that most readers find it difficult to believe that the absurd results were merely stupid errors rather than a deliberate effort to deceive.

    4. The 11SS went straight past you, Ken. How could that happen?

    5. The 11SS went straight past John Morgan and the senior team at NIWA. Morgan even found it necessary to make a press statement defending it.

    6. When BoM arrived, the 11SS was smuggled out of sight. It receives not a single mention in the 169-page Review report. But Mr Morgan has not yet withdrawn his endorsement or apologised for the dreadful ‘science’.

  10. To be fair, where data is missing, or sparse, trying to fill in the “holes” is a difficult and complex task, and is always vulnerable to criticism, justified or otherwise. In NIWA’s case with the present 7 station series they were faced with a dilemma because there is considerable missing data. For example, in the Albert Park record between 2 January 1951 and 31 December 1989, measurements for 247 days (not all of which are consecutive) are missing, which makes endeavouring to cross-correlate with other stations difficult, particularly as data may well be missing from the other station sets. NIWA describe, in Appendix 2 to their report on arriving at composite data for Masterton, how they in-fill the data:

    “First, climatologies and anomalies are calculated for maximum temperatures at
    Waingawa in each calendar month from 1943 to 1972. This is the 30-year period
    following the 1942 site change at Waingawa. An annual climatology for the whole
    1943-72 period is then calculated by averaging the monthly climatologies. The annual
    anomaly is then calculated for each year from 1943 to 1972, by averaging the
    anomalies of the non-missing months. The annual maximum temperature for the
    missing years is then estimated by adding each calculated annual anomaly to the
    annual climatology. This process is then repeated for minimum temperatures over the
    same period. Finally, the annual mean temperature in the missing years is calculated
    by taking the average of the annual maximum and minimum temperatures. This
    method takes advantage of all the monthly temperature data available at the station.”

    The only questions that need to be answered, then, are whether this is a valid approach and whether it has been correctly applied.

    It is worth noting that this is a quite different problem to others which have occupied our attention, namely how and why adjustments have been applied to the raw data.
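For readers who want to follow the quoted Appendix 2 procedure step by step, here is a sketch of it in code for one variable (say, maximum temperatures). It is my reading of the quoted text, not NIWA's actual code, and the input format (a mapping from year to twelve monthly values, with None for missing months) is an assumption.

```python
def infill_annual(monthly, base_years):
    """Estimate annual values from monthly data, following the quoted steps.
    `monthly` maps year -> list of 12 values (None where a month is missing)."""
    # Step 1: climatology for each calendar month over the base period.
    clim = []
    for m in range(12):
        vals = [monthly[y][m] for y in base_years if monthly[y][m] is not None]
        clim.append(sum(vals) / len(vals))
    # Step 2: annual climatology = average of the monthly climatologies.
    annual_clim = sum(clim) / 12
    # Steps 3-4: annual anomaly from the non-missing months, then
    # estimated annual value = annual anomaly + annual climatology.
    estimates = {}
    for y in base_years:
        anoms = [monthly[y][m] - clim[m] for m in range(12)
                 if monthly[y][m] is not None]
        estimates[y] = annual_clim + sum(anoms) / len(anoms)
    return estimates
```

The same routine would then be run for minimum temperatures, and the annual mean taken as the average of the two results, as the final sentences of the quote describe.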

  11. Richard C (NZ) on February 11, 2011 at 10:54 am said:

    Gary, there is no attempt to infill missing data. The central point of the quote is this:-

    “by averaging the anomalies of the non-missing months”

    They neglect missing data, and rightly so I think. This is much better than making stuff up Hansen fashion.

    That LITTLE issue I think should be re-framed not as a missing data issue but as a spatial balance issue, i.e. if there was a missing year for 1 station in the NI (say Masterton) then 1 SI station (say Lincoln or Hokitika) should be dropped for that year to maintain spatial balance.

    NIWA do however, make stuff up in a prior process.

    What everyone is missing is the prior process of obtaining data for the years that are not apparently missing. I have already demonstrated in comments under “NZ Temperature Record – A Brief History” that data is present for Masterton NZT7 that is not present in CliFlo. From my reading of Barry Brill’s explanation of the use of neighbouring stations and remote stations in “NIWA’s maverick methodology”, my thinking is that data is introduced from other stations to infill missing station data. So although the NZT7 may present the illusion that there are only a few missing data years, this is far from the case.

    That is the BIG missing data issue – not Ken’s deluded ramblings, although in this latter real issue his reasoning IS in terms of NIWA’s methodology, because they ARE (I think) infilling data. It’s just that his understanding of the temperature series compilation is extremely shallow at this point.

    In summary: everyone is focussing on the wrong missing data. The data gaps in NZT7 are not the issue and averaging the remaining data as NIWA does is perfectly reasonable (despite Ken’s delusion). Focus instead on the makeup of the data that you do see and you will discover that much of it is actually infilled data. This is the REAL issue along with adjustments.

    A review of Barry Brill’s “NIWA’s maverick methodology” should be required reading, particularly “Neighbour Stations” and “Remote Stations”, which deal with adjustments; but I posit that those stations are also used to infill data, because how else can the Masterton situation be explained?

  12. Richard C (NZ) on February 11, 2011 at 11:30 am said:

    Here’s the CliFlo data for these Masterton stations 2000-2009:- 36735,37662,7578,17466,2446 (only 7578 and 17466 provide data)

    Masterton Intermediate School 2000 2 12.7
    Masterton, Te Ore Ore 2000 2 12.3
    Masterton Intermediate School 2001 2 12.7
    Masterton, Te Ore Ore 2002 2 12.4
    2003 missing
    Masterton, Te Ore Ore 2004 2 12.2
    Masterton, Te Ore Ore 2005 2 13
    Masterton, Te Ore Ore 2006 2 12.4
    2007 missing
    2008 missing
    2009 missing

    Here’s the NZT7 record for the same period:-

    2000 12.79
    2001 12.90
    2002 12.67
    2003 12.62
    2004 12.17
    2005 13.11
    2006 12.40
    2007 12.67
    2008 12.86
    2009 12.31

    I rest my case: the missing data must have been introduced (infilled) by data from neighbouring or remote stations (East Taratahi, Carterton is neighbouring, which is in turn infilled by remote stations Auckland, Christchurch, Wellington, Nelson – I think) because it’s not interpolated or extrapolated.

    East Taratahi is this CliFlo station:-

    2612 D15064 01-Jan-1982 31-Oct-2009 100 East Taratahi Aws

    East Taratahi Aws 2000 2 12.5
    East Taratahi Aws 2001 2 12.7
    East Taratahi Aws 2002 2 12.5
    East Taratahi Aws 2003 2 12.5
    East Taratahi Aws 2004 2 12.3
    East Taratahi Aws 2005 2 13
    East Taratahi Aws 2006 2 12.4
    East Taratahi Aws 2007 2 12.6
    East Taratahi Aws 2008 2 12.9

    There is no data for 2009, so the NZT7 value for that year came from a remote station.

    The other East Taratahi station, 2610 D15062 01-Jul-1972 31-Jul-1978 100 East Taratahi, stopped in 1978.

    Go figure everyone.

  13. Richard C (NZ) on February 11, 2011 at 2:04 pm said:

    Sorry everyone, the NZT7 data was for the NZ composite – not Masterton. Here’s the entire CliFlo record for Masterton with the corresponding NZT7 data. There are two CliFlo values available for 2000 (12.3 and 12.7). CliFlo is the left column, NZT7 is the right, _____ is missing CliFlo data.

    1924 12.5 12.82
    1925 11.7 11.82
    1926 11.8 11.84
    1927 11.7 11.96
    1928 12.2 12.53
    1929 11.7 11.94
    1930 10.9 10.99
    1931 11.5 11.44
    1932 11.3 11.37
    1933 12.0 12.12
    1934 12.1 11.97
    1935 11.9 12.39
    1936 11.3 11.75
    1937 11.3 11.60
    1938 12.6 12.87
    1939 11.6 11.94
    1940 11.4 11.39
    1941 11.6 11.67
    1942 12.0 11.86
    1943 11.6 11.85
    1944 11.6 11.65
    1945 11.3 11.58
    1946 11.6 11.70
    1947 11.7 11.82
    1948 12.2 12.35
    1949 11.7 12.03
    1950 11.9 11.93
    1951 11.5 11.71
    1952 12.0 12.19
    1957 12.4 12.50
    1958 12.6 12.54
    1959 11.8 11.90
    1960 11.8 11.95
    1993 11.2 11.39
    1995 12.4 12.72
    1996 12.1 12.33
    1997 12.0 11.93
    1998 13.4 13.53
    1999 13.0 13.08
    2000 12.3 12.7 12.47
    2001 12.7 12.77
    2002 12.4 12.47
    2004 12.2 12.33
    2005 13.0 13.03
    2006 12.4 12.41

    The NZT7 values are generally warmer than the CliFlo data, so any adjustment seems to be up – not down (or have I got this wrong?).

    There are 98 records; 52 (53%) of the NZT7 records are from neighbouring or remote stations.

  14. Of course the neighbouring stations are used, Richard. They say elsewhere that they use them. Te Aroha was one of the stations used as a comparison to assist in generating the Auckland composite (as was Dunedin!). Te Aroha has 96 days missing in the time span I mentioned. On the basis that such a comparison could only be valid if the statistics of the data sets were similar, I compared the distributions of the Albert Park and Te Aroha sets. Not only are the distributions not normal, but they tend to be bi-modal, and the modality is different in each case. I was particularly interested in the Te Aroha set because it is only a few km from where I live and our weather is very different to that of Te Aroha. It was the combination of the 247 days missing for Albert Park and 96 days for Te Aroha that made attempting a cross-correlation difficult without infilling the missing data. How could I do that? I considered the NIWA approach, but I am not convinced it is the correct approach. Neither is merely adding “smoothed” values in. I might add, I was looking at the raw daily maxima and minima for this purpose.

    For what it is worth, an autocorrelation on each site indicated, as you would expect, a sinusoidal result with a period of 364/5/6 days, the fuzziness being due, I think, to missing data. The autocorrelation was much stronger for Albert Park than for Te Aroha, which I believe is because the day-to-day variability is more marked at Te Aroha than at Albert Park.

    You may draw your own conclusion as to what that means as a basis for considering Te Aroha to be a good indicator of what should happen at Albert Park. You can test the information as I did – the raw data is available in the CliFlo database.

  15. Richard C (NZ) on February 11, 2011 at 3:30 pm said:

    I’ve just realized after re-reading Gary’s first comment on this thread that there are more stations for Masterton than the ones I retrieved. I just searched for “Masterton” when I should have searched a radius about a Lat Lon for Masterton. This would have pulled in Waingawa and whatever other stations are available.

    CliFlo is a bit of a learning curve I confess.

  16. I quite like that radius feature.

  17. Richard C (NZ) on February 11, 2011 at 3:58 pm said:

    I now realize that there were several other neighbouring stations about Masterton that would be data sources. I just searched “Masterton” but should have searched a radius about a Lat Lon for Masterton (what Lat Lon, is the Reference station central? and what radius to pull in “neighbouring”).

    I would have thought that Te Aroha was “remote” from Albert Park rather than “neighbouring” and that seems to be confirmed by your stats comparison.

    Where do the “adjustments” (the big issue) come into play, i.e. at what stage of the process? I can’t see that infilling has much influence on the long-term trend, and in the case of Albert Park, Te Aroha values with perhaps less UHI might tend to pull down Albert Park’s trend in later years but won’t do anything in the mid years and very little in the early years (I think).

    I can see why the BOM audit was not a reanalysis, because it’s a huge job and probably unnecessary, but it’s the “adjustments” that are at issue, and at this stage I’m no wiser as to how and when the “adjustments” are applied.

  18. Richard C (NZ) on February 11, 2011 at 4:05 pm said:

    Bob, could you look at my reply to Gary below. I don’t know what the criterion is for “neighbouring”, i.e. what radius about what centre point will pull in all the “neighbouring” stations for Masterton?

(what Lat Lon, is the Reference station central? and what radius to pull in “neighbouring”)

    Well, I often use a 50km range, then reduce if necessary. Regarding the Lat Lon, I get that from a central station (Station Details has it if I remember correctly).

    Neighbouring should mean the same climate surely, not just physically close.

    I don’t think there are any adjustments or infilling in CliFlo. If you have found large data gaps it’s most likely you have the wrong station.

  20. Richard C (NZ) on February 11, 2011 at 4:36 pm said:

    Using Te Aroha as a remote station for Albert Park would be reasonable if there were no neighbouring alternatives, but it would be an odd choice if there were and surely there are.

  21. Richard C (NZ) on February 11, 2011 at 4:48 pm said:

    Thanks Bob. Yes a remote station could be considered neighbouring if it had the same climate so using a radius search is no help in that case. There must be some search criteria for pulling in all the stations that contribute to the Masterton record say (neighbouring and remote). Perhaps I should read the BOM/NIWA review for clues.

    It was not Cliflo specifically that I was wondering about re adjustments. I just don’t know how and where the controversial “adjustments” are applied. Maybe the BOM/NIWA review is the answer to that also.

  22. Clarence on February 11, 2011 at 6:15 pm said:

    For those who don’t waste time reading Hot Topic, you might be interested in yesterday’s new posting by Mr Renowden.

    After acknowledging that NIWA has consistently averaged whatever data was available, including in the 7SS –

    “Treadgold was therefore following established practise, in that one respect. I therefore apologise to Richard for echoing that specific allegation without first checking the data.”

    Now we know what Will Shakespeare meant by “sound and fury, signifying nothing.”

  23. Richard C (NZ) on February 11, 2011 at 7:17 pm said:

    Renowden still makes this allegation:-

    “However, this does not get him off the hook for the rest of his “analysis”, nor prompt me to change my overall conclusions. I accused him of “statistical idiocy” and that charge stands — not least because he derives his anomalies by taking unadjusted or raw station data and relating it to a 1971-2000 baseline derived from different stations at different locations using different measurement equipment, and then pretends that he’s made the warming disappear”

    My understanding (someone please check this).

    It doesn’t matter what baseline the departure of an anomaly is calculated from. It could be 0 K or the boiling point of water at standard pressure (100 °C) or any other arbitrary value. A local actual temperature baseline is required in order to compute actuals. It is, in the Masterton case at least (contrary to Gareth’s blurb), compiled by NIWA from several local stations (“different stations, different locations and different measuring equipment”).

    A climatological baseline is only valid for 30 years, and is then moved along 30 years, so it acts as a check that there’s not some bias occurring somewhere. An actual temperature calculated by the anomaly method, therefore, has obviously been subject to data smoothing and is NOT an ACTUAL actual temperature.

    If I am correct, Renowden now has both feet in his mouth.
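The narrow claim above (that the trend does not depend on the baseline constant) is easy to check numerically: subtracting a different constant shifts every anomaly by the same amount and leaves the least-squares slope untouched. Whether averaging anomalies from stations with different climatologies is sound is a separate question. A quick sketch with an arbitrary made-up series:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(1971, 2001))
temps = [12.0 + 0.01 * (y - 1971) for y in years]   # invented series
anoms_local = [t - 12.0 for t in temps]             # local baseline
anoms_boiling = [t - 100.0 for t in temps]          # absurd baseline, per the comment
# Both anomaly series have the same slope (0.01 per year here):
same = abs(slope(years, anoms_local) - slope(years, anoms_boiling)) < 1e-9
```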

  24. Richard C (NZ) on February 11, 2011 at 7:25 pm said:

    There’s the possibility that one or more of the stations used to compile the baseline (perhaps the one that provides the most data) is defective – what then?

  25. Richard C (NZ) on February 11, 2011 at 7:50 pm said:

    CTG at HT says this:-

    “The big problem with Treadgold’s analysis remains that he splices together raw data from different stations as if they were one series (what he calls “unadjusted”), which is quite a neat trick to hide the incline.”

    Huh? So does Salinger. CTG could equally say this:-

    “The big problem with Salinger’s analysis remains that he splices together raw data from different stations as if they were one series (what he calls “adjusted”), which is quite a neat trick to hide the slight incline.”

  26. Richard C (NZ) on February 11, 2011 at 8:10 pm said:

    East Taratahi AWS (the Masterton Reference Station) provides only 15 of the 30 years for the 1971-2000 Masterton baseline.

  27. Perhaps David Wratt & James Renwick could be the first to get dealt to by Bill English & the waffle-busting programme he is proposing.

  28. Richard C (NZ) on February 11, 2011 at 11:22 pm said:

    “That LITTLE issue I think should be re-framed not as a missing data issue but as a spatial balance issue”

    Climate balance would be a better term than spatial balance, e.g. Lincoln’s climate is more compatible with Masterton’s than Hokitika’s (see the BOM review, Masterton section).

  29. Richard C (NZ) on February 11, 2011 at 11:34 pm said:

    The BOM review Auckland section indicates that the use of Te Aroha as a comparison site for Albert Park was specifically “to diagnose the source of non-climatic warming” i.e. urban heating.

  30. Richard C (NZ) on February 11, 2011 at 11:36 pm said:

    That should be Figure 5, Masterton section (it shows NZ climate relative to Masterton).

  31. Richard C (NZ) on February 12, 2011 at 12:06 am said:

    Should have gone back and re-read the BOM review long ago. It has all the contributing stations for each location tabulated, making it easy to retrieve them from CliFlo using station codes. The review also states that “Masterton Sites 1 to 6 are all less than 10 km from the reference East Taratahi site (Site 7)”, so a 10 km radius search centred on East Taratahi would pull them all in.

    The review also catalogues the adjustment process, so now I’m much wiser. But I will have to study in detail the CUMULATIVE STEP ADJUSTMENTS plotted in Figure 9, Masterton section, to really understand what is going on, because that is where the progressively increasing downward adjustment towards the early years takes place; it pulls down the early years and creates a steeper trend.

    The CUMULATIVE part is crucial, because there will have to be a very sound reason to apply the adjustments cumulatively; if there isn’t, the steep rising trend disappears.
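A sketch of the cumulative mechanism, assuming the process described in the review: each homogeneity step is carried back through all earlier station segments, so the oldest years receive the sum of every later step. The step offsets below reproduce the Masterton cumulative figures quoted in this thread (-0.08, -0.34, -0.55 C); the annual values themselves are invented.

```python
# Cumulative step adjustments: offsets accumulate going back in time.
# Step values follow the Masterton figures quoted in the thread;
# the annual data are invented for illustration.

segments = [                                  # newest segment first
    ("East Taratahi AWS", [13.0, 13.1]),      # reference site, unadjusted
    ("Waingawa Substation", [12.9, 13.0]),
    ("Essex Street", [12.8, 12.9]),
    ("Workshop Road", [12.7, 12.8]),
]
steps = [-0.08, -0.26, -0.21]                 # offset added at each join

adjusted = list(segments[0][1])               # start with the reference
cumulative = 0.0
for (name, raw), step in zip(segments[1:], steps):
    cumulative += step                        # steps accumulate going back
    adjusted = [t + cumulative for t in raw] + adjusted
    print(f"{name}: cumulative adjustment {cumulative:+.2f} C")
```

If instead each step were applied only to its own segment rather than accumulated, the early years would sit higher and the overall trend would be shallower, which is the point at issue here.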

  32. Richard C (NZ) on February 12, 2011 at 12:50 am said:

    The last cumulative adjustment for Masterton is -0.55 C at Waingawa Workshop Road 2473.

    But the difference between the 1909-2010 averages of all East Taratahi AWS 2612 data and all Workshop Road 2473 data from CliFlo is only 0.17 C (12.55652 – 12.38551).

    Cumulative adjustment -0.55 C

    Actual CliFlo difference -0.17 C

    Is my reasoning flawed? If not, NIWA has introduced a cumulative error of -0.38 C.
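The check-sum in this comment, written out as arithmetic. The two long-period means are the CliFlo figures quoted above; everything else follows directly.

```python
# Masterton check-sum: compare NIWA's cumulative adjustment with the
# direct difference of the two long-period station means quoted above.

east_taratahi_mean = 12.55652    # East Taratahi AWS (2612), 1909-2010 mean
workshop_road_mean = 12.38551    # Workshop Road (2473), 1909-2010 mean

direct_difference = workshop_road_mean - east_taratahi_mean   # about -0.17 C
cumulative_adjustment = -0.55                                 # NIWA review figure

discrepancy = cumulative_adjustment - direct_difference       # about -0.38 C
print(f"direct difference:     {direct_difference:+.2f} C")
print(f"cumulative adjustment: {cumulative_adjustment:+.2f} C")
print(f"discrepancy:           {discrepancy:+.2f} C")
```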

  33. Yes, I noticed they do that a lot, same as Salinger did. The need for the adjustment may be there, but the magnitude of the adjustment is usually way beyond common sense.

  34. Barry Brill on February 12, 2011 at 3:09 pm said:

    “Neighbouring stations” means “stations subject to identical local weather conditions”.
    See Rhoades & Salinger (1995)

  35. Barry Brill on February 12, 2011 at 3:15 pm said:

    The Review Report uses Riverhead Forest and Waiuku Forest (both much closer than Te Aroha) but then admits (p36, footnote 29) that they are subject to a ‘Forest Heat Island’ effect. That would explain why they were useless for a comparison seeking UHI effects.

  36. Australis on February 12, 2011 at 3:23 pm said:

    NIWA also derives its anomalies by taking ADJUSTED data and relating it to a 1971-2000 baseline derived from different stations at different locations using different measurement equipment.

    The only difference is that NIWA adjusts the data from older stations and AWFWY does not. As these random adjustments are supposed to produce random results, they should cancel out over time, and make no difference to the trend. BUT NIWA’s adjustments mainly flow in one direction and accumulate.

    In any event, you are quite right in saying that a change in a benchmark cannot affect the trend. Subtracting a constant from all values leaves the trend unaffected.
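The last point is easy to verify numerically: subtracting a constant from every value leaves the least-squares trend unchanged. The series below is invented purely for the check.

```python
# Minimal check: a constant baseline shift does not change the trend.

def slope(ys):
    """Ordinary least-squares slope of ys against x = 0, 1, 2, ..."""
    n = len(ys)
    x_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(ys))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

series = [12.1, 12.4, 11.9, 12.6, 12.8]   # invented annual means (C)
baseline = 12.36                          # any constant works
anomalies = [t - baseline for t in series]

print(abs(slope(series) - slope(anomalies)) < 1e-9)   # True
```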

  37. Richard C (NZ) on February 12, 2011 at 5:26 pm said:

    “As these random adjustments are supposed to produce random results, they should cancel out over time, and make no difference to the trend. BUT NIWA’s adjustments mainly flow in one direction and accumulate.”

    They’re not random though, they’re systematic, and the accumulation is a result of that. NIWA’s formal term is “cumulative step adjustments”; for Masterton, see Table 1. The baseline is East Taratahi AWS and the cumulative steps are Waingawa Substation (-0.08), Essex Street (-0.34), Workshop Road (-0.55). NIWA went to a lot of trouble to derive those steps using remote comparison stations, but if you extract the data for the last accumulation from CliFlo, 1912-1919, average it and subtract it from the baseline, you get -0.15 C.

    So the further back in time, the greater the accumulation (the steps are a time sequence spanning 1991 back to 1920).

    The local isolated difference at the last step doesn’t coincide with the remote accumulated difference – and it’s a lot less.

  38. Richard C (NZ) on August 4, 2012 at 12:06 pm said:

    My question up-thread was eventually answered over a year later as in the following recount:-

    I’ve had a nagging thought that there’s a loose end somewhere in the lead-up to the ‘Statistical Audit’, so I went back through my email “NZT7” archive and sure enough, I found this at the start:-

    Cumulative step adjustments in the NZT7 BOM/NIWA review report

    Hello Bob and Gary,

    Could you guys do me a favour and check my reasoning in this comment at CCG
    The last cumulative adjustment for Masterton is -0.55 C at Waingawa Workshop Road 2473.

    But the difference between the 1909-2010 averages of all East Taratahi AWS 2612 data and all Workshop Road 2473 data from CliFlo is only 0.17 C (12.55652 – 12.38551).

    Cumulative adjustment -0.55 C

    Actual CliFlo difference -0.17 C

    Is my reasoning flawed? If not, NIWA has introduced a cumulative error of -0.38 C.

    What I was presenting was a simple check-sum for a cumulative calculation in a Masterton case study. As any engineer, surveyor, construction technician or drafty knows, cumulative calcs can lead to some embarrassment come construction time.

    Things got rolling, but after Bob came back with “I’ve been thinking about this all afternoon” everything went ballistic, culminating in the ‘Statistical Audit’. Yet I never really got a conclusive answer, except for Bob’s “From my initial reading of the Masterton data, it seems that you are right in one sense, and not so much in another”.

    But now, with the ‘Statistical Audit’, I’ve received probably the most comprehensive answer ever offered to any question.

    There in the Masterton section, on page 22, Table 3 (Comparison between NIWA and R&S results), I see Site 4 Waingawa (2473) with a cumulative R&S sum of 0.00, so my question is answered and I have a comparison for my check-sum:-

    -0.55 C NIWA Review

    -0.17 C Check-sum

    0.00 C R&S by Bob, Gary et al, ‘Statistical Audit’

    I think the check-sum is closer to R&S (0.17 not being significant IMO) than to NIWA, because the first and last (the last being the reference) Masterton stations are non-UHI and not altitude-distorted. I don’t think the same check-sum would work if they were.

    If NIWA had calculated a check-sum for Masterton, they would have realised there was something wrong with their cumulative sum.
