NIWA’s maverick methodology


A sober analysis from an indefatigable leader of our Campaign for True Temperatures. Barry’s careful, professional reticence stands in stark contrast to the concerns emerging over the work of our premier climate institution. — Richard Treadgold


“NIWA uses internationally accepted techniques” — Hon Dr Wayne Mapp, Minister of Research, Science and Technology.

The principal methodology used by NIWA in calculating adjustments to historical data for both the Seven-station Series (7SS) and its provisional replacement, the New Zealand Temperature Seven (NZT7), is comparison with other temperature stations. This is well explained in the Review Report published December 2010 (page 11) as follows:

  • Micro-climates exist: Within a general region, taking Wellington as an example, there are many micro-climates, and thus temperatures vary from place to place. This is because of Wellington’s varied topography, meaning that the sites have different exposures and aspects and are at different altitudes. All these factors can influence the measured temperature. There is no such thing, therefore, as “the” Wellington temperature; there are many Wellington temperatures, and they are all different.
  • Neighbouring sites vary together: Comparison of temperatures from neighbouring sites shows again and again that trends and interannual variations at nearby sites are very similar. So although the base level temperatures may be different at two sites (due to micro-climate effects), the variations are almost in ‘lock-step’, with occasional exceptions. (See examples in the seven-station documents and on the NIWA website).

If it were not for this ‘lock-step’ variation, it would not be appropriate to join temperature records from different sites at all. But once such parallel variation is recognised, it is a simple matter in principle to adjust temperatures from one site to the same base level as at another site. [It would be almost as simple in practice too, were it not for missing data].

These passages describe the situation where the principal weather station within a city is moved from one site to another, e.g., from Albert Park to Auckland Airport. The temperatures from the two sites may be merged, as long as the difference between their “base level temperatures” can be identified. This is best done by running the two sites in parallel for a lengthy period and then comparing averages from the two sets of data. The interannual variations (being in lockstep) cancel out and the difference in base levels is revealed.
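The merging step just described can be sketched in a few lines of code. All numbers below are invented for illustration; the point is that averaging the difference between the two records over a parallel period cancels the lockstep interannual variation and leaves the base-level offset.

```python
# Hypothetical annual means for two sites run in parallel (invented numbers).
old_site = [14.2, 14.8, 14.1, 14.6, 14.4]
new_site = [13.5, 14.1, 13.4, 13.9, 13.7]   # same years, replacement site

# Averaging the year-by-year differences cancels the lockstep interannual
# variation, revealing the difference in base levels between the two sites.
offset = sum(o - n for o, n in zip(old_site, new_site)) / len(old_site)

# To merge the records, the old site's data are shifted to the new base level.
merged_old = [t - offset for t in old_site]

print(round(offset, 2))   # 0.7
```

The longer the overlap, the better any occasional non-lockstep years average out of the estimated offset.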

Note that there are “occasional exceptions” when a shift in local conditions affects the two sites differently. Although such exceptions do occur, they matter only when they coincide with a site change. To exclude that possibility, analysts compare the before-and-after patterns of the two sites, and also seek corroboration from other nearby sites.

But, as NIWA notes, the problem is with “missing data.” In most instances of old records, MetService readings at the first site stop dead as soon as they commence at the replacement site. The required parallel dataset from the decommissioned site is “missing” and has to be estimated by using data from reference sites.

Development of a missing time series from neighbouring sites (which I will call the Nearby Stations Comparison (NSC) method) is well recognised in the international literature, being cited in both the Review Report and on the NIWA website.
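A minimal sketch of the NSC idea, with invented numbers: when the old record stops just as the new one starts, a neighbour spanning both periods can supply a “virtual overlap”, because each site’s offset from the neighbour can be measured over the years where data do exist.

```python
# Annual means; None marks missing years (all numbers invented).
old_site  = [14.0, 14.5, 13.9, None, None]   # stops at the site change
new_site  = [None, None, None, 13.2, 13.6]   # starts at the site change
neighbour = [13.0, 13.5, 12.9, 13.3, 13.7]   # spans the whole period

def offset_vs_neighbour(site):
    """Mean difference from the neighbour over the years the site has data."""
    pairs = [(s, n) for s, n in zip(site, neighbour) if s is not None]
    return sum(s - n for s, n in pairs) / len(pairs)

# Each site's base level relative to the neighbour gives the implied
# base-level difference between the two sites themselves.
site_change = offset_vs_neighbour(old_site) - offset_vs_neighbour(new_site)

print(round(site_change, 2))   # 1.1
```

The whole method stands or falls on the neighbour genuinely sharing the candidate’s local climate; with a remote station, the measured offsets mix site effects with regional climate differences.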

A comprehensive review of commonly used methods and techniques for homogeneity adjustments was published in the International Journal of Climatology by Peterson, Easterling et al. (1998). This paper has 21 authors, amongst whom are Torok and Nicholls (Australia), P. Jones (UK) and J. Salinger (New Zealand). See Extracts from the Scientific Literature.

The paper divides techniques into objective and subjective categories. Amongst the latter is the Australian Torok & Nicholls (1996) approach, which uses a “subjective decision on the position and magnitude of adjustments, based on an objective statistical test.”

The New Zealand section was based on the paper Rhoades & Salinger (1993). This was also categorised as subjective, but did not offer an objective test. See Extracts from the Scientific Literature.

Both the Australian and New Zealand sections described the NSC method in detail, and both stressed the necessity for the “nearby” comparator station to be subject to the same local weather conditions as the subject station.

Neighbour Stations

The Review Report (above) refers to “neighbouring sites” and “nearby sites.” Throughout the scientific literature on homogeneity adjustments, references to comparison stations invariably involve the adjectives “neighbouring” or “nearby.” If such neighbours are not available, adjustments involve quite different techniques and much greater uncertainty.

Although “neighbour” is undefined, there are numerous references to “local” and “regional” climates. If a comparator station does not experience the same local climatic conditions, and is not in the same region as the candidate station, it is of little or no use.

The Review Report notes at p.21 footnote 17 that: “The stations to be used in comparisons (‘comparison stations’) ideally ought to have experienced the same broad climatic influences as the Auckland sites (‘candidate stations’).”

The use of the word “ideally” reveals the author’s opinion that similar climatic influences would be a bonus, rather than a necessity. There is nothing in the literature to justify this cavalier view.

The Review then proceeds to utilise comparison stations from other regions, other coasts and other islands. In contrast to the NSC mentioned above, I will call this technique the Remote Stations Comparison (RSC) method.

Remote Stations

The Review uses RSC for adjustments to six of the seven stations in the NZT7:

Albert Park:

Dargaville, Te Aroha, Hamilton, Christchurch, Dunedin, Wellington.

East Taratahi:

Auckland, Christchurch, Wellington, Nelson.

Kelburn:

Auckland, Christchurch, Nelson, Taihape, Masterton.

Nelson Aero:

Auckland, Wellington, Dunedin, Hokitika.

Hokitika Aero:

Auckland, Christchurch, Nelson, New Plymouth, Palmerston North, Invercargill.

Lincoln EWS:

(None)

Musselburgh:

Auckland, Christchurch, Wellington, Invercargill, Nelson, Timaru.

Note that the Lincoln adjustments follow Rhoades & Salinger (1993) in using neighbouring stations – Christchurch Gardens, Christchurch Aero, Darfield, Winchmore and Ashburton. But there the similarity ends.

Dargaville abuts the Tasman Sea and is some 1140 kilometres from Dunedin, which fronts the Pacific Ocean on the opposite coast. They are separated by Cook Strait, several mountain ranges and ten degrees of latitude.

Rhoades & Salinger (1993)

This is the only peer-reviewed paper in the international literature dealing with adjustments to the NZ temperature record. References to it pepper the Review Report, as well as the NIWA website and answers to Parliamentary Questions. NIWA claims R&S as the primary authority for its methodology, and relies heavily upon it for legitimacy.

R&S is firmly based on NSC and has no truck with NIWA’s RSC method. The paper itself is divided into two parts, respectively headed “Adjustments of Stations With Neighbours” and “Adjusting An Isolated Station.” See Extracts from the Scientific Literature.

“Neighbours” means “subject to similar local weather patterns.”

The paper defines “Isolated Station” as “a station that has no near neighbours, e.g., early records,” and says “such an adjustment involves much greater uncertainty than the adjustment of a station with many neighbours.”

The “Conclusion” of the paper reiterates the message: “For stations with several neighbours, the decision to adjust for a site change can be taken with some confidence. The same cannot be said for isolated stations.”

The R&S methodology also requires “a symmetrical interval before and after the site change, selecting only those neighbouring stations that have no site changes over the period of comparison. The standard error is based on the variation of a set of [monthly] differences.”

Note that NIWA’s Review methodology does not achieve symmetrical intervals, uses remote stations that have site changes, and relies upon annual rather than monthly differences.
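For contrast, the R&S prescription quoted above can be sketched as follows. The monthly differences are invented; what matters is the shape: a symmetrical window of monthly candidate-minus-neighbour differences on each side of the site change, yielding both an adjustment and a standard error. The error formula here is a simplified pooled form for illustration, not necessarily R&S’s exact expression.

```python
import statistics

# Monthly candidate-minus-neighbour differences in a symmetrical interval of
# 12 months before and 12 months after a site change (invented numbers).
before = [1.0, 1.1, 0.9] * 4
after  = [1.5, 1.6, 1.4] * 4

# The site-change effect is the shift in the mean difference across the change.
effect = statistics.mean(after) - statistics.mean(before)

# A simplified standard error for that effect, pooled from both windows.
n = len(before)
se = ((statistics.pvariance(before) + statistics.pvariance(after)) / n) ** 0.5

print(round(effect, 2), round(se, 3))   # 0.5 0.033
```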

R&S discusses the question of how to weight neighbouring stations, including the respective merits of using an exponential distance formula or the squares of the correlations between stations. The chosen example uses weighting proportional to the fourth power of the correlation.
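A sketch of that weighting scheme, with invented correlations and per-neighbour estimates: each neighbour’s estimated site-change effect is combined with weight proportional to the fourth power of its correlation with the candidate, which sharply downweights poorly correlated (typically more remote) stations.

```python
# Invented per-neighbour estimates of a site-change effect, each paired with
# that neighbour's correlation with the candidate station.
neighbours = {
    "A": {"corr": 0.95, "estimate": 0.60},
    "B": {"corr": 0.80, "estimate": 0.40},
    "C": {"corr": 0.50, "estimate": 1.20},  # weakly correlated outlier
}

# Weight each neighbour by the fourth power of its correlation.
weights = {name: v["corr"] ** 4 for name, v in neighbours.items()}
total = sum(weights.values())
combined = sum(weights[n] * neighbours[n]["estimate"] for n in neighbours) / total

# The outlier "C" contributes little: 0.5**4 carries ~8% the weight of 0.95**4.
print(round(combined, 3))   # 0.565
```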

In brief, the simple annual averages used in the Review are yet another example of NIWA’s maverick methods which owe nothing to the scientific literature.

Urban Heating Effects

The Review Report declares, without reference to authority or evidence, that Auckland is the only location “significantly influenced by urban heating effects.”

This ignores the finding by Hessell (1980) that Christchurch and Wellington showed obvious signs of urban heat island (UHI) effects, and that “only a few” sites were free of it. See Extracts from the Scientific Literature.

By applying the RSC method, the Review concludes that Albert Park warmed relative to other locations ‘at least’ between 1928 and 1960 due to tree growth in the park and increased urbanisation around the park. The Review estimates that such warming would have artificially raised the apparent 1909-2009 trend at Auckland by 0.38°C.

This figure seems surprisingly low, and at odds with recent work undertaken by the Australian Bureau of Meteorology in major Australian cities. But what is even more surprising is that NIWA decides to ignore the estimated UHI figure altogether, rather than subtract it from Auckland’s super-steep adjusted temperature curve.

No explanation is offered other than a throwaway line (p.37) mentioning the desirability of further research. Nowhere in the scientific literature is there any authority for continued use of temperature data from a site known to be significantly influenced by UHI. Once again, NIWA displays its maverick tendencies.

Then it gets worse. The artificially-inflated warming trend at Auckland is chosen as the benchmark for adjusting the trends at Dunedin, Wellington, Hokitika, Masterton and Nelson (see above) so that the entire NZT7 is heavily contaminated by known UHI.

Without the UHI cross-infections, the New Zealand-wide warming trend might well reduce to about 0.53°C/century—a much more plausible figure.
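The arithmetic behind that figure appears to be simple subtraction, assuming NIWA’s published NZT7 trend of roughly 0.91°C/century and treating the Review’s own Auckland UHI estimate as propagating fully into the national series (the contention of this article, not NIWA’s position):

```python
# NIWA's published national trend and the Review's Auckland UHI estimate, both
# in deg C per century over 1909-2009. Treating the UHI figure as contaminating
# the whole series is this article's contention, not NIWA's position.
nzt7_trend = 0.91
uhi_estimate = 0.38

corrected = nzt7_trend - uhi_estimate
print(round(corrected, 2))   # 0.53
```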

Uncertainties

The R&S paper concludes:

“Whatever adjustment procedures are used, the presence of site changes causes an accumulating uncertainty when comparing observations that are more distant in time. The cumulative uncertainties associated with site change effects, whether adjustments are made or not, are often large compared with effects appearing in studies of long-term climate change. For this reason it is a good idea to publish the standard errors of site change effects along with homogenized records, whether adjustments are made or not.”

NIWA accepts the need to publish the standard errors of the adjusted records. At page 5, the Review Report confirms that

“further research is under way to quantify how the accumulating adjustments influence the uncertainty in the trend estimates.”

Almost any method offering clues to the content of “missing data” has some potential to be useful. Whether the method is best described as a guess or as scientific certainty is an issue which can and should be determined by statistical “hypothesis testing.” In light of the basic principles described in the Review Report (above) the null hypothesis for the RSC method should be:

“During any period, the trends and average interannual variations of temperatures at any two New Zealand weather stations (neighbouring or not) will be almost in lockstep, with only occasional exceptions.”

Of course, it will then be necessary to define “almost in lockstep” and “occasional exceptions” and to quantify the uncertainties associated with those terms.


7 Thoughts on “NIWA’s maverick methodology”

  1. Andy on 05/02/2011 at 5:14 pm said:

    If these temperature records are so intertwined with dependencies, then it is very hard to pull apart the threads to see where the warming signal is coming from.

    Thanks to Barry for another interesting analysis.

  2. Richard C (NZ) on 05/02/2011 at 7:23 pm said:

    “The temperatures from the two sites may be merged, as long as the difference between their “base level temperatures” can be identified. This is best done by running the two sites in parallel for a lengthy period and then comparing averages from the two sets of data. The interannual variations (being in lockstep) cancel out and the difference in base levels is revealed. ”

    Fine, but if there is not sufficient overlap then it’s just subjective guessing, and if a similar process was not followed for the 31 remote station comparisons then they are just making it up. No need to be a climate scientist to do that; any amateur effort will do, because how can the results be verified, except by reinstalling stations retrospectively for the purpose of comparison?

    • Barry Brill on 05/02/2011 at 11:54 pm said:

      “if there is not sufficient overlap then it’s just subjective guessing”.

      Well, it’s a bit better than that. When there is no actual overlap, NIWA sets out to create a “virtual overlap” by reference to other stations. If another acceptable station (or the average of several such) experienced an inter-annual temperature shift at the relevant time, then it is assumed that the subject site would have experienced an identical shift.

      The problem is that there are no firm rules governing the selection of the reference stations – or the weighting to be ascribed to each one. The Review discards 51 of the 52 adjustments made by Salinger, and disagrees with his selections. This ensures that the outcome is determined by the subjective decisions of the particular analyst.

    • Richard C (NZ) on 06/02/2011 at 8:51 am said:

      “Assumed…” being the key element. Who is to say that assumed adjustments are incorrect?

      A review can only discard subjective adjustments by weight of opinion but that does not make their assumed replacement values any more correct than Salinger’s assumed values.

      Even if there were firm rules for selection of reference stations, that would not make the process any more than an assumption – it would just be an assumption with constraints.

  3. Alexander K on 06/02/2011 at 4:19 am said:

    I had experience of wildly differing microclimates between Albany and Warkworth on a daily basis for a number of years, and I know from that experience that any person who insists that widely separated weather stations in New Zealand will give similar readings is either barking mad or absolutely ignorant of NZ’s climate, neither of which is something one would expect of NIWA.
    Playing statistical games to ensure that the mythical warming occurred is hardly objective science – inference can never substitute for empirical, observed data.
    Lord Rutherford would NOT approve of NIWA’s approach to science!

  4. Richard C (NZ) on 06/02/2011 at 2:06 pm said:

    Earth the same temperature now as 30 years ago — and decreasing

    http://www.heliogenic.net/2011/02/03/earth-the-same-temperature-now-as-30-years-ago-and-decreasing/
