Letters to the Editor

Get fair dinkum


To the Editor
Climate Conversation

15th February 2016

If governments truly believed that man’s production of carbon dioxide causes dangerous global warming, they would ban the use of motor cars, motor trucks, tractors, motor homes, motor bikes, motor mowers, motor launches and petrol-driven chain saws. These all pump out the two dreaded greenhouse gases: carbon dioxide and water vapour. Horses, bullocks, wagons, bicycles, scythes, row-boats and axes are the true green tools. All were good enough for our pioneers.

They would also close all coal, oil and gas-fired power stations, and cover the land and buildings with solar panels and windmills. (Smart people would also stock up on candles and firewood for those cold still nights and cloudy windless days.)

Fair dinkum climatists would also ban all tourism advertising. It just encourages people to jump into cars, buses, trains, aeroplanes and ships to go somewhere else, consume local resources, produce tonnes of CO2 and then come home again (passing in transit all the other people doing the same trips in reverse). We should surely be instructed to stay home and watch David Attenborough on battery-powered TV.

What about all the government-promoted fireworks displays, motor rallies, sports extravaganzas and never-ending world games and expos? These all require millions of people to go somewhere, consume things and then return home, producing heaps of carbon dioxide. With the modern magic of NBN, every Australian could have a ringside seat at every world circus without leaving the comfort of their own lounge chair.

And if governments were Fair Dinkum, they would have already nominated a region to pilot-test the costs and benefits of their true-green society (I nominate Tasmania).

Today’s politicians are not Fair Dinkum.

If they were Fair Dinkum, they would confess that carbon dioxide is innocent and all this has nothing to do with controlling climate, but everything to do with controlling people.

Viv Forbes


forbes [at] carbon-sense [dot] com


20 Thoughts on “Letters to the Editor”

  1. Richard C (NZ) on 17/02/2016 at 2:59 pm said:

    From back in 2009 at the ‘Climate Change’ blog (Chris Colose):

    ‘Re-visiting climate forcing/feedback concepts’

    One “Richard Treadgold” was interested in comments:

    Richard Treadgold | October 16, 2009 at 10:17 pm |

    “What you say here is useful and, though my grasp of the maths is insecure, it’s rare to find such an accessible description, Chris, so thank you.” [continues]


    # # #

    Lots of theory in the post but no recourse to observations. Also internally contradictory (CO2 vs solar).

    Scroll down to the “Planck response” passages after the Radiative Forcing Components graph. Colose says:

    The temperature response can then be linearly related to a forcing

    ΔT = λF

    Where λ is the Planck-feedback factor described above. It is important to note now that this is an equilibrium formula, meaning that we don’t see the full temperature response show up right away if we instantly double CO2, since it takes time for the radiative imbalance to go to zero (it’s hard to heat up the oceans quickly!). We’ll see that when we actually allow other things like clouds, water vapor, albedo, etc. to vary with the climate response (as opposed to the unrealistic Stefan-Boltzmann-only feedback), then lambda becomes a function of all those things, and describes how the total forcing is connected to the temperature response. This formula implies that for a 4 Watt per square meter forcing (remember, about a doubling of CO2 equivalent), you get roughly a 1 K temperature [continues]
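The quoted relation can be sanity-checked numerically. A minimal sketch, assuming a Planck-only feedback parameter derived from the Stefan-Boltzmann law at an effective emission temperature of ~255 K (both assumptions, not figures from Colose's post):

```python
import math

# Planck-only climate response: Delta T = lambda * F (equilibrium).
# lambda is approximated as 1/(4*sigma*T^3) at an assumed effective
# emission temperature of ~255 K.
sigma = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0            # effective emission temperature, K (assumption)
lam = 1.0 / (4 * sigma * T_eff**3)   # Planck feedback parameter, K/(W m^-2)

F = 4.0                  # forcing for ~doubled CO2, W/m^2 (as in the quote)
dT = lam * F
print(f"lambda = {lam:.3f} K/(W/m^2), Delta T = {dT:.2f} K")
```

With these assumptions the no-feedback response to a 4 W.m-2 forcing comes out at roughly 1 K, matching the figure in the quote.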

    The Planck-feedback factor is the warmy comeback as to why the observed TOA energy imbalance is not conforming to theory. The reasoning gets a bit woolly; for example, I got this back at me at Climate Etc:

    Pierre-Normand Houle | August 30, 2015 at 12:53 am |

    Richardcfromnz, the FAQ definition is simplified for the layman reader. Section 2.2 in AR4, WG1 is more precise:

    “The definition of RF [Radiative Forcing] from the TAR and earlier IPCC assessment reports is retained. Ramaswamy et al. (2001) define it as ‘the change in net (down minus up) irradiance (solar plus longwave; in W m–2) at the tropopause after allowing for stratospheric temperatures to readjust to radiative equilibrium, but with surface and tropospheric temperatures and state held fixed at the unperturbed values’. ”

    It is defined thus because an instantaneous variation in radiative forcing causes an imbalance that eventually yields a compensating variation in surface and tropospheric temperature that tends to cancel this imbalance (the so called Plank response). But this subsequent reduction of the imbalance isn’t itself a variation in the forcing. The forcing change governs the ultimate adjustment in surface and tropospheric temperature that will cancel the initial imbalance that it causes. If the imbalance itself were to govern the surface temperature, as you seem to understand the concept of forcing to imply, then you would get the absurd result that the climate system could accumulate or lose energy forever (as a result of some fixed imbalance) without this producing any change in surface temperature.


    The Ramaswamy et al. (2001) RF definition is theoretical; the FAQ definition is more realistic. Real-world planetary energy flows don’t conform to the Ramaswamy definition. So too is “instantly doubled” CO2 a theoretical construct. Theoretical CO2 forcing as at 2015 was already 1.9 W.m-2 (C 400 ppm, Co 280 ppm), i.e. CO2 has already increased by a factor of 1.43 since 1750, so why not consider that first? The observed TOA imbalance, however, is fluctuating around a roughly constant 0.6 W.m-2. In other words, the climate system has been accumulating energy as a result of a fixed imbalance producing a change in surface temperature, contrary to Pierre-Normand Houle above.
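For what it's worth, the 1.9 W.m-2 figure is reproducible with the standard simplified CO2 forcing expression (assumed here to be the Myhre et al. logarithmic form; the comment does not say which formula it used):

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): F = 5.35 * ln(C/C0).
# Assumed to be the formula behind the 1.9 W/m^2 figure in the comment.
C0 = 280.0   # pre-industrial CO2, ppm
C = 400.0    # CO2 as at ~2015, ppm
ratio = C / C0
F = 5.35 * math.log(ratio)
print(f"C/C0 = {ratio:.2f}, F = {F:.2f} W/m^2")
```

This gives a CO2 increase factor of ~1.43 and a theoretical forcing of ~1.91 W.m-2, as stated.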

    Problem is: the TOA imbalance is only a third of theoretical CO2 forcing, and trendless, and the change in temperature is less than valid CO2 forcing would imply (think CO2-forced climate models). And the Planck-feedback factor is accounted for over time in the measured imbalance.

    The TOA energy imbalance observed by satellites is simply the difference between incoming solar radiation (UV-A/B, Vis, IR-A/B) and outgoing IR-C radiation measured at the top of atmosphere, irrespective of the Ramaswamy et al definition of Radiative Forcing (RF), which is a subtly different concept to the TOA energy imbalance.

    The important distinction is whether a “forcing” (ANY forcing) acts via the ocean or not. Solar forcing obviously acts via the ocean. The IPCC speculates (for 25 years now) that CO2 acts via the ocean too via “air-sea fluxes” but has no evidence whatsoever to back up the speculation. Colose in comments obviously subscribes to the CO2-heats-the-ocean idea but doesn’t provide any evidence either.

    Back to Colose’s post:

    >”we don’t see the full temperature response to show up right away if we instantly double CO2, since it takes time for the radiative imbalance to go to zero (it’s hard to heat up the oceans quickly!)”.

    This is valid (sort of; no-one really knows if zero imbalance actually occurs, or for how long) in respect of solar change and forcing, but not valid in respect of theoretical CO2 forcing. If CO2 theory were valid, instantly doubled CO2 would mean a 4 W.m-2 RF, i.e. 4 W.m-2 of radiative energy that (supposedly) has no egress to space from the troposphere, and would instantly produce a TOA energy imbalance (hypothetically) of 4 W.m-2. That has nothing to do with the ocean in the case of CO2, because theoretical CO2 forcing can only act on outgoing radiative energy. CO2 is not an ocean heating agent; the sun is. CO2 warming the oceans is IPCC speculation, it is not fact. A physical impossibility anyway, but ’nuther story (see as follows, though).

    LW radiation is a COOLING effect at the surface (OLR – DLR = net OLR, about -50 W.m-2). And of DLR, which can be 400 W.m-2 24/7 in the tropics, only about 6–7 W.m-2 is the CO2 component. The forcing at the surface has already occurred via sun and ocean: about 24 W.m-2 of solar accumulation in the tropics and about -11 W.m-2 of dissipation in the Southern Ocean, averaging 0.6 W.m-2. A small solar change accumulating over time (say 400 years) easily accounts for the average imbalance at the surface, and therefore at TOA, and therefore for temperature (see solar-temperature model below).

    A valid radiative “forcing”, if it exists, is in respect to the tropopause and measured at the top of atmosphere, whether theory or observation. Any feedback or Planck response is accounted for over time. The current theoretical CO2 forcing is in respect to 1750, with an uptick around 1955. That’s enough time for a Planck response to CO2 forcing to occur and be accounted for (if there actually is one to theoretical CO2 forcing in reality).

    Colose then states some wild stuff in respect to solar change:

    To compute a radiative forcing for an increase in solar irradiance, we do

    F_solar = S0 * (percentage change / 100) * (1/4) * (0.7)

    where the 1/4 and 0.7 factor account for the geometry and albedo of the Earth, respectively. Depending on how radiative forcing is defined, this number can often be reduced further to account for ozone absorption of UV or other effects, but in general the forcing due to a realistic change in solar increase is very small. It follows that it would take about a 22 W/m2 change in solar irradiance to produce a 1 K change in global temperature. This is actually a very stable climate. This also demonstrates the intellectual bankruptcy of those who claim that the solar trend over the last half century (which has pretty much been a flat-line when you remove the 11-year oscillatory signal) is responsible for most of the observed late 20th century warming, and simultaneously argue for a low sensitivity.
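Under Colose's own Planck-only assumptions the arithmetic is reproducible; the objection below is to the premise, not the multiplication. A sketch (the S0 value and the 0.27 K/(W.m-2) sensitivity are assumed values, not from his post):

```python
S0 = 1361.0          # total solar irradiance, W/m^2 (modern value, assumption)
dTSI = 22.0          # TSI change claimed to give 1 K
pct = 100.0 * dTSI / S0

# F_solar = S0 * (percent change / 100) * (1/4) * 0.7, per the quoted formula
F_solar = S0 * (pct / 100.0) * 0.25 * 0.7
lam = 0.27           # assumed Planck-only sensitivity, K/(W/m^2)
dT = lam * F_solar
print(f"F_solar = {F_solar:.2f} W/m^2, Delta T = {dT:.2f} K")
```

A 22 W.m-2 TSI change maps to ~3.85 W.m-2 of forcing, which at the no-feedback sensitivity gives ~1 K, i.e. Colose's figure.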

    This is absurd – “a 22 W/m2 change in solar irradiance to produce a 1 K change in global temperature”.

    The IPCC attributes pre-1950 temperature change (in respect to 1750) to solar change of far less than several W.m-2. They threw out Shapiro et al’s estimate of 6 W.m-2 (LIA to present) in favour of least-case solar scenarios, but they still make the solar attribution prior to 1950.

    To put Colose’s “22 W/m2 change in solar irradiance” in perspective, the current solar accumulation in the tropics is in the order of 24 W.m-2 (Fairall et al 1976) which produces a surface energy imbalance of 0.6 W.m-2 (Stephens et al 2012). This accumulation is year after year as long as solar levels are reasonably constant as they were from about 1958 to present. Colose implies that it would take an extra 22 W.m-2 solar accumulation in the tropical ocean, year after year, for years and years, to boost global temperatures 1 K. This is brainless. Historical TSI variation of far less than 22 W.m-2 is already adequate to account for a 1 K temperature boost as demonstrated by Jeff Patterson’s model below:

    ‘A TSI-Driven (solar) Climate Model’ February 8, 2016 by Jeff Patterson

    Chris Colose seems to have no conception as to the relative energetics (i.e. energy-per-photon) of solar radiation (UV-A/B, Vis, IR-A/B) vs DLR (IR-C), of thermal lag in the sun-ocean-atm system (even though he states delay due to Planck response in respect to CO2 theory), of solar change over the last 400 years, and how to actually apply historical solar change and forcing to the planetary system as for example Jeff Patterson’s model.

    The thermal lag in either planetary inertia/lag calcs or TSI-driven models is in the order of several decades. The exception is the Evans/Nova N-D solar model which only has an 11 year lag (which is unrealistic IMO). Dr Kevin Trenberth states that the ocean adds “10 – 100 years” delay to the climate system. It is now 30 years since TSI peaked in 1986 but the elevated level remained much the same until 2005. In other words, we are only now in 2016 experiencing the effect of 1986 solar levels.

    Like most warmies, Colose demands a “trend” in TSI over the latter part of the 20th century to conform to the temperature trend. But he doesn’t realize that the extremely elevated, albeit trendless, TSI level was providing oceanic energy accumulation year after year; an atmospheric temperature response is now occurring decades later, year after year, and will continue for some time. In short, Chris Colose is thermodynamically illiterate.

  2. Richard C (NZ) on 17/02/2016 at 3:12 pm said:

    >”the current solar accumulation in the tropics is in the order of 24 W.m-2 (Fairall et al 1976)”

    Should be Fairall et al (1996).

  3. Richard C (NZ) on 17/02/2016 at 3:59 pm said:

    >”The IPCC speculates (for 25 years now) that CO2 acts via the ocean too via “air-sea fluxes” but has no evidence whatsoever to back up the speculation.”

    In AR5 Chapter 3 Observations: Ocean, the IPCC admits that it cannot find evidence of said speculated “air-sea fluxes” of anthropogenic origin i.e. the speculated flux, even if it does exist, is so small it is undetectable.

    The body of observational literature defining energy at the AO interface isn’t bothered with DLR as distinct from OLR, let alone a minuscule speculated anthro component of DLR. Fairall et al (1996) mentioned previously only states “Rnl”, which is net long-wave radiation, i.e. OLR – DLR. The net is a large outgoing cooling flux of about 50 W.m-2.

    In the tropics the cooling fluxes (evaporation, radiation and conduction) are not sufficient to balance the solar flux, leaving an average deficit of about 24 W.m-2 (depending on surface conditions, e.g. wind, clouds, etc.). The accumulated energy is dissipated in the extratropics towards the poles.
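The bookkeeping described above can be sketched as a toy surface budget. All values except the -50 W.m-2 net long-wave and the ~24 W.m-2 deficit are illustrative assumptions, chosen only to make the arithmetic balance:

```python
# Indicative tropical ocean surface energy budget (W/m^2); sign: + into ocean.
# absorbed_solar, latent and sensible values are assumptions for illustration.
absorbed_solar = 200.0     # net shortwave absorbed (assumption)
net_longwave = -50.0       # OLR - DLR, net cooling (per the comment)
latent = -110.0            # evaporation (assumption)
sensible = -16.0           # conduction/sensible heat (assumption)

net = absorbed_solar + net_longwave + latent + sensible
print(f"net surface accumulation = {net:.0f} W/m^2")
```

The residual, ~24 W.m-2 into the tropical ocean, is the "deficit" the cooling fluxes fail to balance.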

    Point is, the IPCC are trying to find a negligible and ineffective flux among fluxes that are very significant and very effective. They actually concede this in Chapter 3 using similar language.

  4. Richard C (NZ) on 17/02/2016 at 4:56 pm said:

    >This is absurd – “a 22 W/m2 change in solar irradiance to produce a 1 K change in global temperature”.

    Jeff Patterson’s solar climate model (indicative if nothing else) TSI input data:

    Figure 2- (a) TSI reconstruction (Krivova 2010); (b) The input driving time series u(t)

    About 1 W.m-2 change in solar irradiance which after allowing for the lagged accumulation of oceanic heat in the system (and subsequent dissipation) reproduces a 1 K change in surface temperature:

    Figure 10- Modeled results versus observation

    This result using 1 W.m-2 from Krivova et al does tend to discount Shapiro et al’s 6 W.m-2 change in solar irradiance but keep in mind that Jeff’s model is a “work-in-progress”.

    Chris Colose says another 21 W.m-2 is required to do this. He doesn’t understand heat in respect to water.

  5. Richard C (NZ) on 17/02/2016 at 5:14 pm said:

    >”The thermal lag in either planetary inertia/lag calcs or TSI-driven models is in the order of several decades.”

    Jeff Patterson’s system lag (TSI – Temp) using Krivova (2010) TSI: 84 years

    Using an updated TSI series from comments: 73 years

    See comment:

    This is well in excess of the “30 – 40 years” solar-temperature lag found by Zhao and Feng in Antarctica over millennia.
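The lag argument running through these comments amounts to a first-order (one-box ocean) response: after a step in forcing, temperature approaches equilibrium on the ocean time constant, so an elevated-but-trendless TSI keeps delivering warming for decades. A minimal sketch with an assumed 30-year time constant (within the "10 – 100 years" range attributed to Trenberth above):

```python
import math

# One-box ocean response to a step in forcing: T(t) = T_eq * (1 - exp(-t/tau)).
def step_response(t_years, tau=30.0, T_eq=1.0):
    """Fraction of the equilibrium warming realised t_years after a step."""
    return T_eq * (1.0 - math.exp(-t_years / tau))

for t in (10, 30, 73, 84):   # 73 and 84 yr are Patterson's fitted lags
    print(f"t = {t:3d} yr: {100 * step_response(t):.0f}% of equilibrium realised")
```

With tau = 30 years, only ~28% of the response is realised after a decade but over 90% after 73-84 years, which is why a fitted system lag of several decades is plausible in such models.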

  6. Richard C (NZ) on 18/02/2016 at 12:18 pm said:

    Enhanced levels of carbon dioxide are likely cause of global dryland greening, study says

    Written by Science Daily on 17 February 2016.

    Enhanced levels of atmospheric carbon dioxide are a likely key driver of global dryland greening, according to a paper published in the journal Scientific Reports.

    The positive trend in vegetation greenness has been observed through satellite images, but the reasons for it had been unclear.

    After analyzing 45 studies from eight countries, Lixin Wang, assistant professor of earth sciences in the School of Science at Indiana University-Purdue University Indianapolis, and a Ph.D. student in Wang’s group, Xuefei Lu, concluded the greening likely stems from the impact of rising levels of atmospheric carbon dioxide on plant water savings and consequent increases in available soil water.

    “We know from satellite observations that vegetation is greener than it was in the past,” Wang said. “We now understand why that’s occurring, but we don’t necessarily know if that’s a good thing or not.”


  7. Maggy Wassilieff on 18/02/2016 at 6:20 pm said:

    You might be interested in this posting on “No Tricks Zone”

    It contains a list of 250 scientific papers published in 2015 that cast doubt on the settled Climate Science story.
    The 250 papers are thoughtfully arranged under 19 different headings and brief summaries of each paper are presented.

  8. Richard C (NZ) on 19/02/2016 at 8:37 am said:

    Great resource Maggy, thanks. And that’s just from 2015!

    I note the first category, something I’m very interested in, is: Solar Forcing of Climate

    64 papers listed from 2015 alone. The IPCC assessment reports do not have a solar chapter i.e. very few, if any, of these papers would see the light of day in the next IPCC assessment (if there is one). The NIPCC on the other hand (Climate Change Reconsidered II), does have a solar chapter (Chapter 3).

    Proof that IPCC climate assessments are not exhaustive but merely CO2-centric.

    Next category: Cloud Radiative Forcing……

  9. Richard C (NZ) on 19/02/2016 at 10:06 am said:

    Ya gotta laugh. WUWT has a post on ‘local climate models’:


    The first sentence of the press release for the paper featured is this:

    “Global models can simulate the earth’s climate hundreds of years into the future, and have been used to evaluate climate impacts on water, air temperature, human health, extreme precipitation, wildfire, agriculture, snowfall, and other applications.”

    My immediate thought was “I bet this is one of the first things picked up in comments”.

    Sure enough, check out from the fourth comment on.

    # # #

    On ‘local climate models’ Janice Moore has an interesting comment:

    Janice Moore February 18, 2016 at 9:27 am

    Well said, SAMURAI.

    You other commenters have done a fine job already of cutting this Junk Science Gravy Train off at the pass, but, just for future reference, I’m going to go ahead and post this thought. At first glance (until you see just how off their methods and conclusions are), I thought: “Oh, I see. Now, they are going to try to fool people into thinking that the enormous gaps in their computer grid are no problem anymore…

    That is, that they were (I at first thought) trying to pretend that the following problems are solved by their above “methods”:

    “Computer analysis requires that the earth be ‘cut’ into small, separate areas (actually volumes), each being analysed for heat input/outputs and other gas/vapour fluxes. Even so the computational analysis domain size (basic computer grid elements) is huge, 150km x 150km by 1km high, with the current computer power. It is so large that the effects of even the very large clouds are not individually included; and that includes clouds in our visual horizon. The spatial resolution is therefore very poor. Supercomputers cannot give us the accuracy we need.”

    (From Dr. Geoffrey G. Duffy Report linked and quoted here: http://wattsupwiththat.com/2008/09/04/even-doubling-or-tripling-the-amount-of-co2-will-have-little-impact-on-temps/ )

    Also discussed in detail by Dr. Christopher Essex in his “6 Impossible Things” lecture video linked here:


    Dr. Essex in above lecture on physics equations not yet solved and computer math gross inadequacies for climate modeling {with approx. times in video}:

    {25:17} 1. Solving the closure problem. {i.e., the “basic physics” equations have not even been SOLVED yet, e.g., the flow of fluids equation “Navier-Stokes Equations” — we still can’t even figure out what the flow of water in a PIPE would be if there were any turbulence}

    {30:20} 2. a. Impossible Thing #2: Computers with infinite representation and math skill. {gross overestimation and far, far, misplaced confidence in the ability of computers to do math accurately (esp. over many iterations) — in this section he discusses the 100 km square gaps {specifically mentioned at about 46:00} (i.e., cell size) — e.g., to analyze air movement, the cell would need to be, per Kolmogorov microscale, 1mm (aerosols even smaller, microns)).

    At about 44:00, Dr. Essex discusses the fact that even IF the basic equations were known, there isn’t enough time since time began to calculate even just a TEN–year forecast, even at super-fast speeds it would take approx. 10 to the 20th power years (the universe is only 10 to the 10th power years old)}.


    All the above is just, for THIS particular thread, FYI and a heads up to WUWTers to be ready to refute any attempts by AGWers to wave “Oh, but, NOW we have much smaller grid cells for our ‘data’ analysis” scarves to pull a climate model rabbit out of a hat: that is, no matter how fine you slice it (computer grid, etc…), it’s still baloney.

  10. Mike Jowsey on 19/02/2016 at 7:47 pm said:

    RC: Gut reaction to “chapter” grid-size modelling is that each chapter is affected by the preceding chapter and affects the next chapter. So, if we are talking about CAGW, the G requires a global metric.

    Maggy W – thanks for the link. Brilliant.

  11. Mike Jowsey on 19/02/2016 at 8:19 pm said:

    *chapters s/b ‘volumes’, smb. (Doncha just love TLAs?) s/b = should be; smb = sorry, my bad. 😉

  12. Richard C (NZ) on 20/02/2016 at 10:12 am said:

    >”So, if we are talking about CAGW, the G requires a global metric.”

    Exactly Mike. Localized conditions are totally different to an averaged global metric. Firstly, it is bogus to merge and average temperatures over land with sea surface temperature. Secondly, just one regional localized anomaly can skew a global metric. The atmospheric temperature response to the latest El Nino was NOT “global”. GISTEMP shows the entire Southern Hemisphere south of the tropics was cooler in 2015 than in preceding years, i.e. the “warmest ever” record was not “global”. In other words, the 2015 record event was a NH-only phenomenon, excluding Europe and North America which were only 2nd warmest (NZ was only 27th warmest in 2015 according to NIWA).
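The skew effect is just area-weighted averaging: one strongly anomalous region can push a "global" mean positive even while a whole hemisphere cools. A toy illustration with hypothetical regions, weights and anomalies (not GISTEMP values):

```python
# Toy demonstration: an area-weighted "global" anomaly can be positive even
# when a large region is cooling. All numbers are hypothetical.
regions = {
    "NH extratropics": (0.3, +1.2),   # (area fraction, anomaly in K)
    "Tropics":         (0.4, +0.4),
    "SH extratropics": (0.3, -0.3),   # cooling, yet the global mean is positive
}
global_anomaly = sum(w * a for w, a in regions.values())
print(f"global anomaly = {global_anomaly:+.2f} K")
```

Here the SH extratropics cool, yet the weighted "global" anomaly is +0.43 K, dominated by the NH.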

    And there is more warming and certainly more oscillation in the NH than there is in the SH.

    Same with sea level rise and ocean heat content. The global SLR metric is skewed by the western Pacific north of Australia and New Guinea. The global OHC metric is skewed by the Indian Ocean. There are vast areas of the eastern Pacific where SL has been either flat or falling (negative) over the last 20 years.

    So yes, the whole CAGW meme falls apart at a local level unless the locality is one of those where there is an extreme climate anomaly which gives the impression of a “global” disaster (Kiribati’s President Tong is milking this for all it’s worth). Neither is it appropriate to apply “global” predictions at a local level. Case in point: “global” satellite SLR of 3.3 mm/yr does not apply around NZ, and neither do the predictions of a rise greater than the historical rate, which is less than 2 mm/yr.

    NIWA has a “local” climate model which is a downsized version of UKMO’s Global Climate Model. Big fanfare when they set up their High Performance Computing Facility (HPCF) and supercomputer but I’ve only heard of one climate modeling job coming from it. That was snow predictions for the Skiing Association (and doubtful results too). It is still a grid size that precludes anything meaningful at a local level and a bit finer resolution is not going to change that. Its best use seems to be medical modeling.

    And of course, if we are talking about CAGW, the “A” requires a non-natural boost in a global metric – not happening. Same is required in any local metric. I defy anyone to find a non-natural boost in a NZ climate metric e.g. air temperature, rainfall or drought (which is it?), or sea level rise. It is just not happening in reality.

    CAGW is only happening in theoretical CO2-forced global climate models. They are going to have some difficulty importing all that to a local level where current and historical conditions are known in detail. In NZ, that detail can be down to individual farm and orchard level, where weather and climate are critical. A smaller grid cell resolution doesn’t validate a local climate model. A model is only validated when it conforms to actual measurements, and in NZ there are plenty of those at a local level.

    The IPCC’s GCMs are not validated by observations (I know of one radiative transfer module that is validated). The IPCC just does “intercomparison” projects, i.e. they compare models to models (CMIP). That is NOT verification and validation (V&V). The IPCC even admits in AR5 Chapter 9 that 111 of 114 CMIP5 simulations DON’T reproduce 21st century temperature. Why would a local CO2-forced model be any better?

    [I had to get “chapter” in there somewhere]

  13. Richard C (NZ) on 20/02/2016 at 10:51 am said:


    UN climate chief Christiana Figueres steps down – Lamented U.S. democracy as ‘very detrimental’ – Sought ‘centralized transformation’ – Lauded one-party ruled China for ‘doing it right’ on climate


  14. Richard C (NZ) on 20/02/2016 at 11:57 am said:


    “We now move into a phase of urgent implementation,”

    Howls of dissent when CSIRO tries to position for that. And a delay, at least, from the US Supreme Court for Obama’s Clean Power Plan. Not going well.

    Besides, no country has actually signed the Paris agreement yet. The signing window does not open until April 22nd.

  15. Richard C (NZ) on 20/02/2016 at 1:29 pm said:

    Regional modelling of New Zealand climate
    Published: 4 March 2009

    NIWA contacts: Dr Brett Mullan, Dr Sam Dean, Dr Abha Sood, Mr Stephen Stuart

    Developing probabilistic scenarios of expected future regional climate changes.

    The problem

    As the climate changes, changing risks of climate extremes will significantly challenge New Zealand society, the New Zealand economy, and our natural environment. At the same time, new climate-related opportunities will emerge. We will have to change how we use the land to profit from rather than just be hurt by these changes. It is critical that, as a nation, we have the best tools and information available to anticipate and plan. We need quantitative information on the likely kinds of changes expected in different regions of the country, such as the frequency and magnitude of extreme events.

    The solution

    This programme aims to better quantify climate changes over New Zealand, and to encourage better use of climate change scenario information in strategic planning. This will improve climate-related risk management in the New Zealand economy.

    First, detailed projections and data sets of future climate change are produced from regional climate models and from statistically downscaled global models. These projections can then be used to drive other environmental models that address issues relevant to water resources, tourism, and urban and coastal infrastructure.

    The key steps involve:

    # validating and improving NIWA’s regional climate model (RCM), and better quantifying natural variability of climate;

    # generating a range of RCM projections of future New Zealand climate under different emissions scenarios, forced by different global climate models;

    # improving statistical downscaling of projected New Zealand changes from a large set of IPCC Fourth Assessment global models. This will place the more limited sample of regional model runs into a broader probabilistic framework;

    # compiling paleoclimate proxy data from New Zealand and comparing inferred past climate with climate model simulations;

    # driving a range of environmental models with climate scenario data, in order to improve knowledge of how the cryosphere, rivers and sea-level could change under global warming this century; and

    # widely disseminating information and data sets of climate change to the end-user community, with a best-practice example being developed in collaboration with Auckland Regional Council.

    The result

    Research is progressing on correcting rainfall biases in the regional climate model, on a new downscaling technique for a larger suite of global models, and on unifying the river, snow and glacier models so they accept common data formats and interact properly with each other.

    Page last updated: 6 May 2015


    # # #

    Re ># validating and improving NIWA’s regional climate model (RCM),

    ‘Regional climate modelling in New Zealand: Comparison to gridded and satellite observations.’
    D. Ackerley, S. Dean, A. Sood and A.B. Mullan

    See next comment.

  16. Richard C (NZ) on 20/02/2016 at 2:00 pm said:

    ‘Regional climate modelling in New Zealand: Comparison to gridded and satellite observations.’

    D. Ackerley, S. Dean, A. Sood and A.B. Mullan

    The climate of New Zealand is highly variable, both spatially and temporally, due to a mixture of complex topography and location in the Southern Hemisphere mid-latitude westerlies. The representation of New Zealand climate in General Circulation Models (GCMs) is too coarse to provide meaningful regional climate statistics. Therefore empirical-statistical or dynamical downscaling methods should be applied to global model data to understand regional climate in terms of the large scale flow. In this study, the focus is on dynamical downscaling where a Regional Climate Model (RCM) is used. The RCM is forced by both reanalysis and GCM data and run for thirty years in each case. Climate statistics from the model for 1980–1999 are compared with gridded observations. The geographical distribution of maximum and minimum surface air temperatures compares well with the gridded observational data (spatial correlation values >0.9), with low temperatures in upland areas and higher temperatures in lowland and northern areas. However, temperature biases are also evident, with maximum surface air temperature being too low and minimum surface air temperatures too high. The model also captures the west–east gradient in precipitation across the mountainous South Island very well (spatial correlation values >0.75). Biases in precipitation are also analysed and tend to be negative (too little precipitation), especially in winter. Biases in the GCM-forced regional model results are similar to, but slightly larger than, those in the reanalysis-forced run, owing to additional circulation errors coming from the global […]
    Table 1: The model resolution used in this study and Drost et al. (2007).

    Model Attribute        | This Study      | Drost et al. (2007)
    Horizontal Resolution  | 0.27° (~27 km)  | 0.36° (~40 km)

    2.3 Observational data
    2.3.1 Virtual Climate Station Network (VCSN) data.
    In this study, we use the gridded fields of the Virtual Climate Station Network (VCSN) at NIWA, which was produced from observed station data using the methods described in Tait et al. (2006) and Tait (2008). We
    compare the daily maximum temperature (Tmax), daily minimum temperature (Tmin) and the daily accumulation of precipitation from the VCSN data with the RCM output over the 1980–1999 period.

    3. Results
    3.1 RCM1 compared to gridded and satellite observations
    3.1.1 Maximum surface air temperature (Tmax)
    3.1.2 Minimum surface air temperature (Tmin)
    3.1.3 Precipitation


    # # #
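The spatial-correlation scores quoted in the abstract (>0.9 for temperature, >0.75 for precipitation) are plain Pearson correlations computed across grid cells. A minimal sketch with made-up fields (the numbers are not VCSN data):

```python
# Pearson spatial correlation between a model field and gridded observations,
# computed over grid cells, as in the RCM validation. Fields are synthetic.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

obs   = [14.2, 12.8, 9.5, 7.1, 15.0, 11.3]   # hypothetical Tmax by cell, degC
model = [13.6, 12.1, 9.9, 7.8, 14.1, 11.0]   # hypothetical RCM Tmax, degC
print(f"spatial correlation = {pearson(obs, model):.3f}")
```

Note that a high spatial correlation only says the geographical *pattern* matches; it says nothing about the mean biases the abstract also reports.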

    Their validation period was 1980–1999. IPCC predictions (projections) are from that baseline nominally centred on 1990.

    So NIWA has no published validation (that I know of) of their projections in the 21st century, i.e. just because the model is validated over 1980–1999, it does NOT necessarily follow that the model is valid over 2000–2019.

    NIWA now have VCSN data to 2015. NIWA should be carrying out subsequent validation exercises every 5 years, e.g. 2000–2004, 2005–2009, 2010–2014, 2015–2019 etc. If the model is becoming steadily more inconsistent with observations, then NIWA have a problem with their model.
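The 5-yearly checks suggested above amount to computing the mean model-minus-VCSN bias in each window and watching whether it drifts. A hypothetical sketch of that bookkeeping (the series below are invented, not NIWA output):

```python
import numpy as np

def window_biases(model, obs, years, window=5):
    """Mean (model - obs) bias in consecutive `window`-year blocks."""
    biases = {}
    for start in range(int(years[0]), int(years[-1]) + 1, window):
        sel = (years >= start) & (years < start + window)
        if sel.any():
            biases[f"{start}-{start + window - 1}"] = float(
                np.mean(model[sel] - obs[sel]))
    return biases

years = np.arange(2000, 2016)  # 2000..2015 inclusive
obs = 14.0 + 0.1 * np.random.default_rng(0).standard_normal(len(years))
model = obs + 0.05 * (years - 2000)  # invented bias growing 0.05 °C/yr
for win, b in window_biases(model, obs, years).items():
    print(win, round(b, 2))  # bias grows window by window
```

A model whose windowed bias grows like this is drifting away from observations, which is exactly the failure mode the 5-yearly exercise would expose.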

    Given that NIWA’s predictions on their prediction page, which are from the 1990 base, are diverging from observations in the 21st century (i.e. 2000–2015), I cannot see how their regional model can possibly be validated unless CO2 forcing is reduced considerably or eliminated altogether.

    The lack of noise from NIWA lately regarding their regional climate model probably speaks volumes.

  17. Richard C (NZ) on 21/02/2016 at 9:15 am said:

    >”Localized conditions are totally different to an averaged global metric”

    Jim Steele on this at WUWT:

    “Dr. Trenberth, via his well-groomed media conduits, preaches to the public that every extreme event – flood or drought, heat wave or snowstorm – is worsened by rising CO2. To fully appreciate the pitfalls of his “warmer and wetter” meme, you need to look no further than Trenberth’s pronouncements regards the devastating Moore, Oklahoma tornado. Although Trenberth admits, “climate change from human influences is difficult to perceive and detect because natural weather-related variability is large”, in a Scientific American interview, arguing only from authority he cavalierly attributed CO2 climate change to a “5 to 10 percent effect in terms of the instability and subsequent rainfall, but it translates into up to a 33 percent effect in terms of damage.” But in contrast to Trenberth’s “warmer and wetter world” assertions, there was no warming contribution. Maximum temperatures in Oklahoma had been cooler since the 1940s.

    [see graph]

    Clearly Trenberth’s simplistic “warmer and wetter” world assertion cannot be applied willy-nilly to every region. Climate change is not globally homogenous. It is regionally variable and the global average temperature is a chimera of that regional variability. Furthermore his claim of a “wetter world” is a hypothetical argument not supported by evidence.”


  18. Richard C (NZ) on 21/02/2016 at 10:48 am said:

    Jim Steele again:

    ‘The Kevin Trenberth Effect: Pulling Science Back to the Dark Ages – Part 1 Droughts and Heat waves’

    A bank account serves as a good analogy to illustrate drought stress. Financial (hydrologic) stress results from changes in income (rain and snow) versus withdrawals (evaporation and runoff) and the buffering capacity of your reserves (lakes, wetlands and subsurface water). Old school science would demand researchers eliminate all confounding factors affecting hydrological stress before claiming any effect by a single variable like CO2.
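The bank-account analogy is just a running balance: reserves change each period by income minus withdrawals, floored at zero. A toy sketch of that bookkeeping (all figures invented for illustration):

```python
def hydrologic_balance(reserves, incomes, withdrawals):
    """Track water 'reserves' period by period: income (rain/snow)
    minus withdrawals (evaporation and runoff), never below zero."""
    history = []
    for rain, loss in zip(incomes, withdrawals):
        reserves = max(0.0, reserves + rain - loss)
        history.append(reserves)
    return history

# Steady withdrawals, but a dry spell mid-series drains the buffer:
print(hydrologic_balance(100.0,
                         incomes=[50, 10, 10, 50],
                         withdrawals=[40, 40, 40, 40]))
# [110.0, 80.0, 50.0, 60.0]
```

The point of the analogy survives the simplification: stress depends on the whole ledger (income, withdrawals and reserves), not on any single variable in isolation.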

    Here are a few confounding factors that are seldom addressed in papers that blame a greenhouse effect for higher temperatures and stronger heat waves and droughts.

    i.) Clear dry skies increase shortwave (solar) insolation, while simultaneously decreasing downward long wave radiation (i.e. decreasing the greenhouse effect). Reasons for this were discussed in an essay Natural Heat Waves and have been verified by satellite data (Yin 2014). Higher temperatures happen despite a reduced greenhouse effect.

    ii.) In arid and semi-arid regions like the American Southwest, precipitation shortfalls not only decrease the hydrologic “income” but also decrease evaporation. If there is no rain, there is nothing to evaporate. The decrease in evaporative cooling raises temperatures (Roderick 2009, Yin 2014). Drier surfaces have a lower heat capacity so that incoming energy that was once converted to latent heat of evaporation is now felt as sensible heat that rapidly raises temperatures. Trenberth’s global warming claims often have the tail wagging the dog by assuming higher temperatures cause drier soils. Drier soils cause higher temperatures.
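Point ii is a surface energy-balance partition: absorbed energy goes either into evaporation (latent heat, which cools) or into heating the air (sensible heat), with the split set by available moisture. A simplified sketch, where the evaporative fraction proportional to soil moisture is my illustrative assumption, not a model from the cited papers (Roderick 2009, Yin 2014):

```python
def partition_energy(net_radiation_wm2, soil_moisture_frac):
    """Split net surface radiation (W/m^2) into latent and sensible heat.
    Assumes evaporative fraction equals soil-moisture fraction
    (a deliberate simplification for illustration)."""
    evap_fraction = max(0.0, min(1.0, soil_moisture_frac))
    latent = net_radiation_wm2 * evap_fraction    # removed by evaporation
    sensible = net_radiation_wm2 - latent         # heats the near-surface air
    return latent, sensible

wet = partition_energy(400.0, 0.8)  # moist surface: mostly latent heat
dry = partition_energy(400.0, 0.1)  # dry surface: mostly sensible heat
print(wet, dry)
```

Under the same incoming energy, the dry surface pushes far more energy into sensible heat, which is the mechanism behind "drier soils cause higher temperatures".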

    iii.) Natural cycles cause decadal oscillations between dry and wet years. Recent research (Johnstone 2014) reports that the past 110 years of climate change in northwestern North America can be fully accounted for by the multi-decadal Pacific Decadal Oscillation (PDO). The PDO alters the pattern of sea surface temperatures, which alters atmospheric circulation, affecting the transport of warmth from the south and moisture from the ocean. The PDO produces dry cycles that not only reduce rainfall but can increase temperatures via mechanisms i and ii. The negative PDO experienced over the last 15 years promoted more La Niñas that make California drier.

    iv.) The buffering effect of hydrologic reserves has increasingly dwindled. Wetlands have been drained and degraded watersheds have drained subsurface waters resulting in reduced evapotranspiration. The loss of California wetlands since 1820 has been dramatic (Figure 9) generating a decreasing trend in evaporative cooling. Furthermore spreading urbanization has relegated natural streams to underground pipelines. Urbanization has increased runoff (hydrologic withdrawals) as rainfall is increasingly shunted into sewer systems and no longer recharges whatever remaining landscapes are not paved over with heat retaining materials. This increasing reduction in our moisture “reserves” increases regional dryness and has not been balanced by irrigation.


    # # #

    >”i.) Clear dry skies increase shortwave (solar) insolation, while simultaneously decreasing downward long wave radiation (i.e. decreasing the greenhouse effect). Reasons for this were discussed in an essay Natural Heat Waves and have been verified by satellite data (Yin 2014). Higher temperatures happen despite a reduced greenhouse effect.”

    The “greenhouse effect” here being total downward longwave radiation (DLR). Jim is saying total DLR decreases because there is little water vapour and no cloud. The reduction in total DLR can be tens of watts per square metre.

    Except the CO2 component of total DLR is only about 6–7 W.m-2 (since the late 1970s), i.e. less than the change in total DLR, and that doesn’t change at all in the short-term average (neglecting diurnal and seasonal variation). The observed (Oklahoma and Alaska) CO2-attributable change is only about 0.2 W.m-2 per decade; the calculated estimate is 0.3 W.m-2 per decade.
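The ~0.3 W.m-2 per decade calculated estimate can be checked against the standard simplified CO2 forcing expression ΔF = 5.35·ln(C/C0) (Myhre et al. 1998), using round-number assumptions of a late-1970s baseline near 335 ppm and CO2 growth of roughly 1.8 ppm per year:

```python
import math

def co2_forcing(c_ppm, c0_ppm):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

c0 = 335.0      # ~1979 CO2 concentration in ppm (assumed round number)
growth = 1.8    # ppm per year (rough late-20th-century rate)
per_decade = co2_forcing(c0 + 10 * growth, c0)
print(round(per_decade, 2))  # 0.28 W/m^2 per decade
```

That lands close to the 0.3 W.m-2/decade figure quoted above, and is small next to the tens of W.m-2 swings in total DLR from water vapour and cloud.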

    In other words, not only is a large change in total DLR not the driver of temperature swings, the CO2 component of DLR is completely inactive.

  19. Maggy Wassilieff on 25/02/2016 at 9:26 pm said:

    Another posting from NoTricksZone with links to 48 papers published in 2016 (and it’s still only February) that support the important role natural cycles/events play in influencing climate.

  20. Richard C (NZ) on 26/02/2016 at 9:06 am said:

    Direct link to Maggy’s NTZ post:

    ‘2016: Already Almost 50 New Peer-Reviewed Papers Refuting Alarmist CO2 Science …Show Natural Cycles Indisputable!’
    http://notrickszone.com/2016/02/23/2016-already-almost-50-new-peer-reviewed-papers-refuting-alarmist-co2-science-show-natural-cycles-indisputable/

    I suspect this, along with 250 papers in 2015, is the response to the IPCC’s AR5 finding that natural variation has been neglected in the global climate models.

    So now there’s a plethora of natural-variation papers. This puts the IPCC in a bit of a bind, because they now have to address the scope of an issue they have not covered in 25 years and 5 assessment reports (though the NIPCC has). It warrants a new Natural Variation chapter, including Solar Forcing of Climate and Multidecadal Variability (MDV).

    But what does this mean for the global climate models? And the man-made climate change conjecture?
