Fatal deficiencies destroy scientific case for climate catastrophe


As a member of the NZ Climate Science Coalition, I am frequently privy to learned conversations. Occasionally I publish excerpts, suitably altered to preserve privacy. The conversation below emerged sedately over several weeks and expertly defines the fatal deficiencies in the believers’ case for alarm. It deserves a big audience.

A:

I like what is in effect your invitation to the climate science community to contemplate the absence of a) any substantive empirical data that dangerous global climate warming is occurring, and b) a single refereed paper that contains data (not untested models) which invalidate the hypothesis “The climate change that we observe today, like past climate change, has natural causes.” The burden of proof is on those who promote alarmist statements on global warming.

B:

This is an interesting question: in matters of science, where does the burden of proof lie? In criminal matters it is on the Crown—in some civil matters (defamation, for example) it is on the defendant. But in science? Applying the NZ Royal Society Code, the burden rests on the individual (whichever side of the fence he or she may sit) to ensure that their views and opinions are based on ALL the available evidence or are reasonable deductions or projections based on ALL the available evidence. The problem lies in defining the ALL.

C:

No amount of experiments can prove me right; one experiment can prove me wrong. – Albert Einstein

D:

It simply makes me weep how hard it is to get these simple, rock-solid aspects of science methodology considered in the debate. As we approach Paris the pervasive press coverage is if anything becoming less rather than more scientific in tone. I guess we just have to be prepared to weather the storm.

The first speaker (A) points out there’s no evidence of dangerous global climate warming, no evidence that the fault lies with humanity and not nature, and reminds us that it is up to the believers in warming to prove their case—their demands that sceptics refute a vaguely stated argument are both unscientific and logically wrong. With these three vital scientific principles unfulfilled, the alarmist case fails—no matter what the temperature record shows.

Speaker (B) outlines the difficulty of establishing a case in science but allows everyone the authority to make a case.

Speaker (C) conveys Einstein’s insight; contrast the alarmist hubris that turns a blind eye to refutations.

The fourth speaker, (D), despairs that the true principles of science are most abandoned by those who would most earnestly adopt its authority.

Who would declare the truth, first admit the truth.
Who would be free, first free the mind.

178 Thoughts on “Fatal deficiencies destroy scientific case for climate catastrophe”

  1. Richard C (NZ) on August 22, 2015 at 7:11 pm said:

    >”The algebra upthread is only just High School level I (probably Intermediate School without looking, maybe Primary)”

    Now that I think of it, I distinctly remember using colour-coded wooden sticks to do arithmetic (sums) in Year 1 Primary School. So in terms of Macias et al and Equation (1):

    Total gray sticks = red sticks + black sticks
    GMST natural [gray sticks] = ST [red sticks] + MDV [black sticks] (1)

    In terms of Kosaka & Xie (HIST) and Equation (3):

    Total blue sticks = red sticks + purple sticks
    Model mean [blue sticks] = GMST natural + theoretical man-made = ST [red sticks] + TRF [purple sticks] (3)

    This is Year 1 Primary School arithmetic. Unless anyone failed Year 1 Primary School arithmetic and never progressed, in which case you are excused, there is no excuse for not understanding this basic maths.

    Applied Maths, which is the application of mathematical fundamentals to quantifying numerical problems (Andy may have a better definition), is in this case the process of applying colour-coded stick basics to global mean surface temperature and its components. But it is still Year 1 Primary School basics that are being applied.
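The stick picture can be written out as a toy calculation. A minimal sketch, assuming invented anomaly numbers (the 0.5, 0.1 and 0.3 values below are placeholders for illustration only, not figures from Macias et al or Kosaka & Xie):

```python
# Toy illustration of Equations (1) and (3); all numbers are invented.

# Equation (1), Macias et al: natural GMST splits into a secular trend
# (ST, "red sticks") plus multidecadal variability (MDV, "black sticks").
st = 0.50   # hypothetical ST anomaly, deg C
mdv = 0.10  # hypothetical MDV contribution, deg C
gmst_natural = st + mdv          # "gray sticks"

# Equation (3), Kosaka & Xie (HIST): the model mean is natural GMST plus a
# theoretical man-made component (TRF, "purple sticks") - and carries no MDV.
trf = 0.30  # hypothetical anthropogenic contribution, deg C
model_mean = st + trf            # "blue sticks"

print(gmst_natural)  # 0.6
print(model_mean)    # 0.8
```

The point of the sketch is the asymmetry: the observed series carries MDV, the model mean does not, so the two totals are built from different sticks.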

  2. Richard C (NZ) on August 22, 2015 at 7:48 pm said:

    >”Something to look at, maybe tomorrow now, is that the modelers have fooled themselves (and the rest of the world) i.e. the model runs neglect MDV therefore their model mean without MDV will NEVER conform to observations. Model mean to observations is not apples-to-apples. Observations include MDV, model simulations neglect MDV. The modelers “tweak” their non-MDV simulations to conform to observations, this is specious. First MDV must be added in. I suspect that the simulations without the anthropogenic “forcing” component (TRF) actually conform to the natural secular trend (ST) as it should. As I recall, this “experiment” yields a profile that comes in BELOW current observations. It SHOULD do. There were some fuzzy charts of this in AR4 so I’ll bring those up but also see if there’s some better figures from the papers AR4 cites to support their argument.”

    The natural-only AR5 simulation mean (different to AR4) does come in BELOW observations, but too low and wrong trajectory. For background before getting into this, Judith Curry has a post on it with the AR5 charts included:

    ‘The 50-50 argument’

    by Judith Curry, Posted on August 24, 2014 | 832 Comments

    http://judithcurry.com/2014/08/24/the-50-50-argument/

    The glaring flaw in their [IPCC] logic is this. If you are trying to attribute warming over a short period, e.g. since 1980, detection requires that you explicitly consider the phasing of multidecadal natural internal variability [MDV] during that period (e.g. AMO, PDO), not just the spectra over a long time period. Attribution arguments of late 20th century warming have failed to pass the detection threshold which requires accounting for the phasing of the AMO and PDO. It is typically argued that these oscillations go up and down, in net they are a wash. Maybe, but they are NOT a wash when you are considering a period of the order, or shorter than, the multidecadal time scales associated with these oscillations.

    Further, in the presence of multidecadal oscillations [MDV] with a nominal 60-80 yr time scale, convincing attribution requires that you can attribute the variability for more than one 60-80 yr period, preferably back to the mid 19th century. Not being able to address the attribution of change in the early 20th century to my mind precludes any highly confident attribution of change in the late 20th century.

    Yes, glaring flaw(s) in their logic indeed.

  3. Richard C (NZ) on August 22, 2015 at 8:29 pm said:

    >”The natural-only AR5 simulation mean (different to AR4) does come in BELOW observations”

    Just to be clear, before moving on, the natural-only simulation mean still does NOT introduce MDV.

    Macias et al state:

    “ST represents 78.8% of the total energy of the series; MDV accounts for 8.8% of the energy and the reconstructed signal for 88%”

    “The series” is ST + MDV + fluctuations (noise). The “reconstructed signal” as a proportion of “the series” including noise is ST + MDV or 78.8 + 8.8 = 87.6% (rounded to 88%).

    Therefore MDV is about 10% of the “reconstructed signal” (8.8/87.6), i.e. neglecting MDV throws out roughly a tenth of the intrinsic signal of the series. This is not a basis, as Judith Curry correctly points out, from which to make detection and attribution conclusions. The MDV signal MUST be modeled and introduced to the natural-only simulation. After 25 years of assessment reports, the IPCC has still not done this.
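The arithmetic behind these fractions, using only the energy percentages quoted from Macias et al above:

```python
# Energy fractions quoted from Macias et al for the HadCRUT4 series.
st_energy = 78.8    # secular trend (ST), % of total series energy
mdv_energy = 8.8    # multidecadal variability (MDV), % of total series energy

# The "reconstructed signal" is ST + MDV (the series minus noise).
reconstructed = st_energy + mdv_energy
print(round(reconstructed, 1))   # 87.6, rounded to 88% in the paper

# Share of that noise-free signal contributed by MDV:
mdv_share = mdv_energy / reconstructed * 100
print(round(mdv_share, 1))       # 10.0
```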

  4. Richard C (NZ) on August 22, 2015 at 9:43 pm said:

    >”Now that I think of it, I distinctly remember using colour coded wooden sticks to do arithmetic (sums) in Year 1 Primary School .”

    Year 1 or maybe Year 2. This was a novel teaching aid, but still one for traditional arithmetic operations. From Wiki:

    Arithmetic in education

    “Primary education in mathematics often places a strong focus on algorithms for the arithmetic of natural numbers, integers, fractions, and decimals (using the decimal place-value system). This study is sometimes known as algorism.”

    “The difficulty and unmotivated appearance of these algorithms has long led educators to question this curriculum, advocating the early teaching of more central and intuitive mathematical ideas. One notable movement in this direction was the New Math of the 1960s and 1970s, which attempted to teach arithmetic in the spirit of axiomatic development from set theory, an echo of the prevailing trend in higher mathematics.[18]”

    https://en.wikipedia.org/wiki/Arithmetic

    So then we were hit with New Math until it was abandoned. From Wiki again:

    New Math

    Praise
    “Boolean logic and the rules of sets would later prove to be very valuable with the onset of databases and other formations of data that were emerging in society. In this and other ways the New Math proved to be an important link to the computer revolution, as well as the Internet. This naturally includes all manner of programming. In this sense, the New Math was ahead of its time.”

    Criticisms
    In the Algebra preface of his book Precalculus Mathematics in a Nutshell, Professor George F. Simmons wrote that the New Math produced students who had “heard of the commutative law, but did not know the multiplication table.”

    In 1965, physicist Richard Feynman wrote in the essay “New Textbooks for the ‘New Mathematics'”:[4]

    “If we would like to, we can and do say, ‘The answer is a whole number less than 9 and bigger than 6,’ but we do not have to say, ‘The answer is a member of the set which is the intersection of the set of those numbers which is larger than 6 and the set of numbers which are smaller than 9’ … In the ‘new’ mathematics, then, first there must be freedom of thought; second, we do not want to teach just words; and third, subjects should not be introduced without explaining the purpose or reason, or without giving any way in which the material could be really used to discover something interesting. I don’t think it is worth while teaching such material.”

    https://en.wikipedia.org/wiki/New_Math

  5. Richard C (NZ) on August 23, 2015 at 8:52 am said:

    Judith Curry in The 50-50 argument:

    The IPCC notes overall warming since 1880. In particular, the period 1910-1940 is a period of warming that is comparable in duration and magnitude to the warming 1976-2000. Any anthropogenic forcing of that warming is very small (see Figure 10.1 above). The timing of the early 20th century warming is consistent with the AMO/PDO (e.g. the stadium wave; also noted by Tung and Zhou). The big unanswered question is: Why is the period 1940-1970 significantly warmer than say 1880-1910? Is it the sun? Is it a longer period ocean oscillation? Could the same processes causing the early 20th century warming be contributing to the late 20th century warming?

    Not only don’t we know the answer to these questions, but no one even seems to be asking them!

    Solar-centric MMCC sceptics such as myself are certainly asking these questions and think we have the answers but there is so much uncertainty in historical solar metrics (the IPCC admits this) that it is hard to make a solidly conclusive case.

    >”Why is the period 1940-1970 significantly warmer than say 1880-1910? Is it the sun?”

    Yes, and not just 1940-1970. The Modern or Current Solar Grand Maximum began just prior to the late 1950s at SC 19 and, except for SC 20, has remained at very high levels until the recession that started around 2006 in SC 23. But it is still at a historically high level which, in conjunction with thermal lag (see below), explains the current constant energy imbalance at TOA and at the surface.

    >”Is it a longer period ocean oscillation?”

    No, but oceanic thermal lag time is certainly the key component that has been neglected (along with MDV). To get to the atmosphere, energy from the sun must go through the ocean (mostly). There’s a body of literature on planetary thermal lag but it cannot be characterized by a specific number. The planetary thermal lag period is always a range e.g. 10 – 100 years (Trenberth), 30 – 40 years (Zhao & Feng), or 8 – 20 years (Abdussamatov, ocean alone centred around 20). The lag is often expressed as the nominally centred point of the range.

    Given this uncertainty, why is this issue not being given prominence?

    Judith Curry is probably unaware that oceanic thermal lag was HOTLY debated in the Evans/Nova Solar N-D Model series of posts. In my opinion David Evans has the lag way too short, I think I recall he puts it at 1 or 2 years. The same hot debate has NEVER occurred in respect to the GCMs.

    No literature, letter, or essay that I know of gives an oceanic, and therefore planetary, thermal lag time range for the model simulations whether CO2-forced or natural-only. In my opinion (FWIW) this is a gaping gap in model simulations. If there is a paper I would very much like to see it.

    For reference IPCC AR5 Figure 10.1:

    Figure 10.1 from AR5 WGI (a) is with natural and anthropogenic forcing; (b) is without anthropogenic forcing:
    https://curryja.files.wordpress.com/2014/08/slide1.jpg

    Eventually I’ll find the paper(s) this graph is derived from. It(they) deserves much scrutiny but I don’t recall anyone dissecting it(them).

  6. Richard C (NZ) on August 23, 2015 at 9:56 am said:

    Judith Curry again in The 50-50 argument in respect to IPCC AR5 Figure 10.1:

    “Note in particular that the models fail to simulate the observed warming between 1910 and 1940.”

    Figure 10.1 from AR5 WGI (a) is with natural and anthropogenic forcing; (b) is without anthropogenic forcing:
    https://curryja.files.wordpress.com/2014/08/slide1.jpg

    A week ago before really thinking about this I would have agreed with Judith Curry. But now with the fact fixed firmly in my mind that the models have no MDV component, I look at (a) and (b) somewhat differently.

    To a degree up until 1940 both (a) and (b) DO simulate the observed warming because MDV is neglected. The observations SHOULD oscillate about the simulation mean – and they do so.

    But after about 1940 BOTH (a) and (b) go horribly wrong. There has been too much “tweaking” for volcanic activity and such like, the models are just not that sensitive (if they don’t model MDV, why include volcanism?). The models should have been left to run. The drop just after 1960 should NOT occur (precipitous in (b) – stupidly so). The profile trajectory of BOTH (a) and (b) SHOULD be ABOVE observations around the 1970s because the observations include MDV but the model simulations don’t.

    The MDV cycle can be seen in Macias et al Figure 1 (blue line):

    Figure 1. SSA reconstructed signals from HadCRUT4 global surface temperature anomalies.
    The annual surface temperature (gray line), multidecadal variability (MDV, blue line), secular trend (ST, red line) and reconstructed signal (MDV+ST, black line) are indicated.
    file:///C:/DOCUME~1/User1/LOCALS~1/Temp/journal.pone.0107222.g001-1.png

    Irrespective of anthro or non-anthro forcing:

    1) The observations SHOULD cross the model simulations, both (a) and (b), at 1955 from ABOVE.

    2) The observations SHOULD be WELL BELOW the model simulations, both (a) and (b), at 1970.

    3) The observations SHOULD be on a trajectory to cross the model simulations, both (a) and (b), at around 1985.

    Respective of anthro or non-anthro forcing:

    4) The model simulations SHOULD be diverging by ONLY the factor of TRF from 1950s onwards.

    Obviously not the case for 1, 2, 3, or 4. The simulation (b), without anthropogenic forcing, is particularly laughable.
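Points 1) to 3) rest on one piece of arithmetic: if observations are ST + MDV while the simulation mean carries only ST, the observations must oscillate about that mean with the MDV phase. A minimal synthetic sketch — the linear trend and the 60-year sine below are invented for illustration, not fitted to HadCRUT4 or to any model output:

```python
import math

# Synthetic sketch of the phasing argument. All numbers are invented.

def st(year):
    """Hypothetical linear secular trend (ST), deg C anomaly."""
    return 0.005 * (year - 1900)

def mdv(year):
    """Hypothetical 60-year multidecadal oscillation (MDV), +/- 0.1 deg C,
    phased to cross zero (descending) at 1955."""
    return -0.1 * math.sin(2 * math.pi * (year - 1955) / 60)

for year in (1940, 1955, 1970, 1985):
    model_mean = st(year)          # the model mean carries ST only, no MDV
    obs = st(year) + mdv(year)     # observations include MDV
    # 1940: obs above the mean; 1955: crossing from above;
    # 1970: obs well below; 1985: crossing again from below.
    print(year, round(obs - model_mean, 2))
```

With this phasing the synthetic "observations" sit above the no-MDV mean before 1955, cross it from above at 1955, bottom out below it around 1970, and cross again from below around 1985 — the pattern the points above say Figure 10.1 should show but does not.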

  7. Richard C (NZ) on August 23, 2015 at 10:10 am said:

    3) The observations SHOULD be on a trajectory to cross the model simulations, both (a) and (b), at around 1985 [from BELOW].

  8. Richard C (NZ) on August 23, 2015 at 11:45 am said:

    [Judith Curry] – “The big unanswered question is: Why is the period 1940-1970 significantly warmer than say 1880-1910? Is it the sun?”

    Horst-Joachim Lüdecke, Dr. Alexander Hempelmann and Carl Otto Weiss think so:

    Study: German Scientists Conclude 20th Century Warming “Nothing Unusual” …Foresee “Global Cooling Until 2080″!

    By P Gosselin on 21. August 2015

    – See more at: http://notrickszone.com/2015/08/21/study-german-scientists-conclude-20th-century-warming-nothing-unusual-foresee-global-cooling-until-2080/#sthash.NRtJCZNt.Tcv78JRp.dpuf

    The German trio of scientists says the 0.7°C of warming occurring since the late 19th century is the result of the increase in the De Vries / Suess solar cycle [ST] and that the well-known oceanic AMO/PDO oscillations can also be seen [MDV]. “These two cycles practically determine by themselves the earth’s temperature.[ST + MDV]”

    And,

    Compared to the maxima and minima of the past, the current minima and maxima show that there is nothing unusual happening today. The scientists say today’s temperature changes are within the normal range. The German authors write: “Especially the 20th century shows nothing out of the ordinary.”

    Nothing out of the ordinary for the last 2500 years.

    Figure 1: Temperature changes of the past 2500 years (with linear regression).
    http://kaltesonne.de/wp-content/uploads/2015/08/zyk1.jpg

    Figure 2: Sinusoidal representation of solar activity and 3 proxy datasets. Red: solar activity using 10Be proxy, Sine period = 208 years, correlation 0.68. Green: Büntgen data series [4], Sine period= 186 years, correlation 0.49. Brown: Christiansen/Ljungqvist data series [5], sine period = 189 years, correlation 0.58. Blue: Cook data series [6], sine period= 201 years, correlation 0.41.
    http://kaltesonne.de/wp-content/uploads/2015/08/zyk2.jpg

    Figure 3: Sinusoidal behavior shown from the datasets by Christiansen/Ljungqvist [5] (brown) and Büntgen [4] (green) together with the Antarctic series [7] (blue) confirms that the De Vries / Suess cycle acts globally and that cooling is to be expected for the future.
    http://kaltesonne.de/wp-content/uploads/2015/08/zyk3.jpg

    Figure 4: Central Europe temperature (black, smoothed, agrees with the Antarctic temperatures) and the sum of the 6 strongest cycles (red), as found with the cycle analysis of the black curve. The perfect agreement between red and black shows that non-cyclic impacts (such as the steadily increasing atmospheric CO2) play no role for the temperature. Only the cycles correctly reflect the measured temperatures.
    http://kaltesonne.de/wp-content/uploads/2015/08/zyk4.jpg

    “Results have been confirmed”

    Strong doubts will certainly be fired at the findings by Lüdecke, Hempelmann and Weiss. But they remind us that in the solar physics literature other authors have already used their findings and arrive at practically the same conclusions (see the footnotes at the end of their two papers). The three German scientists sharply criticize the IPCC for refusing to acknowledge the sun as an obvious driver of the global climate.

    Finally the German scientists say that, in view of the increase in CO2 seen thus far, 50% of the temperature increase expected to happen by 2100 should have taken place by now – if such a CO2 warming were true. The scientists say that, the way things stand now, if the CO2 effect were real the future warming up to the year 2100 could be at most 0.7 °C.

    [2] H.-J. Luedecke, A. Hempelmann, and C. O. Weiss: Multi-periodic climate dynamics: spectral analysis of long term instrumental and proxy temperature records, Clim. Past 9, 447 – 452 ( 2013 ); http://www.clim-past.net/9/447/2013/cp-9-447-2013.pdf

    [3] H.-J. Luedecke, C. O. Weiss, and H.Hempelmann: Paleoclimate forcing by the solar De Vries / Suess cycle, Clim. Past Discuss. 11, 279 (2015); http://www.clim-past-discuss.net/11/279/2015/cpd-11-279-2015.pdf

    # # #

    [Judith Curry] – “Not only don’t we know the answer to these questions, but no one even seems to be asking them!”

    Judith Curry is on-the-ball for much of climate issues but on this she seems to be totally ignorant unfortunately.

  9. Richard C (NZ) on August 23, 2015 at 12:54 pm said:

    >”Eventually I’ll find the paper(s) this graph [IPCC AR5 WGI Figure 10.1] is derived from. It(they) deserves much scrutiny but I don’t recall anyone dissecting it(them).”

    The full set of 6 graphs can be seen directly here (click to zoom):

    Figure 10.1 | (Left-hand column) Three observational estimates of global mean surface temperature (GMST, black lines) from Hadley Centre/Climatic Research Unit gridded surface temperature data set 4 (HadCRUT4), Goddard Institute of Space Studies Surface Temperature Analysis (GISTEMP), and Merged Land–Ocean Surface Temperature Analysis (MLOST), compared to model simulations [CMIP3 models – thin blue lines and CMIP5 models – thin yellow lines] with anthropogenic and natural forcings (a), natural forcings only (b) and greenhouse gas (GHG) forcing only (c). Thick red and blue lines are averages across all available CMIP5 and CMIP3 simulations respectively. CMIP3 simulations were not available for GHG forcing only (c). All simulated and observed data were masked using the HadCRUT4 coverage (as this data set has the most restricted spatial coverage), and global average anomalies are shown with respect to 1880–1919, where all data are first calculated as anomalies relative to 1961–1990 in each grid box. Inset to (b) shows the three observational data sets distinguished by different colours. (Adapted from Jones et al., 2013.) (Right-hand column) Net adjusted forcing in CMIP5 models due to anthropogenic and natural forcings (d), natural forcings only (e) and GHGs only (f). (From Forster et al., 2013.) Individual ensemble members are shown by thin yellow lines, and CMIP5 multi-model means are shown as thick red lines.
    http://www.climatechange2013.org/images/figures/WGI_AR5_Fig10-1.jpg

    From IPCC AR5 WGI Chapter 10, page 879 pdf:
    http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter10_FINAL.pdf

    The exercise is the CMIP3/5 process, so the graphs are not “derived” from papers. Actually it is the opposite: the papers coming out of this CMIP process (derived from it) are Jones et al., 2013 and Forster et al., 2013, above.

    For now though, zooming in to Figure 10.1 gives plenty of graphic detail. The papers and all the forcings can be looked at once the issues have been identified. Issues for starters from upthread:

    Irrespective of anthro or non-anthro forcing:

    1) The observations SHOULD cross the model simulations, both (a) and (b), at 1955 from ABOVE.

    2) The observations SHOULD be WELL BELOW the model simulations, both (a) and (b), at 1970.

    3) The observations SHOULD be on a trajectory to cross the model simulations, both (a) and (b), at around 1985 from BELOW.

    Respective of anthro or non-anthro forcing:

    4) The model simulations SHOULD be diverging by ONLY the factor of TRF from 1950s onwards.

    Obviously not the case for 1, 2, 3, or 4. The simulation (b), without anthropogenic forcing, is particularly laughable.

    So next we need Figure 10.6 from Chapter 10:

    Figure 10.6 | (Top) The variations of the observed global mean surface temperature (GMST) anomaly from Hadley Centre/Climatic Research Unit gridded surface temperature data set version 3 (HadCRUT3, black line) and the best multivariate fits using the method of Lean (red line), Lockwood (pink line), Folland (green line) and Kaufmann (blue line). (Below) The contributions to the fit from (a) El Niño-Southern Oscillation (ENSO), (b) volcanoes, (c) solar forcing, (d) anthropogenic forcing and (e) other factors (Atlantic Multi-decadal Oscillation (AMO) for Folland and a 17.5-year cycle, semi-annual oscillation (SAO), and Arctic Oscillation (AO) from Lean). (From Lockwood (2008), Lean and Rind (2009), Folland et al. (2013 ) and Kaufmann et al. (2011), as summarized in Imbers et al. (2013).)
    http://www.climatechange2013.org/images/figures/WGI_AR5_Fig10-6.jpg

    On page 887 of Chapter 10 the IPCC says this of Figure 10.6:

    “A range of studies have used statistical methods to separate out the influence of known sources of internal variability, including ENSO and, in some cases, the AMO, from the response to external drivers, including volcanoes, solar variability and anthropogenic influence, in the recent GMST record: see, for example, Lockwood (2008), Lean and Rind (2009), Folland et al. (2013 ), Foster and Rahmstorf (2011) and Kaufmann et al. (2011). Representative results, as summarized in Imbers et al. (2013), are shown in Figure 10.6. These consistently attribute most of the warming over the past 50 years to anthropogenic influence, even allowing for potential confounding factors like the AMO. While results of such statistical approaches are sensitive to assumptions regarding the properties of both responses to external drivers and internal variability (Imbers et al., 2013), they provide a complementary approach to attribution studies based on global climate models.

    Much of this is not actually in the models; ENSO and MDV (“AMO and other”) certainly aren’t. ENSO, though, is just noise, whereas MDV has huge warming and cooling phases which Folland (green line) doesn’t reproduce.

    Then when you look at the “estimated contribution” of solar (c) and “AMO and other” MDV (e) to temperature, the graphs are ludicrous. Compare them, for example, to the Lüdecke, Hempelmann and Weiss graphs just upthread, where the entire temperature profile for the last 2500 years is reproduced from solar and MDV signals alone.

  10. Richard C (NZ) on August 23, 2015 at 8:05 pm said:

    Among papers the IPCC cites for “A range of studies have used statistical methods to separate out the influence of known sources of internal variability” is Foster and Rahmstorf (2011). This is to support their statement:

    “These consistently attribute most of the warming over the past 50 years to anthropogenic influence, even allowing for potential confounding factors like the AMO.”

    This is IPCC scientific fraud and deception on a grand scale. The Foster and Rahmstorf (2011) residual goes through 2010:

    Foster and Rahmstorf (2011) Observations vs Residual animation
    http://www.skepticalscience.com/pics/FR11_All.gif

    Rahmstorf, Foster and Cazenave (2012) Figure 1 Observations vs Residual
    http://www.skepticalscience.com/pics/RFC12_Fig1.jpg

    The CO2-forced CMIP3/5 model mean goes through heavily smoothed early 2000s to mid 2000s observations then passes WELL ABOVE 2010:

    IPCC AR5 WGI Figure 10.1 (a)
    http://www.climatechange2013.org/images/figures/WGI_AR5_Fig10-1.jpg

    Since MDV SHOULD oscillate about the CO2-forced model mean but MDV is absent from the models, the model mean SHOULD pass WELL BELOW the early 2000s and cross 2010 observations from BELOW just as the Foster and Rahmstorf (2011) and Rahmstorf, Foster and Cazenave (2012) residuals do – but no, the CO2-forced model mean passes WELL ABOVE the residuals at 2010.

    Worse, the observations are the secular trend signal PLUS the MDV signal (ST + MDV). Therefore the secular trend of the observations is BELOW the residuals and observations at 2010. Macias et al (2014) confirms this:

    Macias et al (2014) Figure 1
    file:///C:/DOCUME~1/User1/LOCALS~1/Temp/journal.pone.0107222.g001-1.png

    The SkS Foster and Rahmstorf (2011) animation states “Solar Removed”. This is deception. All F&R have done is remove solar fluctuation but the major solar TSI input is still inherent in the observations secular trend (ST) which is BELOW observations (ST + MDV) and WELL BELOW the CO2-forced model mean.

    In summary there are 3 tiers of “underlying trend” starting from the top:

    Tier 1 – Model mean, CO2-forced, no MDV, as per IPCC AR5 WGI Figure 10.1 (a)
    Tier 2 – Foster and Rahmstorf (2011) residual, passed off as CO2-forced but it is not.
    Tier 3 – Observations secular trend (ST) as per Macias et al (2014), natural, solar-forced

    Again, this is IPCC scientific fraud and deception on a grand scale.

  11. Simon still missing in action huh? Not a man of his word, but it’s not surprising at all.

  12. Man of Thessaly on August 24, 2015 at 4:16 pm said:

    No, it’s not at all surprising. “Conversation group”? It’s more like watching a broken fire hydrant. Except they’re usually fixed quite quickly.

  13. Man of Thessaly might find it “fascinating” that nearly 300 people have signed up to the Canterbury Coastal residents Facebook page in just 3 days.

    We expect most of these to submit proposals to Christchurch City Council opposing its agenda to depopulate the east side of the city.

    Man of Thessaly might find it “fascinating” that such ignorant common people could be challenging The Scientists and The Government and The Science.

    Man of Thessaly might find it “fascinating” that there will be a legal challenge in the courts and, judging by the Kapiti Coast result, that this will be in the “deniers’” favour.

    I define a “denier” as anyone who doesn’t uncritically accept all government science and doesn’t uncritically bend over and grab their ankles to get rogered by bureaucrats and lose all their equity and retirement savings.

    Man of Thessaly will no doubt find this amusing and pulsate with excitement.

  14. Sorry if I mistook Man of Thessaly for a drive-by from Hot Topic, who seem to revel in such misfortunes. I will amend my views accordingly and offer my apologies if offence was taken.

  15. Man of Thessaly on August 24, 2015 at 10:10 pm said:

    No worries Andy! I was about to respond to say that you seemed to have pigeonholed me without cause, but I’m not offended. No, I’m not a “drive by from Hot Topic” – never posted there, but if they’re talking about Chch sea level planning I’ll pop over for a read.

    This discussion should really be over in the other thread, but what the hell, let’s bring it here.

    I was reading about the CCRU today, and had a look at the Facebook and form submission. Congratulations for getting involved. If “The Government and The Science” can’t stand scrutiny from “ignorant common people” (your words not mine), then what would it be good for? I do find it fascinating and will be following closely.

    I don’t believe that the CCC has an “agenda to depopulate the east side of the city”. Their motives, like yours, are reasonable – to do due diligence in planning with the best information available; to manage future liability; to obey the law, and to get the opinions of ratepayers, which they are doing. Clearly locals don’t want to lose all they own, but here’s a hard truth: if land is likely going to be eroded or flooded too often to maintain infrastructure, then it doesn’t have much value. There’s no point in pretending otherwise. Chch residents know this better than the rest of the country. Unfortunately, unlike earthquakes, there is no national policy or precedent for compensation or insurance for coastal hazards. Perhaps discussions like you are having with the CCC will get us there, slowly.

    If there’s a legal challenge, I wouldn’t be so sure it’ll go the same way as the Kapiti case. They are now only awaiting updated science to put the coastal hazard zones back in the plan; they’re not gone for good. Do you think the T&T report will be treated the same way as Shand? It will be interesting.

  16. Legal processes are already underway.
    There are many issues here, and like you say, this is the wrong thread.

    Anyway, thanks for taking my off colour comment in good faith.

  17. Alexander K on August 25, 2015 at 10:46 am said:

    Richard C:
    Those coloured wooden rods are ‘Cuisenaire rods’, which were once standard equipment in every NZ primary classroom. Those rods were invaluable for teaching concepts of number with ‘concrete’ materials. When I trained as a primary teacher donkey’s years ago, I found that these rods gave children a concept of number appropriate to each child’s individual stage of intellectual development.
    I find it sad that this simple but brilliant resource is no longer available to round out children’s mathematical education.

  18. Richard C (NZ) on August 25, 2015 at 12:06 pm said:

    Alexander

    >”Those coloured wooden rods are ‘Cuisenaire rods’, which were once standard equipment in every NZ primary classroom. Those rods were invaluable for teaching concepts of number with ‘concrete’ materials.”

    Interesting, I didn’t realize this. I thought it was a novelty aid that came and went in maybe only a few schools, mine happening to be one; that was my impression at the time, anyway. I did think there was a name for the method but had no idea what it was. I vividly remember working with those rods but am having trouble remembering what year it was (“donkey’s years” ago too). I know it was very early primary, probably Year 1 or 2.

    I also remember having trouble finding “half” numerically, which exasperated my folks at home. For example, with 8 rods in a row, half is 4 numerically (8/2 = 4). I baulked at this: hey, how can that be? When you split the 8 rods, 4 on one side and 4 on the other, there is no rod at the half point between rod 4 and rod 5, so in my mind at the time “half” was somewhere in between. Then we moved on to the number line, decimals etc., and my education resumed (4.5 + 4.5 = 9, not 8).

    Except using 9 rods, the “half” rod could be said to be rod 5 because 4.5 is contained within the 5th rod so I may actually have been right by the Cuisonnaire concept.

  19. Richard C (NZ) on August 25, 2015 at 12:57 pm said:

    >”Tier 2 – Foster and Rahmstorf (2011) residual, passed off as being CO2-forced but not.”

    When Foster and Rahmstorf take volcanic activity out of GMST, they inadvertently make a very good case that volcanic activity should be left out of model consideration altogether instead of being introduced. Three reasons: first, it is random and cannot be predicted; second, it is a short-term effect; and third, the modelers make a complete hash of post-1940 temperature, both anthro-forced and non-anthro-forced, when they include it in their model mean, as demonstrated upthread.

    ENSO activity is not in the models, so removing it from GMST has some merit for comparison to the model mean (except that F&R do not compare their residual with the model mean, and neither does the IPCC).

    But here’s the problem: the whole residual approach comes crashing down if the 2015 El Nino is now to be removed. Because 21st-century GMST flatlines once ENSO is discounted, the Foster and Rahmstorf approach would now produce a completely different residual from the one they were left with in their 2011 paper, which ran through the 2010 observation data. That would be impossible now. The axis of the residual would still be the same, but the point at which it crosses observations will have moved on to 2015, sans the 2015 El Nino effect.

    In other words, the residual approach (Tier 3) is bogus.

    What’s in and what’s out of the F&R residual?

    Tier 2 – Foster and Rahmstorf (2011) residual
    In – MDV (therefore this must also be removed to compare to model mean – not done so)
    In – Long-term solar forcing on a multi centennial/millennial scale, roughly the secular trend (ST).
    Out – CO2 forcing (F&R and IPCC say otherwise but the residual does NOT conform to model mean)
    Out – ENSO
    Out – Volcanic activity
    Out – Short-term solar fluctuations (erroneous – see below)

    Solar cannot be removed, for two reasons. First, it is the energy input to the entire system, however much it fluctuates; second, it is impossible to know exactly which solar fluctuation caused which temperature fluctuation when there is a 10 to 100 year lag (Trenberth) of solar energy through the ocean to the atmosphere. Yes, there are atmospheric responses shorter than 10 years, but those are minor. Foster and Rahmstorf are kidding themselves (and taking the IPCC in with them) if they think they have “removed” the effect of short-term solar fluctuation from GMST.

    There is an entire section of AR5 Chapter 10 devoted to the residual approach (Tier 3). It is DEAD WRONG, as are the conclusions reached in it.
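    For concreteness: Foster and Rahmstorf’s “removal” is, in essence, a multiple linear regression of GMST on ENSO, volcanic and solar factors plus a linear trend, with the fitted factor terms then subtracted out. Here is a minimal sketch of that approach on purely synthetic data; every series and coefficient below is an illustrative assumption, not a real observation:

```python
# Minimal sketch of an F&R-style "residual": regress a synthetic
# temperature series on exogenous factors (ENSO index, volcanic
# aerosol, solar proxy) plus a linear trend, then subtract the fitted
# exogenous parts. All data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2011)          # F&R 2011 covered 1979-2010
n = years.size

# Synthetic exogenous factors (stand-ins for MEI, AOD, TSI)
enso = rng.normal(0, 1, n)
volc = np.zeros(n); volc[12] = 1.0     # a single eruption spike
solar = np.sin(2 * np.pi * (years - 1979) / 11)  # ~11-year cycle

# Synthetic GMST anomaly: trend + factor responses + noise
gmst = (0.016 * (years - 1979) + 0.1 * enso - 0.3 * volc
        + 0.05 * solar + rng.normal(0, 0.05, n))

# Multiple linear regression: design matrix [1, t, ENSO, volc, solar]
X = np.column_stack([np.ones(n), years - 1979, enso, volc, solar])
coef, *_ = np.linalg.lstsq(X, gmst, rcond=None)

# "Adjusted" series: remove the fitted exogenous components only,
# keeping intercept and trend (the residual that F&R plot)
adjusted = gmst - X[:, 2:] @ coef[2:]

print(f"recovered trend: {coef[1] * 10:.3f} C/decade")
```

The point of the sketch is that the regression hands back whatever trend is left over; nothing in the method itself establishes that the leftover trend is CO2-forced.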

  20. Richard C (NZ) on August 25, 2015 at 1:09 pm said:

    >”But problem is: the whole residual approach comes crashing down if the 2015 El Nino is now to be removed. Because of 21st century GMST flatlining once ENSO is discounted, the Foster and Rahmstorf approach would result in a completely different and new residual than the one they were left with in their 2011 paper. That went through 2010 observation data. That would be impossible now. The axis of the residual would still be the same but the point at which it crosses observations will have moved on to 2015 but sans 2015 El Nino effect.”

    I’m wrong, come to think of it. The original residual from the 2011 paper stays the same and just makes an abrupt turn to horizontal at 2010, i.e. the hiatus commences in the residual at 2010, the trajectory is a flatline, and it is completely divergent from the CO2-forced model mean.

    Highly embarrassing for Foster and Rahmstorf – and for the IPCC – and for Simon.

  21. Richard C (NZ) on August 25, 2015 at 2:01 pm said:

    >”The original residual from the 2011 paper stays the same and just makes an abrupt turn at 2010 to horizontal i.e. the hiatus commences in the residual at 2010, the trajectory is flatline, and is completely divergent from the CO2-forced model mean.”

    You can see this clearly in GISTEMP from 2010 onwards (even with subsequent version “adjustments”). Keep in mind that the 2015 El Nino effect still has time to run, i.e. there is still more El Nino peak to come than what is shown to be “removed” by the residual (Tier 3) approach, and that there was no La Nina in the intervening years. The last datapoint in this GISTEMP graph is 2015.5:

    http://www.woodfortrees.org/plot/gistemp/from:2010/plot/gistemp/from:2010/trend

    The Foster and Rahmstorf residual is ABOVE the green trend line at 2010 i.e. only the tip of 2010 is removed in the F&R residual (see RFC12 Fig 1 upthread). The El Nino effect around 2015 must be “removed” by the residual approach which probably lops off to where the green trend line is at 2015.5 (maybe lower).

    So the 2010 – 2015.5 Tier 3 residual is just a flatline hiatus – no CO2 effect whatsoever.

    Again, highly embarrassing for Foster and Rahmstorf – and for the IPCC – and for Simon.

  22. Richard C (NZ) on August 25, 2015 at 2:29 pm said:

    >”(see RFC12 Fig 1 upthread)”

    That’s this:

    Rahmstorf, Foster and Cazenave (2012) Figure 1 Observations vs Residual
    http://www.skepticalscience.com/pics/RFC12_Fig1.jpg

    Compare to the GISTEMP analysis in the last comment:

    GISTEMP 2010 – 2015.5
    http://www.woodfortrees.org/plot/gistemp/from:2010/plot/gistemp/from:2010/trend

    Easy to see that the RFC12 residual trajectory passes WELL ABOVE a 2015 residual after El Nino is removed. There is no observation data on that trajectory – it does not exist, even with an “adjustment” that puts the 2015 peak a few hundredths of a degree higher than the 2010 peak.

    The F&R trajectory is rising at about 0.17 °C per 5 years (I’ll look up the exact trend), i.e. that would put an extrapolated F&R residual up around the top of the woodfortrees chart (maybe off it).

  23. Alexander K on August 25, 2015 at 2:52 pm said:

    Re: Foster & Rahmstorf.
    So many years, so much BS from these two. I am chuckling at their silliness.

  24. Richard C (NZ) on August 25, 2015 at 3:50 pm said:

    I should point out that Rahmstorf, Foster and Cazenave (2012) Figure 1 is highly misleading i.e. scientifically and ethically fraudulent.

    Their residual projection SHOULD stop where it last crosses the observations, at 2010. Instead they assume there will be subsequent non-ENSO data on their residual trajectory, supporting their assumption that the residual will continue on its upward path. This was one of my main criticisms of the graph from the outset in 2012. The post-2010 residual makes up a profile where no data exists to justify it. The same goes for 1984 and 2006.

    As it turns out, the subsequent data after 2010 does NOT support their assumption.

    Well, that’s enough from the “broken fire hydrant” for today; I have to go to work now. “Fixed” for 24 hours, eh, Man of Thessaly? Must be frustrating, sitting on the fence, not being able to do anything about the flow of water.

  25. Richard C (NZ) on August 26, 2015 at 2:12 pm said:

    >”The F&R trajectory is rising about 0.17 C/5 years (I’ll look up the exact trend)”

    Rahmstorf, Foster and Cazenave (2012)

    2. Global temperature evolution

    The removal of the known short-term variability components reduces the variance of the data without noticeably altering the overall warming trend: it is 0.15 °C/decade in the unadjusted and 0.16 °C/decade in the adjusted data. From 1990–2011 the trends are 0.16 and 0.18 °C/decade and for 1990–2006 they are 0.22 and 0.20 °C/decade respectively. The relatively high trends for the latter period are thus simply due to short-term variability, as discussed in our previous publication (Rahmstorf et al 2007). During the last ten years, warming in the unadjusted data is less, due to recent La Niña conditions (ENSO causes a linear cooling trend of −0.09 °C over the past ten years in the surface data) and the transition from solar maximum to the recent prolonged solar minimum (responsible for a −0.05 °C cooling trend) (Foster and Rahmstorf 2011). Nevertheless, unadjusted observations lie within the spread of individual model projections, which is a different way of showing the consistency of data and projections (Schmidt 2012).

    Figure 1 shows that the adjusted observed global temperature evolution closely follows the central IPCC projections, while this is harder to judge for the unadjusted data due to their greater short-term variability.

    http://iopscience.iop.org/1748-9326/7/4/044035/article

    First note that they are NOT comparing apples to apples, even though the Figure 1 graph seems to show they are. They are comparing trend values, NOT where those trends lie in absolute temperature, even though they supposedly normalize the datasets at 1990. The model mean is much higher up the scale at 2010 than the residual is, which seems to indicate something wrong in Figure 1 back at 1990.

    Where the comparison goes wrong looks to be at the 1990 “zero” data. The RFC12 data at 1990 looks nothing like the 1990 data, either observations or models, in AR5 Figure 10.1:

    IPCC AR5 WGI Figure 10.1 (a)
    http://www.climatechange2013.org/images/figures/WGI_AR5_Fig10-1.jpg

    Anyway, back to the slope of the residual: “it is […] 0.16 °C/decade in the adjusted data”.

    I was way out; that is 0.08 °C per 5 years. The problem is that the slope came to an abrupt end at 2010 and is now horizontal (a “hiatus”), with huge statistical uncertainty due to the short period of 5.5 years since, i.e. the new residual slope is either flat, warming a little, or cooling a little.

    Tragically, Foster and Rahmstorf are back with what they started with – a hiatus, completely divergent from the model mean.

    I made this note upthread in respect of MDV and ENSO, which is probably confusing:

    What’s in and what’s out of the F&R residual?

    Tier 2 – Foster and Rahmstorf (2011) residual
    In – MDV (therefore this must also be removed to compare to model mean – not done so)
    […]
    Out – ENSO

    F&R remove recent sporadic ENSO activity, both El Nino and La Nina, and therefore much of the MDV signal has been removed in the residual. Except the MDV signal is not sporadic; it is a smooth oscillating curve with a period of approximately 60 years (“the 60-year climate cycle”). So for 30 years MDV is above the secular trend (ST) and for the next 30 it is below. The MDV phase crossover points are neutral, i.e. no MDV is added to or subtracted from the ST to arrive at a reconstructed smoothed GMST profile.

    Using nominal dates starting 1955, the MDV relationship in respect to ST is:

    1955 – neutral
    1970 – MDV maximum negative (-ve)
    1985 – neutral
    2000 – MDV maximum positive (+ve)
    2015 – neutral
    2030 – MDV maximum negative (-ve)

    At 2010 a small amount of the (+ve) MDV signal must be SUBTRACTED from the observations to arrive at the secular trend (ST), even though the sporadic ENSO activity has been removed. This places the ST a little below the F&R residual at 2010. It is this ST profile that must be compared to the model mean, because the MDV signal is similarly absent from the models. F&R do not do this and therefore, again, do not compare apples to apples (yes, they are doubly at fault). Neither is the ST linear, as the F&R residual is.

    At 2015 no MDV signal is either added to or subtracted from the observations, this is a phase crossover date i.e. neutral conditions. The 2015 El Nino will be smoothed out by either residual approach or signal analysis.

    At 2020 a small amount of the (-ve) MDV signal must be ADDED to the ST to arrive at observations.

    At 2030 MDV will be maximum negative i.e. in trough as opposed to peak at 2000. All of the MDV signal must be added to the ST to arrive at the observations.

    Post 2020 the ST will probably also be past peak and falling so the combination of falling ST and falling MDV means the resulting GMST profile, ST (going -ve) + (-ve) MDV, will drop rapidly i.e. GMST cooling.

    Post 2020 will confound the climate clowns completely.
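    The nominal dates above describe an idealized 60-year sinusoid that is neutral in 1955, at maximum negative in 1970, and so on. A minimal sketch of that cycle; the amplitude is a placeholder assumption, not a fitted value:

```python
# Idealized ~60-year MDV cycle as per the nominal dates above:
# zero (neutral) in 1955, -A (trough) in 1970, zero in 1985,
# +A (peak) in 2000, zero in 2015, -A in 2030.
import math

A = 0.1  # illustrative amplitude in degrees C (assumption, not fitted)

def mdv(year, amplitude=A, period=60.0, neutral_year=1955):
    """Idealized MDV signal relative to the secular trend (ST)."""
    phase = 2 * math.pi * (year - neutral_year) / period
    return -amplitude * math.sin(phase)

for year in (1955, 1970, 1985, 2000, 2015, 2030):
    print(year, round(mdv(year), 3))
```

Add this signal to the ST and you get the reconstructed smoothed GMST profile; at the neutral crossover dates the two coincide, which is why the model mean should pass through exactly those dates.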

  26. Richard C (NZ) on August 26, 2015 at 2:23 pm said:

    Wrong:

    “At 2020 a small amount of the (-ve) MDV signal must be [SUBTRACTED from] the ST to arrive at observations.”

    “At 2030 MDV will be maximum negative i.e. in trough as opposed to peak at 2000. All of the MDV signal must be [SUBTRACTED from] the ST to arrive at the observations.”

  27. Richard C (NZ) on August 26, 2015 at 4:20 pm said:

    >”Problem is: that [F&R residual] slope came to an abrupt end at 2010, the slope is now horizontal (a “hiatus”)”

    F&R, and subsequently RFC (poor Anny), simply assumed their residual slope would continue beyond 2010 because they had no “signal” that a change in regime was under way, i.e. they were in the dark. Signal analysts had been “signalled” that the trajectory was changing, so had a different mindset.

    Despite that deficiency in the residual approach, the change in the Tier 2 residual has been radical after 2010 – now flat. Conversely, the change in the Tier 3 secular trend is just a slight deflection in the slope, a small negative inflexion, very subtle, and nowhere near peak. As such the residual approach could be considered a leading indicator of the secular trend peak which has not yet been reached.

    And an indicator of radical proportions.

    Climate scientists’, and the IPCC’s, disdain for signal analysis explains their haplessness in regard to “natural variability” (mostly MDV). But now the IPCC’s own approach, as per F&R, is swinging around to bite them on the backsides – with very sharp teeth.

  28. Richard C (NZ) on August 26, 2015 at 8:48 pm said:

    Rahmstorf, Foster and Cazenave (2012) cite Hansen et al (1981) in respect to oceanic thermal lag. Hansen et al (page 3 at link below) put the initial atmospheric response at 6 years (as does Trenberth). But they add that heat exchange between the mixed layer and the thermocline may delay the response by “a few decades” (Trenberth says similar – “10 – 100 years”).

    A “few decades” oceanic thermal lag seems to be what the literature is returning recently, over 30 years after Hansen et al (1981) i.e. not much progress on this but the long lag makes more sense than only 6 years.

    Hansen et al is CO2-centric (due to misattribution – see below), but they concede, in respect of the 20th century (page 5):

    “The time history of the warming obviously does not follow the course of the CO2 increase (Fig 1), indicating that other factors must affect global mean temperature”

    Well yes: the driver is multi-millennial-scale lagged solar change in the secular trend (ST), overlaid by a multi-decadal oscillation (MDV), which they neglect. Unfortunately, on the solar driver, Hansen et al say (page 7):

    “Solar variability is highly conjectural”

    Well yes, the IPCC agrees it is the largest uncertainty of all; there is huge uncertainty in solar change.

    Rather than first considering, and getting as right as possible, the millennial-scale solar driver (i.e. the energy input into the entire system) and overlaying MDV on it, Hansen et al decide to consider theoretical CO2 forcing and volcanic aerosols first and then add solar “variation” as an afterthought to make up the numbers, neglecting MDV in the process (Figure 5, page 7).

    They first try an oceanic thermal lag from only the mixed layer but get an atmospheric response that is “larger than observed” (too much temp). Introducing longer lag from the deeper ocean as above gets an atmospheric temperature in “rough agreement with observations”. But that is with extremely dodgy solar (see next), neglected MDV, and an erroneous CO2 assumption.

    Now here’s the biggest clanger (page 7):

    “The hypothesized solar luminosity variation [48] [Hoyt 1979] improves the fit, as a consequence of the luminosity peaking in the 1930’s and declining into the 1970’s, leaving a residual variance of only 10%.”

    The solar peak is now generally recognized to have been around 1986, and it only began to decline in the 21st century, i.e. solar peaked 50 years AFTER Hansen et al’s modeled solar peak, and the atmospheric response to it is delayed “a few decades” by the ocean. The Hoyt79 solar “fit” was highly problematic, as Hansen et al go on to concede:

    “The improved fit provided by Hoyt’s solar variability represents a posteriori selection, since other hypothesized solar variations that we examined [for instance (49)] degrade the fit. This evidence is too weak to support any specific solar variability.”

    Yes it is, in other words there are BIG BIG problems with solar that need to be rectified with a full range of scenarios. Too hard (and off-message), so they immediately drop the solar non-fit and go on to extol their CO2 non-fit in the next paragraph and for the rest of the text (MMCC a “fascinating experiment”, and send more money please).

    Much to be gleaned from Hansen et al (1981).

    Hansen et al (1981)
    http://www.atmos.washington.edu/~davidc/ATMS211/articles_optional/Hansen81_CO2_Impact.pdf

  29. Richard C (NZ) on August 26, 2015 at 11:22 pm said:

    To be fair to Hansen et al, there was no knowledge of MDV in 1981, neither were there readily implemented signal analysis tools and techniques.

    But this is no excuse for not re-visiting the paper as science progressed and correcting it appropriately.

    A remote possibility, though; the paper is probably a sacred manuscript by now.

  30. Richard C (NZ) on August 26, 2015 at 11:58 pm said:

    ‘Climate Models Fail: Global Ocean Heat Content (Based on TOA Energy Imbalance)’

    Bob Tisdale / 1 hour ago August 26, 2015

    “Obviously [see Figure 7], based on the energy imbalances in the climate models used by the IPCC for their 5th Assessment Report, there is no agreement on how much heat the oceans should be accumulating, or even if the oceans are accumulating heat. And the differences in the simulated ocean heat accumulation are so great that using the model mean to represent the models is very misleading, because the model mean gives the impression of a consensus when there is none.”

    http://wattsupwiththat.com/2015/08/26/climate-models-fail-global-ocean-heat-content-based-on-toa-energy-imbalance/

  31. Richard C (NZ) on August 27, 2015 at 12:13 am said:

    ‘Lags and Leads’

    Willis Eschenbach / 2 days ago August 24, 2015

    “thermal lag is generally modeled as an “exponential” lag”

    “the delay in the response is governed by a time constant called “tau”. The larger the time constant tau, the greater the lag time”

    “the longer the lag, the smaller the resulting thermal response”

    “If we are looking for the result of sinusoidally varying forcing, the thermal response will have the same shape as the input forcing, but it will occur later in time.”

    “any signal will decay to within a percent or two of zero within one cycle.”

    http://wattsupwiththat.com/2015/08/24/lags-and-leads/

    Follow up:

    ‘Wrong Again, Again’
    Willis Eschenbach / 2 days ago August 24, 2015
    http://wattsupwiththat.com/2015/08/24/wrong-again-again/

    # # #

    ‘Lags and Leads’ is good background for oceanic lag and the atmospheric temperature response. Unfortunately the time frame is far too short to see how it works out for centennial-scale climate.
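    The exponential-lag behaviour Eschenbach describes can be sketched numerically with a one-box model, dT/dt = (F - T)/tau, under sinusoidal forcing: the larger tau is, the smaller and later the response. The tau and period values here are illustrative only, not fitted to any climate data:

```python
# One-box exponential lag: integrate dT/dt = (F - T)/tau for a unit
# sinusoidal forcing and measure the response amplitude once the
# transient has died away. Longer tau -> smaller, later response.
import math

def response_amplitude(tau, period=60.0, dt=0.01, cycles=10):
    """Euler-integrate the lagged response and return the peak
    absolute response over the final forcing cycle."""
    omega = 2 * math.pi / period
    steps = int(cycles * period / dt)
    T, peak = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        F = math.sin(omega * t)
        T += dt * (F - T) / tau
        if t > (cycles - 1) * period:   # measure only the last cycle
            peak = max(peak, abs(T))
    return peak

short = response_amplitude(tau=5.0)    # fast-responding system
long_ = response_amplitude(tau=30.0)   # sluggish, deep-ocean-like

print(f"tau= 5: amplitude ~ {short:.3f}")
print(f"tau=30: amplitude ~ {long_:.3f}")
```

The numerical amplitudes match the analytic result for a lagged linear system, 1/sqrt(1 + (omega*tau)^2), which is exactly the “longer the lag, the smaller the resulting thermal response” point quoted above.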

  32. Richard C (NZ) on August 27, 2015 at 9:57 am said:

    Gross Suppression Of Science …Former NOAA Meteorologist Says Employees “Were Cautioned Not To Talk About Natural Cycles”

    By P Gosselin on 26. August 2015

    Former NOAA meteorologist David Dilley has submitted the essay below, which has two parts: 1) how the government has been starving of funding those researchers who hold alternative opinions, and 2) how climate cycles show we are starting a cooling period.

    Readers will recall that David Dilley is a 40-year meteorology veteran and the producer of the excellent video: “Is Climate Change Dangerous?“, which first was presented at NTZ. Since then the video has been viewed more than 10,000 times and the NTZ story shared in social media over 800 times.
    ==================================
    Suppressing the Truth – the Next Global Cooling Cycle

    By David Dilley, former NOAA meteorologist

    “According to some university researchers who were former heads of their departments, if a university even mentioned natural cycles, they were either denied future grants, or lost grants. And it is common knowledge that United States government employees within NOAA were cautioned not to talk about natural cycles. It is well known that most university research departments live or die via the grant system. What a great way to manipulate researchers in Europe, Australia and the United States.”

    http://notrickszone.com/2015/08/26/suppression-of-science-former-noaa-meteorologist-says-employees-were-cautioned-not-to-talk-about-natural-cycles

  33. Richard C (NZ) on August 27, 2015 at 1:39 pm said:

    >”Using nominal dates starting 1955, the MDV relationship in respect to ST is:”

    Starting 1895 we have:

    1895 – neutral
    1910 – MDV maximum negative (-ve)
    1925 – neutral
    1940 – MDV maximum positive (+ve)
    1955 – neutral
    1970 – MDV maximum negative (-ve)
    1985 – neutral
    2000 – MDV maximum positive (+ve)
    2015 – neutral
    2030 – MDV maximum negative (-ve)

    This is in respect to HadCRUT4: http://www.woodfortrees.org/plot/hadcrut4gl

    If CO2 is the driver of the secular trend (ST) in GMST, the CO2-forced model mean MUST pass through all the MDV neutral observation dates because MDV is absent from the models. That is this spline:

    1895 – from below
    1925 – from above
    1955 – from below
    1985 – from above
    2015 – from below

    Obviously this is NOT the case in the current IPCC climate models which pass through 2000 – MDV maximum positive (+ve) and are then WELL ABOVE the 2015 observations making it impossible to pass through 2015 from below.

    Hansen et al (1981) have in Figure 3, firstly, an observation profile that does not conform to HadCRUT4 in the early part of the 20th century:

    Hansen et al (1981) Figure 3
    http://farm5.static.flickr.com/4121/4929599230_a153eda926.jpg

    Secondly, in (b) they have CO2-only ((b) is the longer thermal-lag option), which is almost MDV-neutral apart from the temperature profile’s non-conformity with HadCRUT4 (CO2 passes through 1915 from above and 1960 from below).

    Problem is: there is no way CO2 will then pass through 1985 from above. Hansen et al then set to with a dodgy sun and volcanoes, which “tweak” the CO2-based profile to conform to the GMST profile.

    Problem is (again): they “tweak” to the wrong profile. Their observations include MDV; their model doesn’t. They should “tweak” to the MDV-neutral spline. Consequently, at 1970 – MDV maximum negative (-ve) – their model is spot on the observations (it should be above them), with no hope of passing through MDV-neutral 1985 from above.

    The current crop of climate models make exactly the same mistake as Hansen et al did in 1981 i.e. there has been no penny drop in the minds of climate modelers over the ensuing 33+ years.

    Maybe over the next 33 years their error will dawn on them.

  34. Richard C (NZ) on August 27, 2015 at 5:13 pm said:

    >”Hansen et al [1981] then set to with dodgy sun and volcanoes which “tweak” the CO2-based profile to conform to the GMST profile.”

    I mean here that their solar input is dead wrong, but not the volcanoes. The volcanoes, however, are completely irrelevant to the long-term secular trend of GMST on a millennial time scale, and don’t matter in the short term either. The effect of Pinatubo, for example, has been and gone, and it did not change the secular trend one bit, or upset MDV. And Pinatubo, although thought large by today’s standards, doesn’t even rate in the volcano rankings:

    List of largest volcanic eruptions
    https://en.wikipedia.org/wiki/List_of_largest_volcanic_eruptions

    Age (Ma) is this unit:

    Ma (for megaannus) is a unit of time equal to one million years. It is commonly used in scientific disciplines such as geology, paleontology, and celestial mechanics to signify very long time periods into the past or future. For example, the dinosaur species Tyrannosaurus rex was abundant approximately 66 Ma (66 million years) ago. (“Ago” is not always stated: if a quantity is given without explicitly discussing a duration, “ago” can be assumed; the alternative unit “mya” includes “ago” explicitly.)

    If you click the arrow at the top of the Age column you can re-sort the ranking to put the largest recent eruptions at the top. The top three recent are:

    Taupo Volcano—Oruanui eruption, 0.027 Ma, Taupo Volcanic Zone, New Zealand
    Lake Toba—Youngest Toba Tuff, 0.073 Ma, Sunda Arc, Indonesia
    Whakamaru, 0.254 Ma, Taupo Volcanic Zone, New Zealand
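    Converted out of Ma (1 Ma = one million years, as defined above), those three ages are easy to check; a trivial sketch, with the names abbreviated for illustration:

```python
# Trivial conversion of the Ma ("megaannus") ages listed above into
# years ago: 1 Ma = 1,000,000 years.
eruptions_ma = {
    "Taupo (Oruanui)": 0.027,
    "Lake Toba (Youngest Toba Tuff)": 0.073,
    "Whakamaru": 0.254,
}
for name, ma in eruptions_ma.items():
    print(f"{name}: {ma} Ma = {round(ma * 1_000_000):,} years ago")
```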

    In other words, the climate modelers fixate on the short-term things that don’t matter to the long-term secular trend (volcanoes), improperly treat the thing that matters most in the long term (solar), and then neglect what matters most in the short term (MDV).

    Hansen et al in 1981, and subsequent IPCC climate modeling, should neglect volcanoes entirely (Occam’s Razor) and get solar right, in order to get the model mean conforming to the MDV-neutral spline as laid out previously. THEN they must introduce MDV in order to trace the GMST profile.

    Of course, that means neglecting CO2 entirely too (Occam’s Razor again).

  35. Richard C (NZ) on August 27, 2015 at 7:33 pm said:

    >”…volcanoes however are completely irrelevant to the long-term secular trend of GMST on a millennial time scale, and don’t matter in the short-term either.”

    >”Hansen et al in 1981, and subsequent IPCC climate modeling, should neglect volcanoes entirely (Occam’s Razor) and get solar right in order to get the model mean conforming to the MDV-neutral spline as laid out previously. THEN they must introduce MDV in order to trace the GMST profile.”

    Once the modelers have got that right (a long, long way to go on that), only then can they turn their attention to volcanic “wiggles” in GMST (as if it will matter, since CO2 will not be an issue by then). But there’s a whole body of climate literature already fixated on volcanoes. For example:

    ‘Volcanoes may be responsible for most of the global surface warming slowdown’

    Dana Nuccitelli, Wednesday 3 December 2014

    “A new study [see below] estimates surface temperature cooling from volcanoes at 0.05–0.12°C since 2000”

    “A new study has found that when particulates from small volcanic eruptions are properly accounted for, volcanoes may be responsible for much of the slowdown in global surface warming over the past 15 years. ”

    http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/dec/03/volcanoes-responsible-for-lot-of-global-surface-warming-slowdown

    Note “small” in the second quote. Problematic as immediately realized in comments. Dana Nuccitelli evades the obvious in a subthread starting here:

    http://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/dec/03/volcanoes-responsible-for-lot-of-global-surface-warming-slowdown#comment-44505469

    gretch, 4 Dec 2014 4:39

    Dana your article states: “Since the year 2000, the study estimates that volcanoes have had a cooling influence on global surface temperatures.” Question: Does the study state what cooling influence volcanoes had prior to the year 2000? And if not, why not? Your article continues: “The likely range of this volcanic cooling influence lies between 0.05 and 0.12°C.” So can we assume that all temperature readings since 1880 have a built in cooling influence from volcanoes of between 0.05 and .12C? Or do volcanoes only erupt during surface temperature hiatuses?

    DanaNuccitelli gretch, 4 Dec 2014 6:12

    This paper only considered data over the period 2000 to 2013.

    gretch DanaNuccitelli, 4 Dec 2014 6:27

    But presumably there have been volcanic eruptions prior to 2000 and presumably these eruptions of aerosol particulates have had some dampening effect on temperatures recorded prior to 2000. So what we really need is a study that shows whether or not the cooling effect during the recent 15 year hiatus is greater than, less than, or about equal to the cooling effect from prior to 2000.

    [continues]

    # # #

    The “study” was in respect of “small” volcanoes only, i.e. Agung, El Chichon and Pinatubo, which were introduced to the models prior to 2000, were larger (“large”) than the study’s scope after 2000 and are therefore irrelevant to the discussion prior to 2000.

    An exercise in stupidity. Even the “large” volcanic eruptions are merely “wiggles” in GMST, irrelevant to both the secular trend (ST) and multidecadal variability (MDV), let alone the “small” ones. The paper is:

    ‘Total volcanic stratospheric aerosol optical depths and implications for global climate change’
    Ridley et al (2014)
    http://onlinelibrary.wiley.com/doi/10.1002/2014GL061541/abstract

  36. Richard,
    Multi-decadal variation is variation around the mean; it does not explain the warming that has occurred over the past century. They are patterns within a chaotic system and are not wholly predictable. This is especially true when the system is being perturbed.

  37. Richard C (NZ) on August 28, 2015 at 11:08 am said:

    Simon.

    >”Multi-decadal variation is variation around the mean; it does not explain the warming that has occurred over the past century.”

    EXACTLY, Simon. Whatever made you think I did not know this? That is what I’ve been elucidating for days (or is it weeks now?). The CO2-forced model mean is NOT on the secular trend (ST) – it SHOULD be, because MDV is absent from the models and must be introduced now (the IPCC concedes this, sort of, in Chapter 9 – see upthread).

    Foster and Rahmstorf did NOT find the secular trend; neither did Macias et al, though they are very close. The fact that the CO2-forced model mean is NOT on the ST indicates that the ST is driven by something else. That “something” driving the ST operates on a multi-millennial time scale, and it is the sun, as Luedecke, Hempelmann, and Weiss demonstrate over the last 2500 years (from upthread):

    Luedecke, Hempelmann, and Weiss (2013) and (2015)
    http://notrickszone.com/2015/08/21/study-german-scientists-conclude-20th-century-warming-nothing-unusual-foresee-global-cooling-until-2080/#sthash.BXEr03W9.dpbs

    You cannot do that with CO2. Clearly, the solar signal is the driver of the secular trend (ST) in GMST. Overlaid on that is a trendless MDV signal as Luedecke, Hempelmann, and Weiss show for Central Europe from 1750.

    >”They are patterns within a chaotic system and are not wholly predictable”

    Piffle Simon. MDV is entirely predictable (within reason) for at least the next 60 years ahead based on 1850 – present as Macias et al demonstrate (blue line Figure 1):

    Macias et al (2014)
    http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0107222#pone-0107222-g005

    The blue MDV line is absent from the models. GMST = ST + MDV as Macias et al show. Therefore Equation (2):

    Model mean GMST natural + theoretical man-made = (ST + TRF) + MDV (2)

    Where TRF is theoretical radiative forcing.

    Obviously, given where the non-MDV CO2-forced model mean is now in respect to observations, Equation (2) returns a profile that can never conform to observed GMST (far too warm).
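    The GMST = ST + MDV decomposition invoked here can be illustrated numerically. Below is a minimal sketch using a synthetic series (a linear secular trend plus an idealised 60-year oscillation with illustrative amplitudes, not actual HadCRUT4 data) and a crude 61-year moving average in place of the SSA filtering Macias et al actually use:

```python
import numpy as np

# Synthetic GMST = ST + MDV: a linear secular trend plus an idealised
# 60-year oscillation (illustrative amplitudes, not fitted to HadCRUT4)
years = np.arange(1850, 2016)
st_true = 0.005 * (years - 1850)                          # deg C, secular trend
mdv_true = 0.12 * np.sin(2 * np.pi * (years - 1895) / 60)
gmst = st_true + mdv_true

# Crude low-pass: a centred 61-year moving average spans one full MDV
# period, so the oscillation averages out and only the trend survives
window = 61
st_est = np.convolve(gmst, np.ones(window) / window, mode="same")
mdv_est = gmst - st_est

# Edges of the moving average are unreliable; compare interior points only
interior = slice(window, len(years) - window)
print(float(np.max(np.abs(st_est[interior] - st_true[interior]))))
```

    In the interior of the series the recovered trend matches the true trend to a few thousandths of a degree, which is the sense in which removing MDV exposes the secular trend.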

    >”This is especially true when the system is being perturbed.”

    Piffle again Simon. Look at Macias et al Figure 1 – no disruption whatsoever since the 1950s CO2 uptick (the theoretical perturbation you allude to) i.e. especially false in this case.

    BTW Simon, waiting for you to “happily modify your opinion” given both Magoo and myself (IPCC TOA climate change criteria and water vapour feedback) have satisfied your criteria upthread, inescapably, undeniably, and unequivocally.

    Well?

  38. Richard C (NZ) on August 28, 2015 at 11:20 am said:

    Simon.

    >”The CO2-forced model mean is NOT on the secular trend (ST) – it SHOULD be because MDV is absent from the models”

    For the model mean to be on the secular trend it MUST pass through the non-MDV spline in GMST. From upthread, that is this spline of MDV-neutral dates:

    1895 – from below
    1925 – from above
    1955 – from below
    1985 – from above
    2015 – from below

    Obviously this is NOT the case in the current IPCC climate models which pass through 2000 – MDV maximum positive (+ve) and are then WELL ABOVE the 2015 observations making it impossible to pass through 2015 from below.

    The spline is central to this sequence:

    1895 – neutral
    1910 – MDV maximum negative (-ve)
    1925 – neutral
    1940 – MDV maximum positive (+ve)
    1955 – neutral
    1970 – MDV maximum negative (-ve)
    1985 – neutral
    2000 – MDV maximum positive (+ve)
    2015 – neutral
    2030 – MDV maximum negative (-ve)

    This is in respect to HadCRUT4: http://www.woodfortrees.org/plot/hadcrut4gl

    If CO2 is the driver of the secular trend (ST) in GMST, the CO2-forced model mean MUST pass through all the MDV neutral observation dates because MDV is absent from the models. Obviously it doesn’t therefore CO2 is NOT the driver of the secular trend in GMST.

    Hansen et al got the modeling wrong in 1981 (see upthread) and it is still wrong 33+ years later.
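    The neutral/extremum sequence listed above can be generated from an idealised model. A small sketch, assuming MDV is a pure 60-year sinusoid with zeros at the listed neutral dates and its maximum negative at 1910 (a deliberate simplification of the multidecadal signal Macias et al extract):

```python
import math

def mdv_phase(year, period=60, neutral_year=1895):
    """Phase label for an idealised 60-year MDV cycle.

    Assumes MDV ~ -sin(2*pi*(year - 1895)/60), so 1910 is the maximum
    negative and 1940 the maximum positive, matching the sequence above."""
    value = -math.sin(2 * math.pi * (year - neutral_year) / period)
    if abs(value) < 1e-9:
        return "neutral"
    if value > 0.999:
        return "MDV maximum positive (+ve)"
    if value < -0.999:
        return "MDV maximum negative (-ve)"
    return "intermediate"

# Reproduce the 1895-2030 sequence at 15-year steps
for y in range(1895, 2031, 15):
    print(y, "-", mdv_phase(y))
```

    The loop reproduces the alternating neutral / maximum-negative / neutral / maximum-positive pattern at the 15-year steps from 1895 through 2030.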

  39. Richard C (NZ) on August 28, 2015 at 11:37 am said:

    “mean” should be omitted from Equation (2).

    >”Model mean GMST natural + theoretical man-made = (ST + TRF) + MDV (2)”

    Should just be:

    Model GMST natural + theoretical man-made = (ST + TRF) + MDV (2)

    An equation for the current model mean omits MDV:

    Model mean GMST natural + theoretical man-made = (ST + TRF) (5)

  40. Richard C (NZ) on August 28, 2015 at 1:25 pm said:

    >”Foster and Rahmstorf did NOT find the secular trend”

    This is actually a minor quibble. The F&R residual goes through 2010 from below. It SHOULD go through 2015 (or close) from below so F&R were 5 yrs out and their residual has now flatlined since 2010. Embarrassing for F&R, RFC, and the IPCC, but still a minor quibble because at least their residual passes through 2010 from below.

    MAJOR quibble is: contrary to the RFC12 comparison, the non-MDV CO2-forced CMIP5 model mean is nowhere near the F&R residual, or the actual GMST secular trend once MDV is removed. The model mean is WELL ABOVE both and it is impossible for it to pass through 2015 (or even 2010) from below (see IPCC graphs upthread).

    In other words, CO2 is not the driver of the secular trend in GMST despite all of Foster and Rahmstorf’s assertions.

  41. Richard C (NZ) on August 28, 2015 at 1:49 pm said:

    >”The [CO2-forced] model mean is WELL ABOVE both [F&R residual and ST] and it is impossible for it to pass through 2015 (or even 2010) [observations] from below (see IPCC graphs upthread).”

    IPCC AR5 WGI Figure 10.1 (a)
    http://www.climatechange2013.org/images/figures/WGI_AR5_Fig10-1.jpg

    The inescapable, undeniable, unequivocal evidence – from the IPCC.

    Note that the non-MDV model mean is actually quite good at the MDV-neutral dates (the spline) in the early part of the series:

    1895 – from below (Yes, valid)
    1925 – from above (Yes, valid)
    1955 – from below (Yes, valid)

    But the modeling breaks down (hopelessly) after 1955. The non-MDV model mean should NOT conform to MDV-included observations from 1955 – 2000. After 1955 the model mean SHOULD pass through the MDV-neutral dates (the spline) as shown previously:

    1985 – from above (No, invalid)
    2015 – from below (No, invalid)

    Obviously the models are DEAD WRONG after 1955.

  42. Richard C (NZ) on August 28, 2015 at 2:20 pm said:

    I should point out that the secular trend (ST) in GMST as defined by say Macias et al (2014) is really also just an oscillation over the last 2500 years at least, as per Luedecke, Hempelmann, and Weiss (2015) above.

    The secular trend in Ljungqvist (2010) is about -0.175 C/1000 yrs for the last 2000 years (see page 1 of comments). This is the multi-millennial spline.

    Multi-centennial variation (MCV) oscillates about that multi-millennial ST spline, as Simon correctly states in respect to MDV and MCV.

    The multi-millennial ST spline is probably an oscillation too i.e. MMV.

  43. Richard C (NZ) on August 28, 2015 at 6:38 pm said:

    >”An equation for the current model mean omits MDV:
    Model mean GMST natural + theoretical man-made = (ST + TRF) (5)”

    Should be Equation (3) as per page 1 comments:

    Model mean GMST natural + theoretical man-made = (ST + TRF) (3)

  44. Richard C (NZ) on August 28, 2015 at 6:45 pm said:

    >”MAJOR quibble is: contrary to the RFC12 comparison, the non-MDV CO2-forced CMIP5 model mean is nowhere near the F&R residual, or the actual GMST secular trend once MDV is removed. The model mean is WELL ABOVE both and it is impossible for it to pass through 2015 (or even 2010) from below (see IPCC graphs upthread).”

    Also perfectly clear from the graph previously posted in page 1 of comments:

    Climate models vs Global Average Surface Temperature
    http://www.drroyspencer.com/wp-content/uploads/90-CMIP5-models-vs-observations-with-pause-explanation.png

    Where:
    Model mean GMST natural + theoretical man-made = (ST + TRF) (3)
    GMST natural = ST + MDV (1)

  45. Richard C (NZ) on August 29, 2015 at 8:02 pm said:

    Lobbed this in to Climate Etc:

    richardcfromnz | August 29, 2015 at 3:59 am |
    http://judithcurry.com/2015/08/28/week-in-review-science-edition-19/#comment-727991

    Jim D, you have identified a non-MDV spline in GISTEMP that passes through 2015. Similar exists in HadCRUT4.

    The spline is central to this sequence:

    1895 – neutral
    1910 – MDV maximum negative (-ve)
    1925 – neutral
    1940 – MDV maximum positive (+ve)
    1955 – neutral
    1970 – MDV maximum negative (-ve)
    1985 – neutral
    2000 – MDV maximum positive (+ve)
    2015 – neutral
    2030 – MDV maximum negative (-ve)

    This is in respect to HadCRUT4: http://www.woodfortrees.org/plot/hadcrut4gl

    Now see relevant model mean vs observations comparisons:

    IPCC AR5 WGI Figure 10.1 (a)
    http://www.climatechange2013.org/images/figures/WGI_AR5_Fig10-1.jpg

    Model mean vs GMST (HadCRUT4)
    http://www.drroyspencer.com/wp-content/uploads/90-CMIP5-models-vs-observations-with-pause-explanation.png

    Model mean trajectory:

    1895 – from below (Yes, valid)
    1925 – from above (Yes, valid)
    1955 – from below (Yes, valid)
    1985 – from above (No, invalid)
    2015 – from below (No, invalid)

    Obviously the CO2-forced model mean does not pass through the MDV-neutral spline after 1955; the trajectory becomes much steeper. This is highly problematic. It implies that CO2 does not drive the secular trend in GMST after MDV is removed.

    More in page 2 of comments at Climate Conversations Group:
    http://www.climateconversation.wordshine.co.nz/2015/08/fatal-deficiencies-destroy-scientific-case-for-climate-catastrophe

  46. Richard C (NZ) on August 30, 2015 at 11:10 am said:

    Lobbed in the IPCC’s climate change criteria too for good measure:

    http://judithcurry.com/2015/08/28/week-in-review-science-edition-19/#comment-728167

  47. Richard C (NZ) on August 31, 2015 at 11:20 am said:

    For the record, given the IPCC’s TOA climate change criteria:

    ‘No Consensus: Earth’s Top of Atmosphere Energy Imbalance in CMIP5-Archived (IPCC AR5) Climate Models’

    Bob Tisdale / August 11, 2015

    http://wattsupwiththat.com/2015/08/11/no-consensus-earths-top-of-atmosphere-energy-imbalance-in-cmip5-archived-ipcc-ar5-climate-models/

    I think I referenced this upthread but good to have it next to the Climate Etc links.

  48. You really have to watch US Senator Ted Cruz demolish this climate activist who harasses him
    [Video]

    http://therightscoop.com/ted-cruz-schools-two-separate-climate-change-activists-who-were-trying-to-work-him-over/

  49. Richard C

    I’m not sure where we got to on this.

    Did we look at Tisdale’s post on WUWT
    http://wattsupwiththat.com/2015/08/11/no-consensus-earths-top-of-atmosphere-energy-imbalance-in-cmip5-archived-ipcc-ar5-climate-models/

    He quotes Trenberth who claims TOA energy imbalance is 0.5-1W / m2

    Sounds like the numbers you were quoting.

    Am I right?

  50. Richard C (NZ) on September 3, 2015 at 5:49 pm said:

    Andy

    >”I’m not sure where we got to on this. Did we look at Tisdale’s post on WUWT […link…] He quotes Trenberth who claims TOA energy imbalance is 0.5-1W / m2. Sounds like the numbers you were quoting. Am I right?”

    Yes that is correct Andy. Although the numbers I’m quoting (0.6 W.m-2) are from the IPCC Chapter 2 citations, Loeb et al (2012) and Stephens et al (2012) at both TOA and Sfc.

    Loeb et al Figure 1
    http://www.skepticalscience.com/pics/Loeb2012-TOAfluxvsOHC.jpg

    Kevin Trenberth’s reaction to the Loeb et al Letter is here:

    http://davidappell.blogspot.co.nz/2012/01/trenberth-response-to-todays-loeb-et-al.html

    I didn’t say much about Tisdale’s post upthread except to reproduce this quote:

    “Obviously [see Figure 7], based on the energy imbalances in the climate models used by the IPCC for their 5th Assessment Report, there is no agreement on how much heat the oceans should be accumulating, or even if the oceans are accumulating heat. And the differences in the simulated ocean heat accumulation are so great that using the model mean to represent the models is very misleading, because the model mean gives the impression of a consensus when there is none.”

    The Trenberth quote you refer to from Tisdale’s post is from a recent paper:

    Trenberth et al. (2014) Earth’s Energy Imbalance [Full text]
    http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-13-00294.1

    I find the first paragraph of the intro highly problematic (apart from it just being an activist propaganda statement that is):

    “With increasing greenhouse gases in the atmosphere, there is an imbalance in energy flows in and out of the earth system at the top of the atmosphere (TOA): the greenhouse gases increasingly trap more radiation and hence create warming (Solomon et al. 2007; Trenberth et al. 2009). “Warming” really means heating and extra energy, and hence it can be manifested in many ways. Rising surface temperatures are just one manifestation. Melting Arctic sea ice is another. Increasing the water cycle and altering storms is yet another way that the overall energy imbalance can be perturbed by changing clouds and albedo. However, most of the excess energy goes into the ocean (Bindoff et al. 2007; Trenberth 2009). Can we monitor the energy imbalance with direct measurements, and can we track where the energy goes? Certainly we need to be able to answer these questions if we are to properly track how climate change is manifested and quantify implications for the future.”

    This is just misattribution. The theoretical CO2 “forcing” is now 1.9 W.m-2 and increasing rapidly. Tisdale points to CERES “adjustments” but I’m just going by the IPCC science. You could argue that the TOA imbalance is 6.5 W.m-2; I would argue that Shapiro et al found a 6 W.m-2 solar difference between Maunder Minimum and Modern Maximum. I put it this way at Climate Etc:

    Fact remains, inescapably, undeniably, unequivocally, the IPCC’s criteria for climate change is the TOA energy imbalance. Therefore, a valid agent of climate change is one which moves the balance to its observed imbalance:

    0.6 W.m-2 – TOA imbalance 2000 – 2010, trendless
    1.0 W.m-2 – Shapiro et al solar forcing TOA, trendless
    1.9 W.m-2 – Theoretical CO2 forcing TOA, increasing.

    You be the judge.

    http://judithcurry.com/2015/08/28/week-in-review-science-edition-19/#comment-728444

    After repeating the same things over and over to 2 different guys I bailed out at that point. Having been in too many of these repetitious no-win thread discussions I decided it wasn’t worth the effort.

  51. Richard C (NZ) on September 3, 2015 at 5:58 pm said:

    >”This [Trenberth et al 2014 intro] is just misattribution.”

    The TOA imbalance forcing has already occurred at the surface by solar change on a millennial scale and which is lagged by oceanic thermal inertia. I detailed the lag at the Climate Etc thread linked above (twice).

    Theoretical CO2 “forcing” on the other hand, is instantaneous speed-of-light radiation between Sfc and TOA i.e. no oceanic lag.

  52. Richard C (NZ) on September 3, 2015 at 6:38 pm said:

    Notice in Trenberth et al (2014) that after “greenhouse gases increasingly trap more radiation and hence create warming” in the intro they neglect to apply GHG theory for the rest of the paper. This is no different to IPCC AR5 Chapter 10 Detection and Attribution. The criteria for a valid agent of climate change is stated elsewhere in Assessment Reports and AR5 Chapter 2 actually cites the observed TOA imbalance. But Chapter 10 neglects to apply the criteria to the observations.

    Easy to see why Trenberth et al (2014) didn’t want to go there. Their Figure 9 is the equivalent of Loeb et al Figure 1:

    Trenberth et al (2014) Fig. 9.
    Net radiation from the TOA from CERES [Energy Balanced and Filled (EBAF) Ed2.6r; http://ceres.larc.nasa.gov/products.php?product=EBAF%5D. The ASR (red) and OLR (blue) are given on the right axis and RT (ASR − OLR; black) is given on the left axis (W m−2; note the change in scale). For ASR, OLR, and RT, the ±1 standard deviation range is given in light red, blue, and gray. Also shown is the Niño-3.4 SST index (green; right axis, °C). The decadal low-pass filter is a 13-term filter used in Trenberth et al. (2007), making it similar to a 12-month running mean.
    http://journals.ametsoc.org/na101/home/literatum/publisher/ams/journals/content/clim/2014/15200442-27.9/jcli-d-13-00294.1/20140419/images/medium/jcli-d-13-00294.1-f9.gif

    At 2015 the theoretical net anthro “forcing” which is now around 2 W.m-2 (includes 1.9 W.m-2 CO2) is on the verge of going off the chart in respect to the imbalance (black line).

  53. Given the trendless nature of TOA imbalance, scenario RCP 8.5 looks like a very long shot.

    I’ve been a bit side-tracked by climate sensitivity. Since RCP 8.5 is having a direct effect on my life, and thousands of others, it has really focussed my mind on the key issue.

    So thanks Richard C. I am all ears and eyes.

  54. Richard C (NZ) on September 3, 2015 at 9:25 pm said:

    Andy.

    >”it [RCP forcing] has really focussed my mind on the key issue.”

    Yes, the primary climate change criteria is paramount. To reiterate in full:

    IPCC climate change criteria: radiative forcing “measured at top of atmosphere” (IPCC AR4 FAQ 2.1, Box 1 – “What is radiative forcing?”).

    FAQ 2.1, Box 1: What is Radiative Forcing?

    What is radiative forcing? The influence of a factor that can cause climate change, such as a greenhouse gas, is often evaluated in terms of its radiative forcing. Radiative forcing is a measure of how the energy balance of the Earth-atmosphere system is influenced when factors that affect climate are altered. The word radiative arises because these factors change the balance between incoming solar radiation and outgoing infrared radiation within the Earth’s atmosphere. This radiative balance controls the Earth’s surface temperature. The term forcing is used to indicate that Earth’s radiative balance is being pushed away from its normal state.

    Radiative forcing is usually quantified as the ‘rate of energy change per unit area of the globe as measured at the top of the atmosphere’, and is expressed in units of ‘Watts per square metre’ (see Figure 2). When radiative forcing from a factor or group of factors is evaluated as positive, the energy of the Earth-atmosphere system will ultimately increase, leading to a warming of the system. In contrast, for a negative radiative forcing, the energy will ultimately decrease, leading to a cooling of the system. Important challenges for climate scientists are to identify all the factors that affect climate and the mechanisms by which they exert a forcing, to quantify the radiative forcing of each factor and to evaluate the total radiative forcing from the group of factors.

    https://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html
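    For context on the 1.9 W.m-2 theoretical CO2 forcing quoted throughout this thread: it is reproducible from the widely used simplified expression ΔF = 5.35 ln(C/C0) of Myhre et al. (1998), taking C0 = 280 ppm pre-industrial and C ≈ 400 ppm as an assumed 2015 concentration (both values are assumptions for illustration, not from the FAQ above):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing at TOA in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(400.0), 2))  # ~1.91 W/m^2, matching the ~1.9 quoted
```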

    Now read the Trenberth et al intro:

    “……greenhouse gases increasingly trap more radiation and hence create warming [……] “Warming” really means heating and extra energy, and hence it can be manifested in many ways. […….] However, most of the excess energy goes into the ocean.”

    Well yes, except GHG cause is both misattribution and an impossibility. The surface imbalance is a global average. In the tropics it is in the order of 24 W.m-2 (Fairall et al 1996) and in the Southern Ocean it is -11 W.m-2 (Okada and Yamanouchi 2002); the overall average is 0.6 W.m-2 “going into the ocean” (Stephens et al 2012).

    CO2 cannot explain the tropical and Southern Ocean surface imbalances. Neither can it explain the global average. Worse, theoretical CO2 “forcing” (currently 1.9 W.m-2) cannot and does not do any work between the surface and TOA, as Stephens et al demonstrate inadvertently (0.6 W.m-2 imbalance Sfc and 0.6 W.m-2 imbalance TOA).

    CO2 is simply a passive energy transfer medium, a coolant by definition, refrigerant code R744.

    So the theory is already highly problematic in respect to the current earth’s energy balance. To then go and introduce a theoretical, speculative, and unrealistic 8.5 W.m-2 “forcing”, implying a TOA imbalance of 8.5 W.m-2 (something the CO2-centrics at Climate Etc cannot accept), is outrageous in the first instance, and to regulate by it is scandalous – it cannot be lawful.
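    To put the RCP8.5 figure in some numerical perspective: inverting the widely used simplified expression ΔF = 5.35 ln(C/C0) (Myhre et al. 1998) gives the CO2-equivalent concentration an 8.5 W.m-2 forcing would imply. This is an assumption-laden sketch (the RCPs bundle all forcing agents, not just CO2, and 280 ppm pre-industrial is an assumed baseline):

```python
import math

def co2_equiv_for_forcing(forcing_wm2, c0_ppm=280.0):
    """Invert dF = 5.35*ln(C/C0) (Myhre et al. 1998) for the
    CO2-equivalent concentration a given TOA forcing implies."""
    return c0_ppm * math.exp(forcing_wm2 / 5.35)

print(round(co2_equiv_for_forcing(8.5)))  # ~1370 ppm CO2-equivalent by 2100
```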

  55. Richard C (NZ) on September 3, 2015 at 10:37 pm said:

    >”CO2 is simply a passive energy transfer medium, a coolant by definition, refrigerant code R744.”

    NASA agrees except, oddly, via NCAR, for the troposphere Andy (see articles quoted below, and especially the NCAR video in the second article). The NASA articles also largely disagree with the University of Virginia MAE 494 – Special Topics in Aerospace Engineering Part 3 in respect to CO2 (see below).

    I’ve been reading up on EUV heating of the thermosphere. No CO2 there, so heat can’t radiate: 1000 K at TOA but 1400 K at 11-yr solar max. The temperature gradient goes down to the mesopause, where there are IR-radiating gases, mainly CO2. The heat moves down to the mesopause by conduction and from there is radiated to space – no “increasing” GHG “radiation trap” whatsoever. And this is massive amounts of energy (see below).

    University of Virginia MAE 494 – Special Topics in Aerospace Engineering Part 3

    The Thermosphere [heating and dissipation]

    1. EUV Heats

    2. At Lower Altitudes : Cooling
    # N2 , O2 and O Cannot Radiate IR
    # No CO2 ( diffusive separation )
    # Must Conduct Deposited Heat to a Region Containing CO2 , H2O , O3 etc. Mesopause (Roughly)

    [see diagram – “rough picture”]

    http://people.virginia.edu/~rej/MAE494/Part-3-07.pdf

    Now NASA.

    ‘Solar Storm Dumps Gigawatts into Earth’s Upper Atmosphere’

    NASA Science News March 22, 2012

    A recent flurry of eruptions on the sun did more than spark pretty auroras around the poles. NASA-funded researchers say the solar storms of March 8th through 10th dumped enough energy in Earth’s upper atmosphere to power every residence in New York City for two years.

    “This was the biggest dose of heat we’ve received from a solar storm since 2005,” says Martin Mlynczak of NASA Langley Research Center. “It was a big event, and shows how solar activity can directly affect our planet.”

    Mlynczak is the associate principal investigator for the SABER instrument onboard NASA’s TIMED satellite. SABER monitors infrared emissions from Earth’s upper atmosphere, in particular from carbon dioxide (CO2) and nitric oxide (NO), two substances that play a key role in the energy balance of air hundreds of km above our planet’s surface.

    “Carbon dioxide and nitric oxide are natural thermostats,” explains James Russell of Hampton University, SABER’s principal investigator. “When the upper atmosphere (or ‘thermosphere’) heats up, these molecules try as hard as they can to shed that heat back into space.”

    That’s what happened on March 8th when a coronal mass ejection (CME) propelled in our direction by an X5-class solar flare hit Earth’s magnetic field. (On the “Richter Scale of Solar Flares,” X-class flares are the most powerful kind.) Energetic particles rained down on the upper atmosphere, depositing their energy where they hit. The action produced spectacular auroras around the poles and significant upper atmospheric heating all around the globe.

    “The thermosphere lit up like a Christmas tree,” says Russell. “It began to glow intensely at infrared wavelengths as the thermostat effect kicked in.”

    For the three day period, March 8th through 10th, the thermosphere absorbed 26 billion kWh of energy. Infrared radiation from CO2 and NO, the two most efficient coolants in the thermosphere, re-radiated 95% of that total back into space.

    In human terms, this is a lot of energy. According to the New York City mayor’s office, an average NY household consumes just under 4700 kWh annually. This means the geomagnetic storm dumped enough energy into the atmosphere to power every home in the Big Apple for two years.

    “Unfortunately, there’s no practical way to harness this kind of energy,” says Mlynczak. “It’s so diffuse and out of reach high above Earth’s surface. Plus, the majority of it has been sent back into space by the action of CO2 and NO.”

    During the heating impulse, the thermosphere puffed up like a marshmallow held over a campfire, temporarily increasing the drag on low-orbiting satellites. This is both good and bad. On the one hand, extra drag helps clear space junk out of Earth orbit. On the other hand, it decreases the lifetime of useful satellites by bringing them closer to the day of re-entry.

    The storm is over now, but Russell and Mlynczak expect more to come.

    “We’re just emerging from a deep solar minimum,” says Russell. “The solar cycle is gaining strength with a maximum expected in 2013.”

    More sunspots flinging more CMEs toward Earth adds up to more opportunities for SABER to study the heating effect of solar storms.

    “This is a new frontier in the sun-Earth connection,” says Mlynczak, “and the data we’re collecting are unprecedented.”

    http://science.nasa.gov/science-news/science-at-nasa/2012/22mar_saber/
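    A quick check of the arithmetic in the NASA article quoted above, using only the figures it supplies (26 billion kWh absorbed, just under 4700 kWh per NYC household per year). The implied household count is my back-calculation from the article's "two years" claim, not a figure from the article:

```python
# Figures as quoted in the NASA article above
absorbed_kwh = 26e9              # energy absorbed by thermosphere, 8-10 March 2012
household_kwh_per_year = 4700    # average NYC household consumption (mayor's office)

household_years = absorbed_kwh / household_kwh_per_year
implied_households = household_years / 2   # article claims "two years"
print(round(household_years / 1e6, 1), "million household-years")
print(round(implied_households / 1e6, 2), "million households for two years")
```

    The division gives about 5.5 million household-years, i.e. the "every home in the Big Apple for two years" claim implies roughly 2.8 million households, a plausible round figure for New York City.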

    ‘Extreme ultraviolet spectral irradiance measurements since 1946’
    G. Schmidtke (2015)

    See Fig 23 (c) in particular

    Figure 23. (a) Presentation of the F10.7 index from 2002 to 2010. (b) Measured global total electron content from 2002–2010. (c) EUV energy deposited in the thermosphere from 2002–2010. (All panels: Unglaub et al., 2012; copyright permission granted by University of Leipzig.)
    http://www.hist-geo-space-sci.net/6/3/2015/hgss-6-3-2015.pdf

    An EUV series is not available (to my knowledge) but the F10.7 Radio Flux profile is a rough proxy:

    Solar EUV radiation
    http://www.cbk.waw.pl/~jsokol/solarEUV.html

    Scroll down to F10.7 cm flux time series since 1990:
    http://www.cbk.waw.pl/~jsokol/solarParamsModel/plots/fig107Series.jpg

    NASA again.

    ‘A Puzzling Collapse of Earth’s Upper Atmosphere’

    NASA Science News July 15, 2010

    NASA-funded researchers are monitoring a big event in our planet’s atmosphere. High above Earth’s surface where the atmosphere meets space, a rarefied layer of gas called “the thermosphere” recently collapsed and now is rebounding again.

    “This is the biggest contraction of the thermosphere in at least 43 years,” says John Emmert of the Naval Research Lab, lead author of a paper announcing the finding in the June 19th issue of the Geophysical Research Letters (GRL). “It’s a Space Age record.”

    The collapse happened during the deep solar minimum of 2008-2009—a fact which comes as little surprise to researchers. The thermosphere always cools and contracts when solar activity is low. In this case, however, the magnitude of the collapse was two to three times greater than low solar activity could explain.

    “Something is going on that we do not understand,” says Emmert.

    The thermosphere ranges in altitude from 90 km to 600+ km. It is a realm of meteors, auroras and satellites, which skim through the thermosphere as they circle Earth. It is also where solar radiation makes first contact with our planet. The thermosphere intercepts extreme ultraviolet (EUV) photons from the sun before they can reach the ground. When solar activity is high, solar EUV warms the thermosphere, causing it to puff up like a marshmallow held over a camp fire. (This heating can raise temperatures as high as 1400 K—hence the name thermosphere.) When solar activity is low, the opposite happens.

    Lately, solar activity has been very low. In 2008 and 2009, the sun plunged into a century-class solar minimum. Sunspots were scarce, solar flares almost non-existent, and solar EUV radiation was at a low ebb. Researchers immediately turned their attention to the thermosphere to see what would happen.

    These plots show how the density of the thermosphere (at a fiducial height of 400 km) has waxed and waned during the past four solar cycles. Frames (a) and (c) are density; frame (b) is the sun’s radio intensity at a wavelength of 10.7 cm, a key indicator of solar activity. Note the yellow circled region. In 2008 and 2009, the density of the thermosphere was 28% lower than expectations set by previous solar minima. Credit: Emmert et al. (2010), Geophys. Res. Lett., 37, L12102.

    How do you know what’s happening all the way up in the thermosphere?

    Emmert uses a clever technique: Because satellites feel aerodynamic drag when they move through the thermosphere, it is possible to monitor conditions there by watching satellites decay. He analyzed the decay rates of more than 5000 satellites ranging in altitude between 200 and 600 km and ranging in time between 1967 and 2010. This provided a unique space-time sampling of thermospheric density, temperature, and pressure covering almost the entire Space Age. In this way he discovered that the thermospheric collapse of 2008-2009 was not only bigger than any previous collapse, but also bigger than the sun alone could explain.

    One possible explanation is carbon dioxide (CO2).

    When carbon dioxide gets into the thermosphere, it acts as a coolant, shedding heat via infrared radiation. It is widely-known that CO2 levels have been increasing in Earth’s atmosphere. Extra CO2 in the thermosphere could have magnified the cooling action of solar minimum.

    “But the numbers don’t quite add up,” says Emmert. “Even when we take CO2 into account using our best understanding of how it operates as a coolant, we cannot fully explain the thermosphere’s collapse.”

    According to Emmert and colleagues, low solar EUV accounts for about 30% of the collapse. Extra CO2 accounts for at least another 10%. That leaves as much as 60% unaccounted for.

    In their GRL paper, the authors acknowledge that the situation is complicated. There’s more to it than just solar EUV and terrestrial CO2. For instance, trends in global climate could alter the composition of the thermosphere, changing its thermal properties and the way it responds to external stimuli. The overall sensitivity of the thermosphere to solar radiation could actually be increasing.

    “The density anomalies,” they wrote, “may signify that an as-yet-unidentified climatological tipping point involving energy balance and chemistry feedbacks has been reached.”

    Or not.

    Important clues may be found in the way the thermosphere rebounds. Solar minimum is now coming to an end, EUV radiation is on the rise, and the thermosphere is puffing up again. Exactly how the recovery proceeds could unravel the contributions of solar vs. terrestrial sources.

    “We will continue to monitor the situation,” says Emmert.

    For more information see Emmert, J. T., J. L. Lean, and J. M. Picone (2010), Record-low thermospheric density during the 2008 solar minimum, Geophys. Res. Lett., 37, L12102.

    http://science.nasa.gov/science-news/science-at-nasa/2010/15jul_thermosphere/

    # # #

    1) The NASA accounts differ radically from The Thermosphere – University of Virginia MAE 494 – Special Topics in Aerospace Engineering Part 3

    2) Much to be learned from the thermosphere re CO2 – but not necessarily by NASA press release.

  56. Richard C (NZ) on September 4, 2015 at 12:47 am said:

    NASA Science News

    David Hathaway. “During solar minimum, the gas temperature in the thermosphere is around 700 °C. That’s high, but not nearly as high as the temperature during Solar Max. When the Sun is active, high levels of solar EUV raise the temperature of the thermosphere all the way to 1,500 °C.”

    “The extreme ultraviolet photons that heat the thermosphere aren’t the same as the UV rays that give you sunburns,” says Dr. Judith Lean, a physicist at the US Naval Research Labs. “They are much worse. Sunburns come from the UV-A and UV-B bands around 3000 Angstroms. The photons that heat the thermosphere are at least 10 times more energetic and they vary 100 times more [between solar minimum and solar maximum]. It’s a good thing they’re all absorbed by nitrogen and oxygen at high altitudes — otherwise a day at the beach would be no fun.”

    If the thermosphere is so hot, wouldn’t astronauts feel uncomfortably warm during space walks?

    No, says Hathaway. The air up there is so tenuous that you can’t really feel the heat. In fact, it’s so thin that scientists can’t even measure the temperature directly. Instead, they put orbital decay to good use by monitoring the drag on satellites to estimate the density of the rarefied air. Then they can use the density to calculate the temperature — proof that every cloud has a silver lining!

    http://science.nasa.gov/science-news/science-at-nasa/2000/ast30may_1m/

    # # #

    University of Virginia MAE 494 – Special Topics in Aerospace Engineering Part 3 – The Thermosphere

    “Thermosphere T depends on Solar Activity.”

    But we never read from climate science,

    “Ocean T depends on Solar activity”

    Even though solar IR-A and IR-B are 1000 times more energetic than IR-C from the air.

  57. I did see a graph from the Met Office some time back that showed the energy imbalance as trendless for 10 years or more.

    Shame I didn’t bookmark it.

  58. Richard C (NZ) on September 4, 2015 at 11:09 am said:

    >”I did see a graph from the Met Office some time back that showed the energy imbalance as trendless for 10 years or more.”

    It might have been a paper by Richard Allan, University of Reading. Scroll down to:

    ‘Changes in Earth’s radiative energy balance 1985-2010’
    (2011)
    http://www.met.reading.ac.uk/~sgs02rpa/latest.html
