Climate Models

This thread is for discussion of computer climate models, or General Circulation Models (GCMs).

50 Thoughts on “Climate Models”

  1. Bob D on 16/09/2012 at 11:47 am said:

    Well, well, it looks like someone got the models wrong again.

    How often have we heard that droughts will increase due to global warming? It’s the single most-quoted effect that alarmists use when discussing Africa, for example.

    Seems they were wrong.
    http://www.nature.com/nature/journal/vaop/ncurrent/full/nature11377.html

    We find no evidence in our analysis of a positive feedback—that is, a preference for rain over wetter soils—at the spatial scale (50–100 kilometres) studied. In contrast, we find that a positive feedback of soil moisture on simulated precipitation does dominate in six state-of-the-art global weather and climate models—a difference that may contribute to excessive simulated droughts in large-scale models.

    This is why these blokes should have checked their models before shouting about the end of the world.

  2. Richard C (NZ) on 18/09/2012 at 8:51 pm said:

    New paper shows negative feedback from clouds ‘may damp global warming’

    A paper published today in The Journal of Climate uses a combination of two modelling techniques to find that negative feedback from clouds could result in “a 2.3-4.5% increase in [model projected] cloudiness” over the next century, and that “subtropical stratocumulus [clouds] may damp global warming in a way not captured by the [Global Climate Models] studied.” This strong negative feedback from clouds could alone negate the 3C alleged anthropogenic warming projected by the IPCC.

    As Dr. Roy Spencer points out in his book,

    “The most obvious way for warming to be caused naturally is for small, natural fluctuations in the circulation patterns of the atmosphere and ocean to result in a 1% or 2% decrease in global cloud cover. Clouds are the Earth’s sunshade, and if cloud cover changes for any reason, you have global warming — or global cooling.”

    According to the authors of this new paper, current global climate models “predict a robust increase of 0.5-1 K in EIS over the next century, resulting in a 2.3-4.5% increase in [mixed layer model] cloudiness.”

    EIS, or estimated inversion strength, has been shown by observations to be correlated with cloudiness, as demonstrated by the second graph at the link (from the University of Washington), indicating that a 1 K increase in EIS results in an approximate 4-5% increase in low cloud cover [CF, or cloud fraction]. Thus, a combination of observational data and modelling indicates clouds have a strong net negative feedback upon global warming that is “not captured” by current climate models.

    CMIP3 Subtropical Stratocumulus Cloud Feedback Interpreted Through a Mixed-Layer Model

    PETER M. CALDWELL,* YUNYAN ZHANG, and STEPHEN A. KLEIN

    >>>>>>>

    http://hockeyschtick.blogspot.co.nz/2012/09/new-paper-shows-negative-feedback-from.html

  3. Richard C (NZ) on 18/10/2012 at 7:54 am said:

    Climate change research gets petascale supercomputer

    1.5-petaflop IBM Yellowstone system runs 72,288 Intel Xeon cores

    Computerworld – Scientists studying Earth system processes, including climate change, are now working with one of the largest supercomputers on the planet.

    The National Center for Atmospheric Research (NCAR) has begun using a 1.5 petaflop IBM system, called Yellowstone, that is among the top 20 supercomputers in the world, at least until the global rankings are updated next month.

    For NCAR researchers it is an enormous leap in compute capability — a roughly 30 times improvement over its existing 77 teraflop supercomputer. Yellowstone is a 1,500 teraflops system capable of 1.5 quadrillion calculations per second.

    The NCAR-Wyoming Supercomputing Center in Cheyenne, where this system is housed, says that with Yellowstone, it now has “the world’s most powerful supercomputer dedicated to geosciences.”

    Along with climate change, this supercomputer will be used on a number of geoscience research issues, including the study of severe weather, oceanography, air quality, geomagnetic storms, earthquakes and tsunamis, wildfires, subsurface water and energy resources.

    […]

    Scientists will be able to use the supercomputer to model the regional impacts of climate change. A model with a grid resolution of 100 km (62 miles) is considered coarse because each grid cell covers a large area. But this new system may be able to refine the grid to as fine as 10 km (6.2 miles), giving scientists the ability to examine climate impacts in greater detail.

    […]

    Yellowstone is running in a new $70 million data center. The value of the supercomputer contract was put at $25 million to $35 million. It has 100 racks, with 72,288 compute cores from Intel Sandy Bridge processors.

    >>>>>>>>

    http://www.computerworld.com/s/article/9232382/Climate_change_research_gets_petascale_supercomputer

    Rather large energy requirement too – “The facility was designed with a total capacity of 4 to 5 megawatts of electricity, but with Yellowstone now in production, usage is considerably lower. Total power for computing, cooling, office, and support functions has averaged 1.8 to 2.1 MW”

    NCAR-Wyoming Supercomputing Center
    Fact Sheet

    https://www2.ucar.edu/atmosnews/news/nwsc-fact-sheet

  4. Richard C (NZ) on 18/10/2012 at 8:39 am said:

    I queried John Christy as to which modeling group it was that had mimicked absolute temperature and trajectory this century so far in his EPS statement Figure 2.1. This was his reply:-

    Richard:

    This model labeled 27 should be inmcm4 (Russia)

    http://www.springerlink.com/content/x6647x575g82734j/

    John C.

    John R. Christy
    Director, Earth System Science Center
    Distinguished Professor, Atmospheric Science
    University of Alabama in Huntsville
    Alabama State Climatologist

    • Richard C (NZ) on 18/10/2012 at 8:55 am said:

      What did the Russians do that everyone else didn’t in CMIP5 for AR5? Did they ramp GHG forcing down to zero, I wonder? They do say there were “some changes in the formulation”.

      Abstract

      The INMCM3.0 climate model has formed the basis for the development of a new climate-model version: the INMCM4.0. It differs from the previous version in that there is an increase in its spatial resolution and some changes in the formulation of coupled atmosphere-ocean general circulation models. A numerical experiment was conducted on the basis of this new version to simulate the present-day climate. The model data were compared with observational data and the INMCM3.0 model data. It is shown that the new model adequately reproduces the most significant features of the observed atmospheric and oceanic climate. This new model is ready to participate in the Coupled Model Intercomparison Project Phase 5 (CMIP5), the results of which are to be used in preparing the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC).

      # # #

      Good to see a modeling group validating their model against observations (a GCM group, that is; RTM groups do this religiously) – this is a major breakthrough.

    • Richard C (NZ) on 18/10/2012 at 1:29 pm said:

      Simulating Present-Day Climate with the INMCM4.0 Coupled Model of the Atmospheric and Oceanic General Circulations

      E. M. Volodin, N. A. Dianskii, and A. V. Gusev, 2010

      Institute of Numerical Mathematics, Russian Academy of Sciences, ul. Gubkina 8, Moscow, 119991 Russia
      e-mail: volodin@inm.ras.ru

      http://83.149.207.89/GCM_DATA_PLOTTING/documents/PhysAtm4_10VolodinLO.pdf

      Page 2:-

      This makes it possible to analyze systematic errors in simulating the present-day climate and to assess the range of its possible changes caused, for example, by anthropogenic forcing.

      Page 3:-

      On the basis of this model, a numerical experiment was carried out to simulate the modern climate. To this end, the concentrations specified for all radiatively active gases and aerosols corresponded to those in 1960.

      Page 4:-

      Name: Air temperature at the surface °C
      Observations: 14.0 ± 0.2 [34]
      INMCM3.0: 13.0 ± 0.1
      INMCM4.0: 13.7 ± 0.1

      Page 4:-

      The 1951–2000 NCEP reanalysis data [31] were used to compare the model atmospheric dynamics with observational data, and data from [32–41] were used to compare the integral atmospheric characteristics.

      Page 3:-

      The parameterizations of the basic physical processes in the model have changed only slightly; namely, some of the tuning parameters have changed. Among these are the parameterizations of radiation [18],

      18. V. Ya. Galin, “Parametrization of Radiative Processes
      in the DNM Atmospheric Model,”

      Galin, V.Y. [Russian Academy of Sciences, Moscow (Russian Federation)]
      1998

      https://www.etde.org/etdeweb/details_open.jsp?osti_id=300295

      Abstract:
      The radiative code of the atmospheric model (DNM model) of the Institute of Numerical Mathematics (IVM), Russian Academy of Sciences is described. The code uses spectral transmission functions and the delta-Eddington approximation to take into account the absorption and scattering of radiation in the atmosphere due to atmospheric gases, aerosols, and clouds. The simplest regularization procedure in combination with the nonmonotonic factorization method is used to find a stable solution to the ill-conditioned system of delta-Eddington equations. Computation algorithms are presented, and the results obtained are compared to both the data of benchmark line-by-line calculations and the model data of ICRCCM international radiative programs. It was found that the DNM model yields a high accuracy of computing the thermal and solar radiation.

      # # #

      Unfortunately I can’t access the body of the Galin paper. Unfortunate because the “absorption and scattering” characteristics of CO2 used (and any changes made in INMCM4.0) would make VERY interesting reading.

    • Richard C (NZ) on 18/10/2012 at 3:43 pm said:

      Section 5.2, “Heat emission”, on page 43 of Volodin, Dianskii, and Gusev gives the formulae and the share of emission across the spectrum, and references tables of coefficients.

      http://83.149.207.89/GCM_DATA_PLOTTING/documents/modelen.pdf

    • Richard C (NZ) on 18/10/2012 at 4:02 pm said:

      Description of the CCM INM RAS and model experiments

      Description of the atmospheric climate model inmcm4.0.(new) [hotlink]

      Short description of the coupled climate model inmcm3.0 and model experiments. [hotlink]

      Timetable of the model experiments.

      Selected publications [hotlinked]

      Volodin E.M., Diansky N.A.. “Prediction of the climate change in 19-22th centuries using coupled climate model”.

      Volodin E.M., Diansky N.A. “ENSO reconstruction in the Coupled Climate Model”.

      Volodin E.M.”Simulation of the modern climate. Comparison with observations and data of other climate models”.

      Volodin E.M. “Reliability of the future climate change forecasts”.

      Volodin E.M., Diansky N.A., Gusev A.V. “Simulating Present Day Climate with the INMCM4.0 Coupled Model of the Atmospheric and Oceanic General Circulations”.(new)

      Volodin E.M. “Atmosphere-Ocean General Circulation Model with the Carbon Cycle”.(new)

      http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_en.html

  5. Andy on 24/10/2012 at 10:49 am said:

    “Pinatubo Climate Sensitivity and Two Dogs that didn’t bark in the night”

    Interesting article on climate sensitivity over at Lucia’s

    http://rankexploits.com/musings/2012/pinatubo-climate-sensitivity-and-two-dogs-that-didnt-bark-in-the-night/

    • Richard C (NZ) on 24/10/2012 at 12:47 pm said:

      Lucia’s blog analysis makes Nuccitelli et al’s DK12 Comment look somewhat ordinary.

      For about two years of data and “a single ocean heat capacity model” (one heat sink), Lucia’s model “is “seeing” an ocean capacity of 53 watt-months/deg C/m2 – equivalent to about 30 to 40m water depth”. Further down the page, the model is “(still) “seeing” a total ocean heat capacity corresponding to about the top 30-40m of ocean”. This is for 60S to 60N only.

      According to Nuccitelli et al, that’s all “noise” and 5 yr smoothed data should be used down to 2000m.

      Can’t say I’m convinced by globally averaged approximations for these calculations. I think the 0-GCM approach, using observed ocean heat climatology (which one?) matched cell-by-cell to TOA satellite observations, is about the only way to arrive at anything anywhere near meaningful. Not that I follow all of it at Lucia’s level.
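
      For anyone wanting to sanity-check the “watt-months per deg C per m2” figure against an equivalent mixed-layer depth, a minimal sketch (assuming standard seawater properties; the 53 figure and the 30-40 m range are Lucia’s, not mine):

```python
# Convert an effective ocean heat capacity quoted in W-months/(degC m^2)
# into an equivalent depth of well-mixed seawater. Seawater density and
# specific heat are assumed values; the 53 figure is from Lucia's post.

SECONDS_PER_MONTH = 365.25 * 24 * 3600 / 12    # ~2.63e6 s
RHO_SEAWATER = 1025.0                          # kg/m^3 (assumed)
CP_SEAWATER = 3990.0                           # J/(kg K) (assumed)

def equivalent_depth(capacity_w_months_per_k_m2):
    """Equivalent seawater depth (m) for a heat capacity given in
    watt-months per degC per m^2."""
    joules_per_k_per_m2 = capacity_w_months_per_k_m2 * SECONDS_PER_MONTH
    return joules_per_k_per_m2 / (RHO_SEAWATER * CP_SEAWATER)

print(round(equivalent_depth(53.0), 1))   # ~34 m, consistent with "30 to 40 m"
```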

  6. Richard C (NZ) on 31/12/2012 at 5:21 pm said:

    AR5 Chapter 11; Hiding the Decline (Part II)

    http://wattsupwiththat.com/2012/12/30/ar5-chapter-11-hiding-the-decline-part-ii/#more-76591

    Figure 11.33: Synthesis of near-term projections of global mean surface air temperature. a), b) and c):-

    http://wattsupwiththat.files.wordpress.com/2012/12/image_thumb1.png?w=936&h=1143

    They hid the decline! In the first graph, observational data ends about 2011 or 2012. In the second graph, though, it ends about 2007 or 2008. There are four or five years of observational data missing from the second graph. Fortunately the two graphs are scaled identically, which makes it very easy to use a highly sophisticated tool called “cut and paste” to move the observational data from the first graph to the second graph and see what it should have looked like:

    http://wattsupwiththat.files.wordpress.com/2012/12/image_thumb2.png?w=939&h=414

    Well, oops. Once one brings the observational data up to date, it turns out that we are currently below the entire range of models in the 5% to 95% confidence range across all emission scenarios. The light gray shading is for RCP 4.5, the most likely emission scenario. But we’re also below the dark gray, which covers all emission scenarios for all models, including the ones where we strangle the global economy.

    + + +

    Also John Christy’s preliminary plot (incomplete) of CMIP5 RCP4.5 vs observations (UAH/RSS):-

    http://curryja.files.wordpress.com/2012/07/christy-fig.jpg?w=808&h=622

  7. Richard C (NZ) on 03/02/2013 at 11:34 am said:

    The controversy

    by Anastassia Makarieva, Victor Gorshkov, Douglas Sheil, Antonio Nobre, Larry Li

    Thanks to help from blog readers, those who visited the ACPD site and many others who we have communicated with, our paper has received considerable feedback. Some were supportive and many were critical. Some have accepted that the physical mechanism is valid, though some (such as JC) question its magnitude and some are certain it is incorrect (but cannot find the error). Setting aside these specific issues, most of the more general critical comments can be classified as variations on, and combinations of, three basic statements:

    1. Current weather and climate models (a) are already based on physical laws and (b) satisfactorily reproduce observed patterns and behaviour. By inference, it is unlikely that they miss any major processes.

    2. You should produce a working model more effective than current models.

    3. Current models are comprehensive: your effect is already there.

    Let’s consider these claims one by one.

    Models and physical laws

    […]

    Thus, while there are physical laws in existing models, their outputs (including apparent circulation power) reflect an empirical process of calibration and fitting. In this sense models are not based on physical laws. This is the reason why no theoretical estimate of the power of the global atmospheric circulation system has been available until now.

    The models reproduce the observations satisfactorily

    As we have discussed in our paper (p. 1046), current models fail when it comes to describing many water-related phenomena. But perhaps a more important point to make here is that even where behaviours are satisfactorily reproduced, it would not mean that the physical basis of the model is correct. Indeed, any phenomenon that repeats itself can be formally described or “predicted” completely without understanding its physical nature.

    […]

    For example, a climate model empirically fitted for a forest-covered continent cannot inform us about the climatic consequences of deforestation if we do not correctly understand the underlying physical mechanisms.

    You should produce a better model than the existing ones

    […]

    To expect that a few theorists, however keen, can achieve that is neither reasonable nor realistic. We have invested our efforts to show, using suitable physical estimates, that the effect we describe is sufficient to justify a wider and deeper scrutiny. (At the same time we are also developing a number of texts to show how current models in fact contain erroneous physical relationships (see, e.g., here)).

    Your effect is already present in existing models

    Many commentators believe that the physics we are talking about is already included in models. There is no omission. This argument assumes that if the processes of condensation and precipitation are reproduced in models, then the models account for all the related phenomena, including pressure gradients and dynamics. This is, however, not so. Indeed this is not merely an oversight but an impossibility. The explanation is interesting and deserves recognition – so we shall use this opportunity to explain.

    […]

    In current models in the absence of a theoretical stipulation on the circulation power, a reverse logic is followed. The horizontal pressure gradients are determined from the continuity equation, with the condensation rate calculated from the Clausius-Clapeyron law using temperature derived from the first law of thermodynamics with empirically fitted turbulence. However, as we have seen, to correctly reproduce condensation-induced dynamics, condensation rate requires an accuracy much greater than γ << 1. Meanwhile the imprecision of the first law of thermodynamics as applied to describe the non-equilibrium atmospheric dynamics is precisely of the same order of γ. The kinetic energy of the gas is not accounted for in equilibrium thermodynamics.

    […]

    Summary and outlook

    The Editor’s comment on our paper ends with a call to further evaluate our proposals. We second this call. The reason we wrote this paper was to ensure it entered the mainstream and gained recognition. For us the key implication of our theory is the major importance of vegetation cover in sustaining regional climates. If condensation drives atmospheric circulation as we claim, then forests determine much of the Earth’s hydrological cycle (see here for details). Forest cover is crucial for the terrestrial biosphere and the well-being of many millions of people. If you acknowledge, as the editors of ACP have, any chance – however large or small – that our proposals are correct, then we hope you concede that there is some urgency that these ideas gain clear objective assessment from those best placed to assess them.

    http://judithcurry.com/2013/01/31/condensation-driven-winds-an-update-new-version/

  8. Richard C (NZ) on 30/04/2013 at 4:33 pm said:

    New paper finds IPCC climate models unable to reproduce solar radiation at Earth’s surface

    A new paper published in the Journal of Geophysical Research – Atmospheres finds the latest generation of IPCC climate models were unable to reproduce the global dimming of sunshine from the ~ 1950s-1980s, followed by global brightening of sunshine during the 1990’s. These global dimming and brightening periods explain the observed changes in global temperature over the past 50-60 years far better than the slow steady rise in CO2 levels. The authors find the models underestimated dimming by 80-85% in comparison to observations, underestimated brightening in China and Japan as well, and that “no individual model performs particularly well for all four regions” studied. Dimming was underestimated in some regions by up to 7 Wm-2 per decade, which by way of comparison is 25 times greater than the alleged CO2 forcing of about 0.28 Wm-2 per decade. The paper demonstrates climate models are unable to reproduce the known climate change of the past, much less the future, that the forcing from changes in solar radiation at the Earth surface is still far from being understood and dwarfs any alleged effect of increased CO2.

    ‘Evaluation of multidecadal variability in CMIP5 surface solar radiation and inferred underestimation of aerosol direct effects over Europe, China, Japan and India’

    R. J. Allen, J. R. Norris, M. Wild

    DOI: 10.1002/jgrd.50426

    http://hockeyschtick.blogspot.co.nz/2013/04/new-paper-finds-ipcc-climate-models.html

  9. Richard C (NZ) on 19/05/2013 at 12:39 pm said:

    ‘Global warming slowdown retrospectively “predicted” ‘

    By Ashutosh Jogalekar

    When I was in graduate school I once came across a computer program that’s used to predict the activities of as yet unsynthesized drug molecules. The program is “trained” on a set of existing drug molecules with known activities (the “training set”) and is then used to predict those of an unknown set (the “test set”). In order to make learning the ropes of the program more interesting, my graduate advisor set up a friendly contest between me and a friend in the lab. We were each given a week to train the program on an existing set and find out how well we could do on the unknowns.

    After a week we turned in our results. I actually did better than my friend on the existing set, but my friend did better on the test set. From a practical perspective his model had predictive value, a key property of any successful model. On the other hand my model was one that still needed some work. Being able to “predict” already existing data is not prediction, it’s explanation. Explanation is important, but a model such as mine that merely explained what was already known is an incomplete model since the value and purpose of a truly robust model is prediction. In addition, a model that merely explains can be made to fit the data by tweaking its parameters with the known experimental numbers.

    These are the thoughts that went through my mind as I read a recent paper from Nature Climate Change in which climate change modelers “predicted” the last ten years of global temperature stagnation.

    […]

    This kind of retrospective calculation is a standard part of model building. But let’s not call it a “prediction”, it’s actually a “postdiction”. The present study indicates that models used for predicting temperature changes need some more work, especially when dealing with tightly coupled complex systems such as ocean sinks. In addition you cannot simply make these models work by tweaking the parameters; the problem with this approach is that it risks condemning the models to a narrow window of applicability beyond which they will lack the flexibility to take sudden changes into account. A robust model is one with a minimal number of parameters which does not need to be constantly tweaked to explain what has already happened and which is as general as possible. Current climate models are not useless, but in my opinion the fact that they could not prospectively predict the temperature stagnation implies that they lack robustness. They should really be seen as “work in progress”.

    I can also see how such a study will negatively affect the public image of global warming. People are usually not happy with prediction after the fact……..

    >>>>>>

    http://blogs.scientificamerican.com/the-curious-wavefunction/2013/05/15/global-warming-slowdown-retrospectively-predicted/
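
    # # #

    Jogalekar’s training-set/test-set point is easy to demonstrate numerically. A minimal sketch (generic curve fitting, nothing to do with his drug-activity program or any particular climate model): a many-parameter fit can be tuned to “postdict” the calibration data almost perfectly while doing worse than a simple fit on data it has never seen.

```python
import numpy as np

# Overfitting illustration: a many-parameter fit "explains" the training
# data better but "predicts" held-out later data worse than a simple fit.
rng = np.random.default_rng(42)

x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, x.size)    # true process: linear + noise

train, test = slice(0, 30), slice(30, 40)          # fit on early points, test on later ones

def train_test_rmse(order):
    coeffs = np.polyfit(x[train], y[train], order)   # tune parameters on the training set
    pred = np.polyval(coeffs, x)
    return (np.sqrt(np.mean((pred[train] - y[train]) ** 2)),
            np.sqrt(np.mean((pred[test] - y[test]) ** 2)))

for order in (1, 9):
    tr, te = train_test_rmse(order)
    print(f"polynomial order {order}: train RMSE {tr:.3f}, test RMSE {te:.3f}")
# The order-9 fit wins on the data it was tuned to and loses badly on the
# unseen data: explanation is not prediction.
```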

  10. Richard C (NZ) on 19/06/2013 at 2:46 pm said:

    ‘The “ensemble” of models is completely meaningless, statistically’

    Posted on June 18, 2013 by Anthony Watts

    This comment from rgbatduke, who is Robert G. Brown of the Duke University Physics Department, on the “No significant warming for 17 years 4 months” thread has gained quite a bit of attention [e.g. reproduced by Dr Judith Curry at Climate Etc] because it speaks clearly to truth. So that all readers can benefit, I’m elevating it to a full post

    rgbatduke says:

    June 13, 2013 at 7:20 am

    http://wattsupwiththat.com/2013/06/18/the-ensemble-of-models-is-completely-meaningless-statistically/

    Last two paragraphs:

    “It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

    Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.”
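
    # # #

    Mechanically, Brown’s “five minutes” sort is nothing more than ranking runs by how far their hindcast trend sits from the observed trend and keeping the closest fraction. A rough sketch with made-up numbers (the real exercise would use the observed record and the actual CMIP ensemble):

```python
import numpy as np

def keep_best_fraction(obs_trend, model_trends, frac=0.5):
    """Rank model runs by |trend - observed trend| and keep the closest fraction."""
    errors = np.abs(np.asarray(model_trends) - obs_trend)
    n_keep = max(1, int(len(model_trends) * frac))
    return np.argsort(errors)[:n_keep]          # indices of the retained models

# synthetic stand-ins only: "observed" ~0.11 K/decade, ensemble centred near 0.25
rng = np.random.default_rng(7)
obs_trend = 0.11
model_trends = rng.normal(0.25, 0.08, 30)

kept = keep_best_fraction(obs_trend, model_trends, frac=0.5)
print(f"full-ensemble mean trend: {model_trends.mean():.2f} K/decade")
print(f"best-half mean trend:     {model_trends[kept].mean():.2f} K/decade")
# Culling by agreement with observations pulls the ensemble-implied warming
# rate down, which is the effect the comment above describes.
```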

  11. Richard C (NZ) on 24/06/2013 at 7:21 pm said:

    One of the first jobs for NIWA’s High Performance Computing Facility (HPCF) was snow modeling partly funded by the Ski Areas Association of New Zealand:

    ‘New Zealand snow areas confident they will adapt to any risks from climate change’

    16 December 2010

    New climate modelling shows seasonal snow levels at New Zealand ski areas will be reduced by the effects of climate change in the coming years, but the good news is the loss may actually be less than originally anticipated and we should be able to continue to make snow, even under a more extreme climate scenario.

    http://www.niwa.co.nz/news/new-zealand-snow-areas-confident-they-will-adapt-any-risks-climate-change

    A lot less. I’ve just seen a newsclip from Mt Hutt (I think it was) where they were saying the 3m base was the most they had ever seen.

  12. Richard C (NZ) on 26/06/2013 at 7:06 pm said:

    ‘New Weather Service supercomputer faces chaos’

    By Steve Tracton

    The National Weather Service is currently in the process of transitioning its primary computer model, the Global Forecast System (GFS), from an old supercomputer to a brand new one [Weather and Climate Operational Supercomputer System (WCOSS)]. However, before the switch can be approved, the GFS model on the new computer must generate forecasts indistinguishable from the forecasts on the old one.

    One expects that ought not to be a problem, and to the best of my 30+ years of personal experience at the NWS, it has not been. But now, chaos has unexpectedly become a factor and differences have emerged in forecasts produced by the identical computer model but run on different computers.

    This experience closely parallels Ed Lorenz’s experiments in the 1960s, which led serendipitously to the development of chaos theory (aka the “butterfly effect”). What Lorenz found – to his complete surprise – was that forecasts run with the identical (simplistic) weather forecast model diverged from one another as forecast length increased, solely due to even minute differences inadvertently introduced into the starting analyses (“initial conditions”).

    […]

    So what lay behind the chaotic-like divergence of solutions between the identical GFS run on different computer systems? Simply speaking, the error in the model’s sequence of short range (3 hour) forecasts, which provide the “first guess” in assimilation of the latest observations, does not result in precisely the same initial conditions for the next pair of GFS extended range forecasts (see the schematic illustration in the original article).

    The differences in the simulations arise solely from exceedingly small, but apparently consequential differences in numerical calculations. These are associated with differences in the computer systems’ structure and logical organization (architecture) and compilers which translate programming codes (e.g., versions of Fortran) to machine language – and probably other factors way over my head to understand.

    >>>>>>>>

    http://www.washingtonpost.com/blogs/capital-weather-gang/wp/2013/06/25/new-weather-service-supercomputer-faces-chaos/
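
    # # #

    The Lorenz behaviour described above is easy to reproduce. A minimal sketch (the classic Lorenz-63 toy system with its standard parameters, not the GFS): two runs of the identical code, differing only in the tenth decimal place of one initial value, drift apart by many orders of magnitude.

```python
# Lorenz-63 toy system: two runs of the same equations with a tiny
# initial-condition difference diverge, illustrating the sensitivity
# described above. Standard Lorenz parameters; simple Euler stepping.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-10, 1.0, 1.0)   # perturbation far below any observable precision

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"step {step}: |x_a - x_b| = {abs(a[0] - b[0]):.3e}")
# The separation grows by many orders of magnitude even though the "model"
# is identical in both runs; only the starting state differs.
```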

  13. Richard C (NZ) on 28/06/2013 at 10:08 pm said:

    ‘Policy Implications of Climate Models on the Verge of Failure’

    By Paul C. Knappenberger and Patrick J. Michaels
    Center for the Study of Science, Cato Institute, Washington DC

    [converted from a poster displayed at the AGU Science Policy Conference, Washington, June 24-26]

    INTRODUCTION

    Assessing the consistency between real-world observations and climate model projections is a challenging problem but one that is essential prior to making policy decisions which depend largely on such projections. National and international assessments often mischaracterize the level of consistency between observations and projections. Unfortunately, policymakers are often unaware of this situation, which leaves them vulnerable to developing policies that are ineffective at best and dangerous at worst.

    Here, we find that at the global scale, climate models are on the verge of failing to adequately capture observed changes in the average temperature over the past 10 to 30 years—the period of the greatest human influence on the atmosphere. At the regional scale, specifically across the United States, climate models largely fail to replicate known precipitation changes both in sign as well as magnitude.

    […]

    CONCLUSIONS:

    It is impossible to present reliable future projections from a collection of climate models which generally cannot simulate observed change. As a consequence, we recommend that unless/until the collection of climate models can be demonstrated to accurately capture observed characteristics of known climate changes, policymakers should avoid basing any decisions upon projections made from them. Further, those policies which have already been established using projections from these climate models should be revisited.

    Assessments which suffer from the inclusion of unreliable climate model projections include those produced by the Intergovernmental Panel on Climate Change and the U.S. Global Climate Change Research Program (including the draft of their most recent National Climate Assessment). Policies which are based upon such assessments include those established by the U.S. Environmental Protection Agency pertaining to the regulation of greenhouse gas emissions under the Clean Air Act.

    http://wattsupwiththat.com/2013/06/27/policy-implications-of-climate-models-on-the-verge-of-failure/

    Re the EPA assessments, see ‘Amicus brief to the Supreme Court’ (filed May 23, 2013):

    https://www.climateconversation.org.nz/open-threads/climate/regions/usa/#comment-213937

  14. Richard C (NZ) on 12/07/2013 at 10:35 am said:

    ‘Climate change: The forecast for 2018 is cloudy with record heat’

    Efforts to predict the near-term climate are taking off, but their record so far has been patchy.

    * Jeff Tollefson

    In August 2007, Doug Smith took the biggest gamble of his career. After more than ten years of work with fellow modellers at the Met Office’s Hadley Centre in Exeter, UK, Smith published a detailed prediction of how the climate would change over the better part of a decade. His team forecasted that global warming would stall briefly and then pick up speed, sending the planet into record-breaking territory within a few years.

    The Hadley prediction has not fared particularly well. Six years on, global temperatures have yet to shoot up as it projected. Despite this underwhelming result, such near-term forecasts have caught on among many climate modellers, who are now trying to predict how global conditions will evolve over the next several years and beyond. Eventually, they hope to offer forecasts that will enable humanity to prepare for the decade ahead just as meteorologists help people to choose their clothes each morning.

    These near-term forecasts stand in sharp contrast to the generic projections that climate modellers typically produce, which look many decades ahead and don’t represent the actual climate at any given time. “This is very new to climate science,” says Francisco Doblas-Reyes, a modeller at the Catalan Institute of Climate Sciences in Barcelona, Spain, and a lead author of a chapter that covers climate prediction for a forthcoming report by the Intergovernmental Panel on Climate Change (IPCC). “We’re developing an additional tool that can tell us a lot more about the near-term future.”

    In preparation for the IPCC report, the first part of which is due out in September, some 16 teams ran an intensive series of decadal forecasting experiments with climate models. Over the past two years, a number of papers based on these exercises have been published, and they generally predict less warming than standard models over the near term. For these researchers, decadal forecasting has come of age. But many prominent scientists question both the results and the utility of what is, by all accounts, an expensive and time-consuming exercise.

    […]

    By starting in the present with actual conditions, Smith’s group hoped to improve the model’s accuracy at forecasting the near-term climate. The results looked promising at first. The model initially predicted temperatures that were cooler than those seen in conventional climate projections — a forecast that basically held true into 2008. But then the prediction’s accuracy faded sharply: the dramatic warming expected after 2008 has yet to arrive (see ‘Hazy view’). “It’s fair to say that the real world warmed even less than our forecast suggested,” Smith says. “We don’t really understand at the moment why that is.”

    […]

    Smith says that his group at the Hadley Centre has doubled the resolution of its model, which now breaks the planet into a grid with cells 150 kilometres on each side. Within a few years, he hopes to move to a 60-kilometre grid, which will make it easier to capture the connections between ocean activities and the weather that society is interested in. With improved models, more data and better statistics, he foresees a day when their models will offer up a probabilistic assessment of temperatures and perhaps even precipitation for the coming decade.

    In preparation for that day, he has set up a ‘decadal exchange’ to collect, analyse and publish annual forecasts. Nine groups used the latest climate models to produce ten-year forecasts beginning in 2011. An analysis of the ensemble shows much the same pattern as Smith’s 2007 prediction: temperatures start out cool and then rise sharply, and within the next few years, barring something like a volcanic eruption, record temperatures seem all but inevitable.

    “I wouldn’t be keen to bet on that at the moment,” Smith says, “but I do think we’re going to make some good progress within a few years.”

    http://www.nature.com/news/climate-change-the-forecast-for-2018-is-cloudy-with-record-heat-1.13344

    # # #

    No mention of UKMO’s Dec 2012 five-year forecast to 2017, but basically all these near-term model predictions “start out cool and then rise sharply” no matter what year they start them.

    I think they have a collective problem.

  15. Richard C (NZ) on 29/08/2013 at 2:42 pm said:

    Two GCM papers appear to be creating a “buzz” at present.

    First paper:

    ‘Recent global warming hiatus tied to equatorial Pacific surface cooling’

    Yu Kosaka and Shang-Ping Xie

    [Judith Curry] “….the same natural internal variability (primarily PDO) that is responsible for the pause is a major and likely dominant cause (at least at the 50% level) of the warming in the last quarter of the 20th century”

    http://judithcurry.com/2013/08/28/pause-tied-to-equatorial-pacific-surface-cooling/

    [John Michael Wallace of the University of Washington] “It argues that not only could the current hiatus in the warming be due to natural causes: so also could the rapidity of the warming from the 1970s until the late 1990s”

    http://www.climatecentral.org/news/new-study-ties-global-warming-hiatus-to-a-pacific-cooldown-16405

    Second paper:

    ‘Overestimated global warming over the past 20 years’

    Opinion & Comment by Fyfe, Gillett and Zwiers

    [Judith Curry] “Their conclusion This difference might be explained by some combination of errors in external forcing, model response and internal climate variability is right on the money IMO”

    http://judithcurry.com/2013/08/28/overestimated-global-warming-over-the-past-20-years/

    [The Hockey Schtick] “The authors falsify the models at a confidence level of 90%, and also find that there has been no statistically significant global warming for the past 20 years”

    http://hockeyschtick.blogspot.co.nz/2013/08/new-paper-finds-climate-models-have.html

    # # #

    “Pause”, “hiatus”, and “divergence” are now standard climatological terms in the literature, apparently.

    • Richard C (NZ) on 29/08/2013 at 4:36 pm said:

      Twitter / BigJoeBastardi: Now “climate researchers” will …

      Now “climate researchers” will want huge grants to tell us that when pdo warms in 20 years, warming will resume,after drop to late 70s temps

      Twitter / RyanMaue: Cold-phase of PDO means …

      Cold-phase of PDO means “hiatus/less/pause/plateau” of warming. We need a Nature article w/climate models to prove this?

      Twitter / RyanMaue: I already blamed lack of global …

      I already blamed lack of global TC activity from 2007-2012 on colder Pacific conditions. I thought it was so apparent to be non-publishable

      Twitter / BigJoeBastardi: The arrogance and ignorance …

      The arrogance and ignorance of these guys, now “discovering” what many have forecasted to happen due to cold PDO is stunning

      http://tomnelson.blogspot.co.nz/2013/08/links_1320.html

    • Richard C (NZ) on 29/08/2013 at 4:43 pm said:

      Tisdale re Kosaka and Xie:

      “Anyone with a little common sense who’s reading the abstract and the hype around the blogosphere and the Meehl et al papers will logically now be asking: if La Niña events can stop global warming, then how much do El Niño events contribute? 50%? The climate science community is actually hurting itself when they fail to answer the obvious questions.”

      http://wattsupwiththat.com/2013/08/28/another-paper-blames-enso-for-the-warming-hiatus/

      ‘Global warming pause caused by La Nina’

      The researchers said similar decade-long pauses could occur in future, but the longer-term warming trend was “very likely to continue with greenhouse gas increases”.

      Read more: http://www.smh.com.au/environment/climate-change/global-warming-pause-caused-by-la-nina-20130829-2ss3p.html#ixzz2dKYCdLUn

      # # #

      Or “…the longer-term warming trend was “very likely to [turn to cooling] with [solar decreases]”

      It all depends on the (correct) attribution.

    • Richard C (NZ) on 29/08/2013 at 5:10 pm said:

      Settled science: The heat is hiding in the ocean, while the Pacific Ocean cools, and it’s “pretty straightforward” and “complicated”, and “a chicken vs. egg problem” dogs the finding – ‘Pacific Ocean cools, flattening global warming’

      “Really, this seems pretty straightforward. The climate is complicated, and natural variability can mask trends seen over century-long timescales,” says climate scientist David Easterling of the National Oceanic and Atmospheric Administration’s National Climatic Data Center in Asheville, N.C.

      MIT’s Susan Solomon is more skeptical of the Pacific Ocean cooling as an explanation for the flattening, saying “a chicken vs. egg problem” dogs the finding. “Did the sea surface temperatures cool on their own, or were they forced to do so by, for example, changes in volcanic or pollution aerosols, or something else? This paper can’t answer that question.”

      http://tomnelson.blogspot.co.nz/2013/08/settled-science-heat-is-hiding-in-ocean.html

  16. Richard C (NZ) on 12/09/2013 at 3:16 pm said:

    New paper finds ‘up to 30% discrepancy between modeled and observed solar energy absorbed by the atmosphere’

    More problems for the climate models: A paper published today in Geophysical Research Letters finds that there is “up to 30% discrepancy between the modeled and the observed solar energy absorbed by the atmosphere.” The authors attribute part of this large discrepancy, which would alone have a greater radiative forcing effect than all of the man-made CO2 in the atmosphere, to water vapor absorption in the near UV region [see hotlink], “But the magnitude of water vapor absorption in the near UV region at wavelengths shorter than 384 nm is not known.” The authors note, “Water vapor is [the most] important greenhouse gas in the earth’s atmosphere” and set out to discover [apparently for the first time] “The effect of the water vapor absorption in the 290-350 nm region on the modeled radiation flux at the ground level.”

    ‘The influence of water vapor absorption in the 290-350 nm region on solar radiance: Laboratory studies and model simulation’

    Juan Du, Li Huang, Qilong Min, Lei Zhu

    Abstract

    [1] Water vapor is an important greenhouse gas in the earth’s atmosphere. Absorption of the solar radiation by water vapor in the near UV region may partially account for the up to 30% discrepancy between the modeled and the observed solar energy absorbed by the atmosphere. But the magnitude of water vapor absorption in the near UV region at wavelengths shorter than 384 nm is not known. We have determined absorption cross sections of water vapor at 5 nm intervals in the 290-350 nm region, by using cavity ring-down spectroscopy. Water vapor cross section values range from 2.94 × 10-24 to 2.13 × 10-25 cm2/molecule in the wavelength region studied. The effect of the water vapor absorption in the 290-350 nm region on the modeled radiation flux at the ground level has been evaluated using a radiative transfer model.

    >>>>>>>>

    http://hockeyschtick.blogspot.co.nz/2013/09/new-paper-finds-up-to-30-discrepancy.html
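
    # # #

    To get a feel for the size of absorption implied by the quoted cross sections, a back-of-the-envelope Beer-Lambert sketch (vertical path only; the column water vapour value is an assumed global-mean figure, not a number from the paper):

```python
import math

# Beer-Lambert estimate of near-UV absorption by column water vapour using
# the cross-section range quoted in the abstract. The ~25 kg/m^2 column
# (roughly global-mean precipitable water) is an assumption, not a paper value.

AVOGADRO = 6.022e23
M_H2O = 0.018                  # kg/mol
COLUMN_WATER = 25.0            # kg/m^2 (assumed)

column_molec_cm2 = COLUMN_WATER / M_H2O * AVOGADRO / 1.0e4   # molecules per cm^2

for sigma in (2.94e-24, 2.13e-25):            # cm^2/molecule, from the abstract
    tau = sigma * column_molec_cm2            # vertical optical depth at that wavelength
    absorbed = 1.0 - math.exp(-tau)
    print(f"sigma {sigma:.2e} cm^2: tau = {tau:.3f}, absorbed fraction = {absorbed:.1%}")
```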

  17. Richard C (NZ) on 23/09/2013 at 7:10 pm said:

    ‘Leaked SPM AR5: Multi-decadal trends’

    Data Comparisons Written by: lucia

    […]

    A way into the section, the draft states:

    “Models do not generally reproduce the observed reduction in surface warming trend over the last 10–15 years……………….”

    […]

    Earlier in the draft we find:

    “There is very high confidence that climate models reproduce the observed large-scale patterns and multi-decadal trends in surface temperature, especially since the mid-20th century”

    So evidently the AR5 will admit that they have not reproduced observed warming in the past 10-12 years, speculate that it might be unpredictable climate variability, solar, volcanic or aerosol forcings, or possibly due to “too strong a response to increasing greenhouse-gas forcings”, which mostly amounts to excess climate sensitivity. That said, reading the leaked draft, I can’t help but wonder about their definition of “multi-decadal”. Generally, I assume that means “two or more decades”. So, I ran my script to get roughly 15, 20 and 25 year trends, comparing the observed earth trend to the spread in trends in the ‘AR5’ models forced using the rcp45 scenario.

    […]

    As you can see, while the 15 year trends (discussed in the leaked draft SPM) are just on the edge of the model spread, the longer term trends fall outside it. So I would think if they don’t have great confidence in predicting 15 year trends, they would have even less confidence in predicting “multi-decadal” trends. But what do I know?

    Anyway, possibly this leaked draft is a hoax. We’ll see.

    http://rankexploits.com/musings/2013/leaked-spm-ar5-multi-decadal-trends/
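
    # # #

    A rough sketch of the kind of check Lucia describes (synthetic series only, to show the shape of the calculation; her post uses the actual observational record and the rcp45 model runs): fit an OLS trend to the last N years of the observed series and ask whether it falls inside the 5-95% range of same-length trends from the model runs.

```python
import numpy as np

def trailing_trend(series, n_years):
    """OLS trend (degC/decade) over the last n_years of an annual series."""
    y = np.asarray(series[-n_years:], dtype=float)
    slope = np.polyfit(np.arange(n_years), y, 1)[0]   # degC per year
    return slope * 10.0                                # degC per decade

def model_trend_range(model_runs, n_years, lo=5, hi=95):
    """lo-hi percentile range of same-length trends across an ensemble of runs."""
    trends = [trailing_trend(run, n_years) for run in model_runs]
    return np.percentile(trends, [lo, hi])

# --- synthetic stand-ins only, not real data ---
rng = np.random.default_rng(0)
years = np.arange(60)
obs = 0.005 * years + rng.normal(0, 0.1, years.size)                          # ~0.05 degC/decade
models = [0.02 * years + rng.normal(0, 0.1, years.size) for _ in range(30)]   # ~0.2 degC/decade

for n in (15, 20, 25):
    low, high = model_trend_range(models, n)
    print(f"{n}-yr: obs trend {trailing_trend(obs, n):+.2f}, "
          f"model 5-95% range [{low:+.2f}, {high:+.2f}] degC/decade")
```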

  18. Richard C (NZ) on 29/09/2013 at 4:51 pm said:

    ‘Viewpoints: Reactions to the UN climate report’

    BBC

    Professor John Shepherd, ocean & earth science, University of Southampton

    “….no-one ever claimed that climate models could predict all these decadal wiggles”

    http://www.bbc.co.uk/news/science-environment-24296204

    # # #

    Successive decadal wiggles are what make up multidecadal projections. And Kosaka and Xie (2013) modeled (in retrospect) the present decadal wiggle when constrained by natural oceanic variation.

    Therefore, natural variation (e.g. PDO/AMO) must be integrated into the models before realistic projections can be made – the sceptics’ argument for yonks.

  19. Richard C (NZ) on 15/11/2013 at 10:02 am said:

    ‘New paper finds simple laptop computer program reproduces the flawed climate projections of supercomputer climate models’

    The Hockey Schtick

    A new paper finds a simple climate model based on just three variables “and taking mere seconds to run on an ordinary laptop computer, comes very close to reproducing the results of the hugely complex climate models.” and “The [laptop computer] model was based on three key processes: how much energy carbon dioxide prevents from escaping to space (radiative forcing), the relationship between rate of warming and temperature, and how rapidly the ocean takes up heat (ocean thermal diffusivity).”

    Actually, you only need one independent variable [CO2 levels] to replicate what the highly complex supercomputer climate models output. This has been well demonstrated by Dr. Murry Salby in his lecture, which shows 1:1 agreement between the supercomputer-simulated global temperature and CO2 levels over the 21st century: [see graph]

    More>>>>>>>

    http://hockeyschtick.blogspot.co.nz/2013/11/new-paper-finds-simple-laptop-computer.html
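
    # # #

    The paper itself isn’t reproduced at the link, but the general “three-variable” idea is easy to sketch. A minimal one-box energy-balance version (my own illustration, not the authors’ code; the forcing formula is the standard logarithmic CO2 approximation, a single effective heat capacity stands in for ocean heat uptake, and all parameter values are assumed):

```python
import math

# Minimal one-box energy-balance emulator: C_eff * dT/dt = F(t) - LAM * T.
# A generic illustration of the "simple model" idea only; parameter values
# below are assumptions, not taken from the paper.

LAM = 1.3           # W/m^2/K   climate feedback (assumed)
C_EFF = 8.0e8       # J/m^2/K   effective heat capacity (~200 m of seawater, assumed)
SECONDS_PER_YEAR = 3.156e7

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Standard logarithmic CO2 forcing approximation, W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def run(co2_path_ppm, dt_years=1.0):
    """Integrate the one-box model along an annual CO2 concentration path."""
    temps, temp = [], 0.0
    for c in co2_path_ppm:
        dT = (co2_forcing(c) - LAM * temp) / C_EFF * dt_years * SECONDS_PER_YEAR
        temp += dT
        temps.append(temp)
    return temps

# idealised scenario: 1% per year CO2 growth from 280 ppm for 100 years
path = [280.0 * 1.01 ** yr for yr in range(100)]
print(f"warming after 100 years: {run(path)[-1]:.2f} K")
```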

  20. I remember, even more clearly now 20 years later, when I was studying Electrical Engineering at university and doing Philosophy 101. We were the first year that had philosophy added to the course, with the intention of opening the eyes of potential engineers to a frame of reference for the decisions we might one day make as engineers.

    The topics I remember were:
    • Value judgments, how our personal values influence what’s right
    • The energy crisis – over-reliance on fossil fuels
    • Global warming, The Greenhouse effect, man-made CO2 emissions
    I very much enjoyed the Value judgments topic. Why do we make bridges only 2.5 times stronger than their maximum loading? What makes your decisions and values more important than others? Does it take into account the potential of natural disasters? Excellent stuff!
    The Energy Crisis topic didn’t make as much sense to me. So many loaded learnings. We weren’t philosophizing, we were being brainwashed. I could understand that fossil fuel is a finite resource, but I also knew, even as I was being brainwashed, that technology was constantly helping us find more and larger deposits. We were being told to use nuclear energy, solar, wind, tidal, etc. This only made me think: what is the environmental cost of those resources? Why aren’t we discussing those in philosophy rather than being brainwashed? I was sure, even without evidence, that the environmental cost of making solar panels was likely to be high. Not only were fossil fuels required to make them, but how much processing and environmental damage? I knew we weren’t being encouraged to think, but to agree. Anyway, so what if I don’t agree?
    Then the topic of Global Warming. I can’t tell you why, it must have been instinct, but the whole topic did not sit right with me. Maybe because it was so accusatory? It was our fault! And therefore it was our responsibility to fix it. Nope! At 20 years of age I hadn’t had anything to do with our current position. I knew it wasn’t me, and I was feeling uncomfortable about the whole delivery of this brainwashing. I readily agreed that we should probably stop polluting the planet and reduce our use of fossil fuels, but the rest was rubbish.
    I was not happy, and there was a lack of scientific evidence. And then, the evidence that was produced? Well, it was a chart of the earth’s temperature related to the sun’s radiation. I don’t have a copy any longer, all these years later, and I can’t find it online. What I saw, at least in my mind, was a direct correlation between the sun’s output and the earth’s temperature. It was as clear as day to me. I wanted to find evidence to support my gut feeling, but apart from that one chart there was none. Keep in mind the internet wasn’t what it is today. The best thing about the internet at that moment in time was the release of Netscape, so I got to see boobs on the computer. Yes indeed, remember the very first steps into the World Wide Web? I do.
    Needless to say, I failed philosophy. I would not spew their lies. I still say, I have never learned more than I did when I failed philosophy.
    All these years later I have found a growing movement of educated and intelligent people who share my suspicion towards the global warming lie. OK, I’d better clarify that comment: the lie that global warming is due to man-made CO2 emissions.
    I encourage everyone to research this for themselves. It shouldn’t be a surprise that I refer you to a community I am involved with, SuspiciousObservers.
    Here is Ben’s latest conference talk, which is a great start and overview. Watch this if nothing else: Ben Davidson: The Variable Sun and Its Effects on Earth | EU2014
    Their website contains a wide variety of brilliant information including:
    • Starwater – water comes from stars and every planet has water
    • C(lie)mate – the global warming lie
    • Agenda 21
    Check out the daily SO news on YouTube at https://www.youtube.com/user/Suspicious0bservers
    See weather presented from a space perspective.
    It’s bigger than you think.
    Rikdownunda

  21. Richard C (NZ) on 17/01/2015 at 7:53 am said:

    ‘Does the Uptick in Global Surface Temperatures in 2014 Help the Growing Difference between Climate Models and Reality?’

    Bob Tisdale / January 16, 2015

    CLOSING

    As illustrated and discussed, while global surface temperatures rose slightly in 2014, the minor uptick did little to overcome the growing difference between observed global surface temperature and the projections of global surface warming by the climate models used by the IPCC.

    http://wattsupwiththat.com/2015/01/16/does-the-uptick-in-global-surface-temperatures-in-2014-help-the-growing-difference-between-climate-models-and-reality/

    In comments:

    Simon
    January 16, 2015 at 9:06 am

    Quote from NOAA’s annual summary…
    “This is the first time since 1990 the high temperature record was broken in the absence of El Niño conditions at any time during the year in the central and eastern equatorial Pacific Ocean, as indicated by NOAA’s CPC Oceanic Niño Index. This phenomenon generally tends to increase global temperatures around the globe, yet conditions remained neutral in this region during the entire year and the globe reached record warmth despite this.”

    As much as this article has tried to imply this record year is not significant, the paragraph above would say otherwise.

    http://wattsupwiththat.com/2015/01/16/does-the-uptick-in-global-surface-temperatures-in-2014-help-the-growing-difference-between-climate-models-and-reality/#comment-1837163

    Reply

    Bob Tisdale
    January 16, 2015 at 9:19 am

    NOAA is playing games, Simon. They well know that this year’s El Nino was not focused on the NINO3.4 region. The JMA uses the NINO3 region and they’ve stated that El Nino conditions have existed since June 2014:

    http://ds.data.jma.go.jp/tcc/tcc/products/elnino/outlook.html

    Bazinga!

    • Richard C (NZ) on 17/01/2015 at 9:40 am said:

      ‘Warmest year’, ‘pause’, and all that

      by Judith Curry, January 16, 2015

      […]

      El Nino?

      One of the key aspects of the hype about the ‘warmest year in 2014′ was that 2014 was not even an El Nino year. Well, there has been a great deal of discussion about this issue on the Tropical ListServ. Here is what I have taken away from that discussion:

      A global circulation response pattern to Pacific convection with many similarities to El Niño has in fact been present since at least June. Convection to the east of New Guinea is influencing zonal winds in the upper troposphere across the Pacific and Atlantic, looking similar to an El Nino circulation response.

      So, is it El Niño? Not quite, according to some conventional indices, but a broader physical definition might be needed to capture the different flavors of El Nino. A number of scientists are calling for modernizing the ENSO identification system. So I’m not sure how this event might eventually be identified, but for many practical purposes (i.e. weather forecasting), this event is behaving in many ways like an El Nino.

      What does this mean for interpreting the ‘almost warmest year’? Well not much; I think it is erroneous to infer that ‘it must be AGW since 2014 wasn’t even an El Nino year’ is useful reasoning here.

      That said, there are definitely some unusual events in the North Pacific, including extreme warm anomalies in the mid-high latitudes, and a positive value of the PDO.

      Bottom line

      Berkeley Earth sums it up well with this statement:

      “That is, of course, an indication that the Earth’s average temperature for the last decade has changed very little.”

      The key issue remains the growing discrepancy between the climate model projections and the observations: 2014 just made the discrepancy larger.

      Speculation about ‘warmest year’ and end of ‘pause’ implies a near term prediction of surface temperatures – that they will be warmer. I’ve made my projection – global surface temperatures will remain mostly flat for at least another decade. However, I’m not willing to place much $$ on that bet, since I suspect that Mother Nature will manage to surprise us. (I will be particularly surprised if the rate of warming in the next decade is at the levels expected by the IPCC.)

      http://judithcurry.com/2015/01/16/warmest-year-pause-and-all-that/#more-17601

  22. Richard C (NZ) on 17/01/2015 at 8:02 am said:

    ‘Peer-reviewed pocket-calculator climate model exposes serious errors in complex computer models and reveals that Man’s influence on the climate is negligible’

    Anthony Watts / January 16, 2015

    What went wrong?

    A major peer-reviewed climate physics paper in the first issue (January 2015: vol. 60 no. 1) of the prestigious Science Bulletin (formerly Chinese Science Bulletin), the journal of the Chinese Academy of Sciences and, as the Orient’s equivalent of Science or Nature, one of the world’s top six learned journals of science, exposes elementary but serious errors in the general-circulation models relied on by the UN’s climate panel, the IPCC. The errors were the reason for concern about Man’s effect on climate. Without them, there is no climate crisis.

    Thanks to the generosity of the Heartland Institute, the paper is open-access. It may be downloaded free from http://www.scibull.com:8080/EN/abstract/abstract509579.shtml. Click on “PDF” just above the abstract.

    More>>>>>
    http://wattsupwiththat.com/2015/01/16/peer-reviewed-pocket-calculator-climate-model-exposes-serious-errors-in-complex-computer-models-and-reveals-that-mans-influence-on-the-climate-is-negligible/

  23. Richard C (NZ) on 03/02/2015 at 9:26 am said:

    ‘Questioning the robustness of the climate modeling paradigm’

    by Judith Curry, February 2, 2015

    Are climate models the best tools? A recent Ph.D. thesis from The Netherlands provides strong arguments for ‘no’.

    http://judithcurry.com/2015/02/02/questioning-the-robustness-of-the-climate-modeling-paradigm/

  24. Richard C (NZ) on 09/02/2015 at 8:16 pm said:

    Remote Sensing Systems (RSS) – Climate Analysis

    Atmospheric Temperature

    […] The troposphere has not warmed as fast as almost all climate models predict.

    To illustrate this last problem, we show several plots below. Each of these plots has a time series of TLT temperature anomalies using a reference period of 1979-2008. In each plot, the thick black line is the measured data from RSS V3.3 MSU/AMSU Temperatures. The yellow band shows the 5% to 95% envelope for the results of 33 CMIP-5 model simulations (19 different models, many with multiple realizations) that are intended to simulate Earth’s Climate over the 20th Century. For the time period before 2005, the models were forced with historical values of greenhouse gases, volcanic aerosols, and solar output. After 2005, estimated projections of these forcings were used. If the models, as a whole, were doing an acceptable job of simulating the past, then the observations would mostly lie within the yellow band. For the first two plots (Fig. 1 and Fig 2), showing global averages and tropical averages, this is not the case. Only for the far northern latitudes, as shown in Fig. 3, are the observations within the range of model predictions.

    http://www.remss.com/research/climate

    Ouch! From RSS no less.
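
    A minimal sketch of the envelope comparison RSS describe: form the 5%–95% band across an ensemble of model runs at each time step and ask how often the observations fall inside it. The arrays here are random placeholders, not real CMIP-5 or RSS data:

    import numpy as np

    rng = np.random.default_rng(0)
    n_runs, n_months = 33, 432                     # e.g. 1979-2014
    months = np.arange(n_months)
    models = 0.020 / 12 * months + rng.normal(0, 0.15, (n_runs, n_months))  # faster-warming runs
    obs = 0.012 / 12 * months + rng.normal(0, 0.15, n_months)               # slower-warming "obs"

    lo, hi = np.percentile(models, [5, 95], axis=0)  # the "yellow band"
    inside = (obs >= lo) & (obs <= hi)
    print(f"Observations inside the 5-95% envelope {100 * inside.mean():.1f}% of the time")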

  25. Richard C (NZ) on 15/02/2015 at 7:19 pm said:

    ‘Winters in Boston Becoming Drier’

    Written by Dr. Roy Spencer on 13 February 2015.

    Much has been said in recent weeks about how bigger snowstorms in Boston are (supposedly) just what climate models have predicted. “Global warming” is putting more water vapor into the air, leading to more “fuel” for winter storms and more winter precipitation.

    While this general trend is seen in climate models for global average conditions (warming leads to more precipitation), what do the models really predict for Boston?

    And what has actually been observed in Boston?

    The following plot shows that the observed total January precipitation in Boston has actually decreased since the 1930s, contrary to the average “projections” (in reality, hindcasts) from a total of 42 climate models, at the closest model gridpoint to Boston:

    [See graph]

    Note that even the forecast increase in January precipitation is so small that it probably would never be noticed if it actually occurred.

    During the same period, January temperatures in Boston have seen a statistically insignificant +0.1 deg. F per decade warming, in contrast to 2.5 times faster average warming produced by the 42 climate models:

    [See graph]

    What is very evident is the huge amount of natural variability from year to year, as Bostonians are well aware.

    It’s just weather, folks. Blaming everything on “climate change” is just plain lazy.

    http://www.climatechangedispatch.com/winters-in-boston-becoming-drier.html
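
    # # #

    The trend comparison here is just an ordinary least-squares slope expressed per decade. A minimal sketch, with a synthetic series standing in for the Boston January observations:

    import numpy as np

    years = np.arange(1936, 2015)
    rng = np.random.default_rng(1)
    # Hypothetical January temperatures (deg F): tiny trend, large year-to-year noise
    jan_temp = 29.0 + 0.01 * (years - years[0]) + rng.normal(0, 3.0, years.size)

    slope_per_year = np.polyfit(years, jan_temp, 1)[0]
    print(f"Trend: {10 * slope_per_year:+.2f} deg F per decade")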

  26. Richard C (NZ) on 25/02/2015 at 9:18 pm said:

    ‘Are Climate Modelers Scientists?’

    by Pat Frank February 24, 2015

    For going on two years now, I’ve been trying to publish a manuscript that critically assesses the reliability of climate model projections. The manuscript has been submitted twice and rejected twice from two leading climate journals, for a total of four rejections. All on the advice of nine of ten reviewers. More on that below.

    The analysis propagates climate model error through global air temperature projections, using a formalized version of the “passive warming model” (PWM) GCM emulator reported in my 2008 Skeptic article. Propagation of error through a GCM temperature projection reveals its predictive reliability.

    […]

    I will give examples of all of the following concerning climate modelers:

    They neither respect nor understand the distinction between accuracy and precision.
    They understand nothing of the meaning or method of propagated error.
    They think physical error bars mean the model itself is oscillating between the uncertainty extremes. (I kid you not.)
    They don’t understand the meaning of physical error.
    They don’t understand the importance of a unique result.

    Bottom line? Climate modelers are not scientists. Climate modeling is not a branch of physical science. Climate modelers are unequipped to evaluate the physical reliability of their own models.

    The incredibleness that follows is verbatim reviewer transcript; quoted in italics. Every idea below is presented as the reviewer meant it. No quotes are contextually deprived, and none has been truncated into something different than the reviewer meant.

    And keep in mind that these are arguments that certain editors of certain high-ranking climate journals found persuasive.

    […]

    In their rejection of accuracy and fixation on precision, climate modelers have sealed their field away from the ruthless indifference of physical evidence, thereby short-circuiting the critical judgment of science.

    Climate modeling has left science. It has become a liberal art expressed in mathematics. Call it equationized loopiness.

    The inescapable conclusion is that climate modelers are not scientists. They don’t think like scientists, they are not doing science. They have no idea how to evaluate the physical validity of their own models.

    They should be nowhere near important discussions or decisions concerning science-based social or civil policies.

    http://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/
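
    # # #

    The statistical idea behind the propagation argument, as a generic illustration (not Frank’s PWM emulator itself): a constant per-step uncertainty compounds in quadrature, so the envelope grows as sigma times the square root of the number of steps, however smooth the projected mean looks:

    import numpy as np

    sigma_per_step = 0.1            # assumed +/- K uncertainty introduced each annual step
    years = np.arange(0, 101)
    projected_mean = 0.02 * years   # a smooth 2 K/century projection, for illustration
    envelope = sigma_per_step * np.sqrt(years)

    print(f"+/- {envelope[-1]:.1f} K after 100 years on a {projected_mean[-1]:.1f} K projection")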

  27. Richard C (NZ) on 27/02/2015 at 7:05 pm said:

    ‘On Steinman et al. (2015) – Michael Mann and Company Redefine Multidecadal Variability And Wind Up Illustrating Climate Model Failings’

    Bob Tisdale / 4 hours ago February 26, 2015

    http://wattsupwiththat.com/2015/02/26/on-steinman-et-al-2015-michael-mann-and-company-redefine-multidecadal-variability-and-wind-up-illustrating-climate-model-failings/

    Some good comments too e.g. Dr Norman Page:

    http://wattsupwiththat.com/2015/02/26/on-steinman-et-al-2015-michael-mann-and-company-redefine-multidecadal-variability-and-wind-up-illustrating-climate-model-failings/#comment-1870168

    “That the Steinman et al paper got through peer review for Science Magazine says much about the current state of establishment science. However in a short comment on the paper in the same Science issue Ben Booth of the Hadley center does sound a refreshingly cautionary ( for Science Mag and Hadley ) note saying that the paper is only useful if the current models accurately represent both the external drivers of past climate and the climate responses to them and that there is reason to be cautious in both of these areas. This comment is an encouraging sign that empirical reality may be finally making an impression on the establishment consciousness.”

  28. Richard C (NZ) on 24/03/2015 at 9:24 am said:

    INMCM4 (Russian Academy of Sciences) in Judith Curry’s post:

    ‘Climate sensitivity: lopping off the fat tail’

    There is one climate model that falls within the range of the observational estimates: INMCM4 (Russian). I have not looked at this model, but on a previous thread RonC makes the following comments.

    “On a previous thread, I showed how one CMIP5 model produced historical temperature trends closely comparable to HADCRUT4. That same model, INMCM4, was also closest to Berkeley Earth and RSS series.

    Curious about what makes this model different from the others, I consulted several comparative surveys of CMIP5 models. There appear to be 3 features of INMCM4 that differentiate it from the others.”

    1. INMCM4 has the lowest CO2 forcing response, at 4.1 K for 4xCO2. That is 37% lower than the multi-model mean. (A quick conversion to a per-doubling figure is sketched at the end of this comment.)

    2. INMCM4 has by far the highest climate system inertia: deep ocean heat capacity in INMCM4 is 317 W yr m-2 K-1, 200% of the mean (which excluded INMCM4 because it was such an outlier).

    3. INMCM4 exactly matches observed atmospheric H2O content in the lower troposphere (215 hPa), and is biased low above that. Most others are biased high.

    So the model that most closely reproduces the temperature history has high inertia from ocean heat capacities, low forcing from CO2 and less water for feedback.

    Definitely worth taking a closer look at this model, it seems genuinely different from the others.

    http://judithcurry.com/2015/03/23/climate-sensitivity-lopping-off-the-fat-tail/

    And, I suggest, throw out all the others.
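
    The conversion mentioned above, assuming the usual logarithmic CO2 forcing and a roughly linear response: quadrupling CO2 gives twice the forcing of a doubling, so a 4.1 K response at 4xCO2 corresponds to roughly half that per doubling.

    import math

    response_4x = 4.1                                  # K, from the list above
    forcing_2x = 5.35 * math.log(2)                    # W/m^2 per doubling
    forcing_4x = 5.35 * math.log(4)                    # = 2 x forcing_2x
    print(f"Implied per-doubling response: {response_4x * forcing_2x / forcing_4x:.2f} K")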

  29. Richard C (NZ) on 06/04/2015 at 5:25 pm said:

    Tom Nelson tweet (he’s hardly missed a beat despite the suspension, and he’s blogging again too):

    “Ok, so maybe the Canadian Climate Model isn’t quite matching reality”

    https://twitter.com/tan123/status/584853278426406914

    See graph https://pbs.twimg.com/media/CB3Q9RJUAAAX8_g.png

  30. Richard C (NZ) on 18/04/2015 at 4:22 pm said:

    ‘Open Letter to U.S. Senators Ted Cruz, James Inhofe and Marco Rubio’

    Bob Tisdale / April 14, 2015

    Subject: Questions about Climate Model-Based Science

    From: Bob Tisdale – Independent Climate Researcher

    To: The Honorable Ted Cruz, James Inhofe and Marco Rubio

    Dear Senators Cruz, Inhofe and Rubio:

    I am writing you as chairs of the Subcommittee on Space, Science, and Competitiveness, of the Senate Environment and Public Works Committee, and of the Committee on Oceans, Atmosphere, Fisheries, and Coast Guard. I am an independent researcher who studies global warming and climate change, and I am probably best known for my articles at the science weblog WattsUpWithThat, where I would be considered an investigative reporter.

    I have a few very basic questions for you about climate model-based science. They are:

    # Why are taxpayers funding climate model-based research when those models are not simulating Earth’s climate?
    # Why are taxpayers funding climate model-based research when each new generation of climate models provides the same basic answers?
    # Redundancy: why are taxpayers funding 5 climate models in the U.S.?
    # Why aren’t climate models providing the answers we need?
    Example: Why didn’t the consensus of regional climate models predict the timing, extent and duration of the Californian drought?

    I have discussed and provided support for those concerns in the following.

    Note: I began this letter a couple of months ago, back when it was announced that you would be chairs of those committees. Two of you are now running for President. Even with that in mind, I hope that you and your staffs will consider these questions.

    Questions follow>>>>>
    http://wattsupwiththat.com/2015/04/14/open-letter-to-u-s-senators-ted-cruz-james-inhofe-and-marco-rubio/

  31. Richard C (NZ) on 28/04/2015 at 7:36 pm said:

    Deceit by the University of New South Wales ARC Centre of Excellence for Climate System Science.

    Paper:

    Robust warming projections despite the recent hiatus by Matthew H. England, Jules B. Kajtar and Nicola Maher published in Nature Climate Change, doi:10.1038/nclimate2575

    Commentary:

    “The peer-reviewed study, published today in Nature Climate Change, compared climate models that capture the current slowdown in warming to those that do not.”

    And,

    “This shows that the slowdown in global warming has no bearing on long-term projections – it is simply due to decadal variability. Greenhouse gases will eventually overwhelm this natural fluctuation,” said lead author and Chief Investigator with the ARC Centre of Excellence for Climate System Science, Prof Matthew England.

    http://www.reportingclimatescience.com/news-stories/article/heat-on-despite-global-warming-pause-say-researchers.html

    # # #

    1) The HadCRUT4 series is heavily smoothed in their graph compared to the model runs (a small illustration of the smoothing effect follows at the end of this comment):

    http://www.reportingclimatescience.com/index.php?eID=tx_cms_showpic&file=uploads%2Fpics%2Funsw_1_nclimate2575-f1.jpg&md5=cff854267170c09526926833a5393f3ac439ca26&parameters%5B0%5D=YTo0OntzOjU6IndpZHRoIjtzOjQ6IjgwMG0iO3M6NjoiaGVpZ2h0IjtzOjM6IjYw&parameters%5B1%5D=MCI7czo3OiJib2R5VGFnIjtzOjQyOiI8Ym9keSBiZ0NvbG9yPSIjZmZmZmZmIiBz&parameters%5B2%5D=dHlsZT0ibWFyZ2luOjA7Ij4iO3M6NDoid3JhcCI7czozNzoiPGEgaHJlZj0iamF2&parameters%5B3%5D=YXNjcmlwdDpjbG9zZSgpOyI%2BIHwgPC9hPiI7fQ%3D%3D

    HadCRUT4 unsmoothed actually looks like this:

    http://www.woodfortrees.org/graph/hadcrut3gl/from:1990

    Somewhat at odds with the profiles of the model runs selected. The graph caption states “The future projections have been appended to corresponding historical runs at 2006”. 2006 corresponds to the start of the models-observations divergence in the graph.

    2) The climate models selected DO NOT capture the full extent of the current slowdown in warming. The trajectory of the models departs from the trajectory of the observations about a decade ago. The post-2006 simulations have not “captured the current slowdown in warming”.

    3) In no way can projections be described as “robust”, as in the first word of the title of the paper.

    4) The statement “Greenhouse gases will eventually overwhelm this natural fluctuation” is neither science nor fact – it is speculation.

    5) The University of New South Wales lead author and Chief Investigator with the ARC Centre of Excellence for Climate System Science, Prof Matthew England, is no more than a charlatan.
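
    On point (1), a minimal sketch of why heavy smoothing of one series flatters the comparison: a long running mean strips out the short-term structure that the unsmoothed model runs retain. Synthetic monthly anomalies stand in for HadCRUT4 here:

    import numpy as np

    rng = np.random.default_rng(2)
    monthly = 0.4 + rng.normal(0, 0.1, 300)                           # hypothetical anomalies
    smoothed = np.convolve(monthly, np.ones(61) / 61, mode="valid")   # ~5-year running mean

    print("Std dev raw:", round(float(monthly.std()), 3), "smoothed:", round(float(smoothed.std()), 3))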

  32. Richard C (NZ) on 25/05/2015 at 3:24 pm said:

    Underlying architecture of selected climate models, by Kaitlin Alexander, PhD student in climate science at the University of New South Wales in Sydney, Australia.

    Diagram key
    COSMOS 1.2.1
    Model E (17/06/2011)
    HadGEM3 (03/08/2009)
    CESM 1.0.3
    GFDL CM 2.1
    IPSLCM5A
    UVic ESCM 2.9

    http://climatesight.org/2011/08/16/wrapping-up/

    All have direct incidence of “solar radiation” to the atmosphere module – correct.

    Not one has direct incidence of solar radiation on either ocean or land, contrary to Trenberth et al’s ‘Global Energy Flows’ (see below) and conventional radiation–matter physics. Apparently there is no direct incidence; the ocean and land receive solar energy via interaction with the atmosphere. They do receive a diffuse component that way, but that is the minor part (neglected by Trenberth et al); the major part is direct, as shown:

    Global Energy Flows, Trenberth et al., 2009
    http://www.cgd.ucar.edu/cas/Topics/Fig1_GheatMap.png

    The climate science modeling world sure is a strange place. And internally inconsistent and contradictory.

  33. Richard Treadgold on 25/05/2015 at 3:32 pm said:

    A startling revelation. I’ve heard hand-on-heart assertions from warmists that climate models are founded in physics, which this little summer project blows apart. A great effort.

  34. HemiMck on 25/05/2015 at 5:56 pm said:

    Useful (for us) to see what aspects are being factored in, but it is curious that she has chosen to go to so much trouble counting lines of code and ranking the models on that basis. I don’t imagine that the impact of each aspect has much to do with how many lines of code there are.

    Perhaps the more lines of code the better climate scientist you are.

  35. Richard C (NZ) on 06/01/2016 at 9:12 am said:

    ‘Update of Model-Observation Comparisons’ [HadCRUT4 & RSS]

    Steve McIntyre, posted on Jan 5, 2016 at 12:27 PM

    http://climateaudit.org/2016/01/05/update-of-model-observation-comparisons/

  36. Richard C (NZ) on 10/02/2016 at 8:09 pm said:

    ‘A TSI-Driven (solar) Climate Model’

    February 8, 2016 by Jeff Patterson

    “The fidelity with which this model replicates the observed atmospheric CO2 concentration has significant implications for attributing the source of the rise in CO2 (and by inference the rise in global temperature) observed since 1880. There is no statistically significant signal of an anthropogenic contribution to the residual plotted Figure 3c. Thus the entirety of the observed post-industrial rise in atmospheric CO2 concentration can be directly attributed to the variation in TSI, the only forcing applied to the system whose output accounts for 99.5% ( r2=.995) of the observational record.

    How then, does this naturally occurring CO2 impact global temperature? To explore this we will develop a system model which when combined with the CO2 generating system of Figure 4 can replicate the decadal scale global temperature record with impressive accuracy.

    Researchers have long noted the relationship between TSI and global mean temperature.[5] We hypothesize that this too is due to the lagged accumulation of oceanic heat content, the delay being perhaps the transit time of the thermohaline circulation. A system model that implements this hypothesis is shown in Figure 5.”

    “The results (figure 10) correlate well with observational time series (r = .984).”

    http://wattsupwiththat.com/2016/02/08/a-tsi-driven-solar-climate-model/comment-page-1/

    # # #

    Goes a long way towards modeling Multi-Decadal Variation/Oscillation (MDV/MDO).
    Long system lag (“oceanic delay”), well in excess of 70 years depending on TSI input series.
    Cannot be accused of “curve fitting” (but was in comments even so).
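
    For what it’s worth, a generic sketch of the “lagged accumulation of oceanic heat content” idea, i.e. temperature as a first-order (leaky-integrator) response to the TSI anomaly with a long time constant. This is an illustration of the concept only, not Patterson’s actual transfer-function model, and the TSI series is synthetic:

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1880, 2016)
    tsi_anom = 0.5 * np.sin(2 * np.pi * (years - 1880) / 11) + rng.normal(0, 0.1, years.size)  # W/m^2

    tau = 70.0      # assumed oceanic response time, years
    gain = 0.5      # assumed equilibrium response, K per W/m^2
    T = np.zeros(years.size)
    for i in range(1, years.size):
        # first-order lag: T relaxes toward gain * forcing with time constant tau
        T[i] = T[i - 1] + (gain * tsi_anom[i - 1] - T[i - 1]) / tau

    print(f"Modelled change 1880-2015: {T[-1] - T[0]:+.2f} K (illustrative only)")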

  37. Richard C (NZ) on 24/02/2016 at 6:18 pm said:

    STATISTICAL FORECASTING: How fast will future warming be?

    Terence C. Mills. © Copyright 2016 The Global Warming Policy Foundation

    Summary
    The analysis and interpretation of temperature data is clearly of central importance
    to debates about anthropogenic global warming (AGW). Climatologists currently rely
    on large-scale general circulation models to project temperature trends over the coming
    years and decades. Economists used to rely on large-scale macroeconomic models
    for forecasting, but in the 1970s an increasing divergence between models and
    reality led practitioners to move away from such macro modelling in favour of relatively
    simple statistical time-series forecasting tools, which were proving to be more
    accurate.

    In a possible parallel, recent years have seen growing interest in the application of
    statistical and econometric methods to climatology. This report provides an explanation
    of the fundamental building blocks of so-called ‘ARIMA’ models, which are widely
    used for forecasting economic and financial time series. It then shows how they, and
    various extensions, can be applied to climatological data. An emphasis throughout
    is that many different forms of a model might be fitted to the same data set, with
    each one implying different forecasts or uncertainty levels, so readers should understand
    the intuition behind the modelling methods. Model selection by the researcher
    needs to be based on objective grounds.

    ARIMA models are fitted to three representative data sets: the HADCRUT4 global
    surface series, the RSS global lower troposphere series and the Central England Temperature
    (CET) series. A clear finding presents itself for the two global temperature
    series. Irrespective of the model fitted, forecasts do not contain any trend, with long-horizon
    forecasts being flat, albeit with rather large measures of imprecision even
    from models in which uncertainty is bounded. This is a consequence of two interacting
    features of the fitted models: the inability to isolate a significant drift or trend
    parameter and the large amount of overall noise in the observations themselves compared
    to the fitted ‘signals’. The CET exhibits season-specific trends, with evidence of
    long-term warming in the winter months but not in the summer.

    […]

    Figure 5: HADCRUT4 and forecasts from fitted ARIMA (0, 1, 3) model
    Monthly data, January 2011–December 2014 with forecasts out to December 2020
    accompanied by 95% forecast intervals.

    Figure 6: HADCRUT4 and forecasts from fitted segmented trend model
    Monthly data, January 2011–December 2014 with forecasts out to December 2020
    accompanied by 95% forecast intervals.

    Figure 7: RSS and forecasts from fitted ARIMA (0, 1, 1) model
    Monthly data, January 2011–December 2014 with forecasts out to December 2020
    accompanied by 95% forecast intervals.

    Figure 8: RSS and forecasts from fitted segmented trend model
    Monthly data, January 2011–December 2014 with forecasts out to December 2020
    accompanied by 95% forecast intervals.

    Figure 9: CET and forecasts
    Monthly data, January 2011–December 2014. Forecasts per ‘multiplicative ARIMA plus
    deterministic seasonal trends’ model out to December 2020 accompanied by 95%
    forecast intervals.

    http://www.thegwpf.org/content/uploads/2016/02/Forecasting-3.pdf

    # # #

    Richard Betts doesn’t like this one little bit:

    Are @thetimes so desperate for subscribers that they’re reduced to covering daft GWPF reports for trashy clickbait? https://t.co/u8VqcNOOJc
    — Richard Betts (@richardabetts) February 23, 2016

    http://bishophill.squarespace.com/blog/2016/2/23/two-worlds-collide.html

    Terence C. Mills just elevated himself to warmist enemy #1.
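
    For anyone wanting to reproduce the flavour of the Mills approach, a minimal sketch of fitting an ARIMA(0,1,3) and forecasting with 95% intervals using statsmodels. The series below is synthetic, standing in for the monthly HadCRUT4 anomalies; with the real data you would load those instead:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(4)
    idx = pd.date_range("1979-01", periods=432, freq="MS")
    anoms = pd.Series(np.cumsum(rng.normal(0, 0.05, idx.size)), index=idx)  # stand-in for HadCRUT4

    res = ARIMA(anoms, order=(0, 1, 3)).fit()
    fc = res.get_forecast(steps=60)            # five years ahead
    print(fc.predicted_mean.tail(3))
    print(fc.conf_int(alpha=0.05).tail(3))     # the 95% forecast interval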

  38. Richard C (NZ) on 20/04/2016 at 3:03 pm said:

    MUST READ (yes MUST)

    Gavin Schmidt and Reference Period “Trickery”

    Steve McIntyre, posted on Apr 19, 2016

    In the past few weeks, I’ve been re-examining the long-standing dispute over the discrepancy between models and observations in the tropical troposphere. My interest was prompted in part by Gavin Schmidt’s recent attack on a graphic used by John Christy in numerous presentations (see recent discussion here by Judy Curry). Schmidt made the sort of offensive allegations that he makes far too often:

    @curryja use of Christy’s misleading graph instead is the sign of partisan not a scientist. YMMV. tweet;

    @curryja Hey, if you think it’s fine to hide uncertainties, error bars & exaggerate differences to make political points, go right ahead. tweet.

    As a result, Curry decided not to use Christy’s graphic in her recent presentation to a congressional committee. In today’s post, I’ll examine the validity (or lack) of Schmidt’s critique.

    Schmidt’s primary dispute, as best as I can understand it, was about Christy’s centering of model and observation data to achieve a common origin in 1979, the start of the satellite period, a technique which (obviously) shows a greater discrepancy at the end of the period than if the data had been centered in the middle of the period. I’ll show support for Christy’s method from his long-time adversary, Carl Mears, whose own comparison of models and observations used a short early centering period (1979-83) “so the changes over time can be more easily seen”. Whereas both Christy and Mears provided rational arguments for their baseline decision, Schmidt’s argument was little more than shouting.

    Background

    The full history of the controversy over the discrepancy between models and observations in the tropical troposphere is voluminous. While the main protagonists have been Christy, Douglass and Spencer on one side and Santer, Schmidt, Thorne and others on the other side, Ross McKitrick and I have also commented on this topic in the past, and McKitrick et al (2010) was discussed at some length by IPCC AR5, unfortunately, as too often, deceptively on key points

    […]

    Conclusion

    There is nothing mysterious about using the gap between models and observations at the end of the period as a measure of differing trends. When Secretariat defeated the field in the 1973 Belmont by 25 lengths, even contemporary climate scientists did not dispute that Secretariat ran faster than the other horses.

    Even Ben Santer has not tried to challenge whether there was a “statistically significant difference” between Steph Curry’s epic 3-point shooting in 2015-6 and leaders in other seasons. Last weekend, NYT Sports illustrated the gap between Steph Curry and previous 3-point leaders using a spaghetti graph (see below) that, like the Christy graph, started the comparisons with a common origin. The visual force comes in large measure from the separation at the end.

    If NYT Sports had centered the series in the middle of the season (in Bart Verheggen style), then Curry’s separation at the end of the season would be cut in half. If NYT Sports had centered the series on the first half (in the style of Gavin Schmidt’s “reasonable baseline”), Curry’s separation at the end of the season would likewise be reduced. Obviously, such attempts to diminish the separation would be rejected as laughable.

    There is a real discrepancy between models and observations in the tropical troposphere. If the point at issue is the difference in trend during the satellite period (1979 on), then, as Carl Mears observed, it is entirely reasonable to center the data on an early reference period such as the 1979-84 used by Mears or the 1979-83 period used by Christy and Spencer (or the closely related value of the trend in 1979) so that (in Mears’ words) “the changes over time can be more easily seen”.

    Varying Schmidt’s words, doing anything else will result in “hiding” and minimizing “differences to make political points”, which, once again in Schmidt’s words, “is the sign of partisan not a scientist.”

    There are other issues pertaining to the comparison of models and observations which I intend to comment on and/or re-visit.

    https://climateaudit.org/2016/04/19/gavin-schmidt-and-reference-period-trickery/

    # # #

    Note Carl Mears’ RSS graph “Figure 2. From RSS” (yellow models, blue observations).

    https://climateaudit.files.wordpress.com/2016/04/rss_model_ts_compare_trop30.png?w=1024

    The original graph at the RSS website only had a black line representing observations. Mears altered the graph to show uncertainty at the behest of Thomas from Hot Topic (he posted his discussion with Mears in comments at HT).

    In some ways, I think it actually casts an even worse light on the models, because the obs uncertainty band now encroaches on the zero anomaly baseline more so than the previous black line did. It doesn’t really help the models that the upper obs limit now encroaches on the models’ lower limit, which was what Thomas wanted to achieve.
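
    On Schmidt’s baseline complaint itself, a minimal sketch of why the choice of reference period changes the apparent end-of-record gap even though the trends are untouched. Two synthetic straight lines stand in for a model mean and the observations:

    import numpy as np

    years = np.arange(1979, 2016)
    model = 0.028 * (years - 1979)      # faster-warming "model mean"
    obs = 0.012 * (years - 1979)        # slower-warming "observations"

    def rebaseline(series, mask):
        return series - series[mask].mean()

    early = years <= 1983                       # Christy/Mears-style early baseline
    middle = (years >= 1990) & (years <= 2005)  # a mid-record baseline

    for name, mask in [("early baseline", early), ("mid-record baseline", middle)]:
        gap = rebaseline(model, mask)[-1] - rebaseline(obs, mask)[-1]
        print(f"{name}: end-of-record gap = {gap:.2f} K")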

    [RT, any chance you could “reblog” Steve McIntyre’s post? This is a VERY hot topic among the big hitters of climate]

  39. Richard C (NZ) on 14/05/2016 at 10:54 am said:

    Gareth S. Jones (UK Met Office; Jones, Lockwood and Stott (2012) is cited in AR5 Chap 9, Radiative Forcing; contributing author to AR5 Chap 10, Detection and Attribution) tweets:

    Gareth S Jones ‏@GarethSJones1

    Update of comparison of simulated past climate (CMIP5) [RCP4.5] with observed global temperatures (HadCRUT4)

    Graph
    https://pbs.twimg.com/media/CZLSm-RWAAAltk8.jpg

    Tweet
    https://twitter.com/GarethSJones1/status/689845260890050562/photo/1

    # # #

    Schmidt chimes in. Jones’ tweet obviously got everyone fizzed up (except Barry Woods in the thread) because the El Niño spike sits in the central 50% red zone of the climate models.

    Except that the ENSO-neutral data is OUTSIDE the red zone. The spike will be back down again before the end of the year, ahead of an impending La Niña. As Barry Woods puts it:

    “An El Nino Step Up, in temps, or a peak to be followed by cooler years? (for a few yrs)”

    We won’t have to wait long to find out. Some climate scientists, led by Schmidt, are headed for a fall, I’m picking.

  40. Richard C (NZ) on 25/06/2016 at 11:03 pm said:

    For the record in conjunction with post: ‘IPCC Ignores IPCC Climate Change Criteria’ (not published yet as of this comment date)

    The earth’s energy budget has no LW flux into the surface once the net of OLR and DLR is arrived at (-52.4 W.m-2). Energy accumulation at the surface, i.e. the surface imbalance (+0.6 W.m-2), is therefore simply the residual of solar ingress after all egress is subtracted; solar ingress is the greater (+188 vs -187.4). LW nomenclature in ocean surface energy budgets varies a little between oceanography papers, but the only radiative LW energy transfer flux tabulated by the definitive Fairall et al (1996) paper is “Rnl” (net LW radiation), which is an outgoing transfer (flux) upwards from the surface.

    But climate models bypass the physics of the AO interface; instead, the models allocate energy transfer at the surface according to the IPCC forcing assumption. Proof of this is in IPCC AR4 WG1 Chapter 2 on this page:

    2.9.5 Time Evolution of Radiative Forcing and Surface Forcing [see Figure 2.23 Surface Forcing]
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-9-5.html

    Figure 2.23. Globally and annually averaged temporal evolution of the instantaneous all-sky RF (bottom panel) and surface forcing (top panel) due to various agents, as simulated in the MIROC+SPRINTARS model (Nozawa et al., 2005; Takemura et al., 2005). This is an illustrative example of the forcings as implemented and computed in one of the climate models participating in the AR4. Note that there could be differences in the RFs among models. Most models simulate roughly similar evolution of the LLGHGs’ RF.
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-2-23.html

    Graphs only, Surface and Radiative:

    Figure 2.23
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/fig/figure2-23-l.png

    The surface imbalance is +0.6 W.m-2, attributable to the solar residual as above, but the models implement a surface forcing regime nothing like the actual surface imbalance. Year-2000 LLGHG forcing is just over +0.4 W.m-2, solar just less than +0.2 W.m-2, but net LLGHG+Ozone+Aerosols+Land Use is -1.4 W.m-2 (cooling).

    There is no way the net of all the forcings, +ve and -ve, comes anywhere near the actual surface imbalance. The model example (MIROC+SPRINTARS) bears no resemblance whatsoever to the actual surface energy budget.

    Worse, the model contradicts the IPCC’s speculated anthropogenic ocean warming mechanism (“air-sea fluxes”).
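
    A small arithmetic check, using only the figures quoted above: the surface imbalance is the residual of solar ingress after all egress, and it bears little resemblance to the year-2000 surface-forcing terms in the Figure 2.23 example.

    solar_ingress = 188.0     # W/m^2 into the surface (figure quoted above)
    total_egress = -187.4     # W/m^2, all outgoing terms, including net LW of -52.4
    imbalance = solar_ingress + total_egress
    print(f"Actual surface imbalance: {imbalance:+.1f} W/m^2")

    # Year-2000 surface forcings as read off AR4 Figure 2.23 (quoted above)
    net_model_forcing = -1.4  # LLGHG + ozone + aerosols + land use combined
    print(f"Model surface forcing (net): {net_model_forcing:+.1f} W/m^2")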

  41. Richard C (NZ) on 15/09/2016 at 10:49 am said:

    ‘Global climate models and the laws of physics’

    http://judithcurry.com/2016/09/13/global-climate-models-and-the-laws-of-physics
