Filmed free for nothing

[10:10 logo]

UPDATE 1: OCT 3 12:25 AM

Apology from O2. See end of story.

UPDATE 2: OCT 3 10:30 AM

Many more sponsors and partners than I realised. H/T Huub Bakker.

Join the boycott of Sony, O2 and Kyocera

(see end of story)

After all the work they put into it, the film “No Pressure” lasted just a few hours on the Internet before the torrent of abuse from scandalised viewers forced the producers to apologise and withdraw the movie. Or they tried to. Unfortunately for them it went viral and is still available all over the place. Anyway, their apology wasn’t worth the ether it was posted into.

What a storm of outrage the film aroused! Oddly enough, it affronted both sides of the climate debate equally. The film was deeply disturbing because it crossed a boundary in gruesomeness and the corruption of youth. Even in the cause of saving the Earth, reasonable people everywhere are saying “that’s a brutality too far.”

Slick but sick

I’m talking, of course, about the mini-movie released yesterday by 10:10, a global campaign to “cut carbon” by 10% a year, starting in 2010.

Produced by Richard Curtis (writer of Blackadder, Four Weddings, Notting Hill and others), acted by some famous names along with footballers from Tottenham Hotspur and with a full professional film crew giving their time for free, the film production was certainly slick.

Slick, but sick. Let us hope we never see its like again for any reason. The production of “No Pressure” marks a terrible new low in the propaganda that passes for information in the climate wars. What a shame all that effort went for nothing.

Sweet teacher incapable of cruelty

There have been illuminating descriptions and comments at WUWT and from James Delingpole at the Telegraph. Anthony Watts, by the way, gives our friend Gareth Renowden a sound scolding.

Of the four different scenes, I would like to mention the classroom. The teacher is a sweet young woman, credible, caring and clearly incapable of cruelty.

Here she is, after asking the class whether they’d be interested in changing their light bulbs for the good of the Earth, telling two of the children that it’s perfectly all right to demur.

[video capture]

After reassuring them she mildly presses a big red button which makes the two children explode in a storm of red gore. She cheerfully dismisses the stunned classroom with blood on her face without a hint of disturbance.

[video capture]

Among the many depictions of evil people, none are more chilling than those who dish out mayhem and death with a smile on their face. So it is here, where mild-mannered actors destroy those who merely disagree with them.

Supporters of the film can simply be ignored. It’s a fitting sentence for those who trample on the feelings of others.

Gareth Renowden said:

“It’s only offensive if you have a sense of humour failure (IMHO).”


“I turned off the “dislike” button because of the influx from Watts.”

So he requires us to laugh at exploding innocents yet he cannot stomach readers simply disagreeing with him. It is a shallow outlook.

Time for a boycott

The film was partly sponsored in the UK by public funds. Delingpole informs us that the three commercial sponsors of this tasteless insanity are Sony, O2 and Kyocera.

If you’re of a mind to, feel free to inform them of your disgust at this film and to boycott their products. I have.


Contact page
You must fill out a form which is sent to them.


Contact page
You must fill out a form.
You must quote a valid UK postal code; I used one from Southampton: SO32 3PN


Contact page

The message I sent them

I’m disappointed you sponsored the disgusting UK film “No Pressure” for the 10:10 campaign. I am now boycotting [name of sponsor] products until you apologise and advising all my friends to do the same. Shame on you!

UPDATE 1: OCT 3 12:25 AM

Just one hour and two minutes after sending my complaint to O2 about their sponsorship of the disgusting film “No Pressure” I received this reply:

Hello Richard

I’m sorry that you’re unhappy with the environmental campaign’s video 10:10.

Please accept my apologies about this as we weren’t aware about the content of the video. We also weren’t consulted when this video was made and published. You can find the apology statement by clicking the link below:

I Hope you understand the situation. If there is anything else, please reply to my email or visit our online help centre at:


O2 Customer Service

Telefónica O2 UK Limited, Registered in England No 1743099. Registered
Office: 260 Bath Road, Slough, Berkshire SL1 4DX.

It’s revealing that they didn’t seem to know much about the project. We’ll see if they escalate this apology to a press release. Send in more complaints!

UPDATE 2: OCT 3 10:30 AM

Wow! Did I leave out some research! In my haste to publish I took Delingpole at face value. Now reader Huub Bakker tells us the true extent of support for the 10:10 campaign:

Some of the other sponsors are mentioned at Troy Media:

“Around 100,000 people from 152 countries have signed up. British Prime Minister David Cameron has pledged the entire British government to participate. Large companies are associated with 10:10, including Britain’s Royal Mail, the electronics giant Sony, and Facebook. The United Nations-backed Climate Neutral Network is one of its many “partner” organisations. The World Wildlife Fund for Nature and Greenpeace are supporters through their proxy the Global Campaign for Climate Action.”

Then he says rather drily: “So it’s not exactly a fringe group, is it?”

This rather changes things and it certainly makes me angrier. Governments and large organisations ought to be better behaved.

First, it makes it even more important to complain vociferously to all the sponsors. It might look like a huge campaign, but as the Tiger Woods debacle showed, nobody wants to be associated with bad behaviour, no matter how big the show.

Second, since large numbers of people have been offended by the campaign (around the world) there’s a momentum of odium; if we add to it quickly it could be the death of the campaign.

Third, a lot of people consider the film has exposed the underlying political inclination of the modern environmentalist: kill the disbelievers. It's not hard to agree with that conclusion, especially if there are degrees of “killing” that might include sanctions that believers avoid.

What to do

Mine the 10:10:10 global site for email addresses and contacts.
Let them know the film was unacceptable.
Write letters to editors complaining about the film and asking why we need such a campaign at all.
Ask them why we should try to limit carbon — the element that life's built on?

Make suggestions here

This is a first attempt. Think of things, post them here. Share the messages you send, especially the pithy, hard-hitting ones. We don’t all want to send the same message, but we can repeat pithy phrases or slogans.


Carbon — the element that life’s built on


36 Thoughts on “Filmed free for nothing”

  1. Huub Bakker on 03/10/2010 at 4:16 am said:


    You can also send a message directly to 10:10 in New Zealand at although some of the other sponsors are mentioned at Troy Media:

    “Around 100,000 people from 152 countries have signed up. British Prime Minister David Cameron has pledged the entire British government to participate. Large companies are associated with 10:10, including Britain’s Royal Mail, the electronics giant Sony, and Facebook. The United Nations-backed Climate Neutral Network is one of its many “partner” organisations. The World Wildlife Fund for Nature and Greenpeace are supporters through their proxy the Global Campaign for Climate Action.”

    So it’s not exactly a fringe group is it?

    My email to is presented below in full. (Apologies to those who read it in the other thread.) I was moved to email Nick Smith yesterday as well. I'll post his reply if I get one.


    I write to you as someone with a doctorate as a Chemical Engineer and as a senior lecturer at Massey University for more than 20 years. I have the training to intelligently analyse the issue of anthropogenic global warming and have spent hundreds of hours doing so. My conclusions are that there is no direct evidence for AGW and that the null hypothesis that AGW does not exist has not been disproven. The hysteria that attends this issue is therefore entirely unwarranted.

    I have further found that the actions suggested to curb AGW will have no significant effect, even if the hypothesis of AGW were shown to be true. I have also discovered that the costs of such actions would be much greater than the costs resulting from AGW, even if true.

    Most disturbingly, the failure of the scientific, political and journalistic spheres to admit the glaringly obvious forces me to conclude (as many others have) that there are vested interests at work with political, financial and ideological reasons for supporting this charade, of which the environmentalists are by no means in the background.

    As an academic of a university, and as a Professional Engineer and member of my professional body, IPENZ, my duty is to make this information available to the public. I have been doing so whenever the opportunity arises.

    I wish to inform you that the truly disgusting movie that is the subject of this email, and produced by your organisation, sends a clear message of what 10:10 stands for, even if that message is unintended.

    From your founder:

    “Doing nothing about climate change is still a fairly common affliction, even in this day and age. What to do with those people, who are together threatening everybody’s existence on this planet? Clearly we don’t really think they should be blown up, that’s just a joke for the mini-movie, but maybe a little amputating would be a good place to start?” […] “We ‘killed’ five people to make No Pressure – a mere blip compared to the 300,000 real people who now die each year from climate change”
    —Franny Armstrong, founder of 10:10

    If you wish to justify the worth of this video because the end justifies the means, then you place yourselves on the same level as the witch-burners of Salem, the inquisitors of Spain, the final problem-solvers of Germany and the suicide-bomber organisers of Islam. This truly is the work of eco-fascists, no joke. Believe or die.

    Thank you so much for making clear to people what you stand for and making my task so much easier.


    Dr Huub Bakker, Senior Lecturer

  2. Richard C on 03/10/2010 at 1:49 pm said:

    Coincidence or what?

    From WUWT, “Blow Me Up, Blow Me Down”:

    “Geoffrey Allan Plauché says:
    October 1, 2010 at 9:41 pm

    The organization’s name, 10:10, and their push to reduce carbon emissions by 10%, coupled with the slaughtering of a few dissenters in every scene (roughly 10%?), reminded me of the Roman disciplinary practice of decimation.

    Decimation was a punishment imposed on Roman military units for failure, cowardice, or mutiny in which one in ten (10% of) soldiers were selected by lot to be slaughtered by their comrades. Only the decimated victims in 10:10's video are chosen for this ultimate punishment by their failure to make the “right” choice. No pressure. Decimating the global population sure is one way to reduce carbon emissions by 10%.”

    My protest.

    To 10:10 UK
    DON’T BRING YOUR VILE UK PROPAGANDA TO NEW ZEALAND (In the email title so they MUST read before deleting)

    In the email body:

    Re “No Pressure”

    Moving the carbon emission reduction awareness campaign to the realm of propaganda and the indoctrination of children stoops to a very depraved level.

    It proves how empty the substance of the case for carbon reduction really is.

    To 10:10 NZ


    In the email body:

    The case for the carbon emission reduction awareness campaign should stand on scientific merit without the need to stoop to the depravity of propaganda and the destructive indoctrination of children.

    There are real disasters happening here and now (earthquakes, tsunamis). An imaginary non-event in 2100 just does not warrant this ugly, unsubstantiated alarmism.

    • Huub Bakker on 03/10/2010 at 5:16 pm said:

      Well done Richard. You’ve captured the disgust beautifully. 🙂

  3. David White on 03/10/2010 at 1:57 pm said:

    The rhetoric over at hot-topic is becoming disturbing.

    One character, RedLogix, is writing things like:

    “We are at the point now where reasoned persuasion has demonstrably failed; coercion or mass death are the only choices left on the table.

    “If that makes us ‘eco-fascist nazis’ … then so be it. By fiddling while the planet burned you brought it on yourselves.”


    “We have a powerful right to defend our civilisation against the greedy and irresponsible; it trumps your selfish desire to drive a motor car.”


    “The time for reasoned persuasion is over. We now get to be held responsible for the consequences of our choices…as do all grown ups.”

    • Richard C on 03/10/2010 at 2:48 pm said:

      For these people, we are paying our ETS dues?

    • Yes, this is very disturbing. I just hope it’s confined to the lunatic asylum that is Hot Topic. When these views start to be aired on Sciblogs or in the newspapers there are authorities who can stop it. In the meantime, it is superb and I am grateful that there are reasonable people like you and other visitors prepared to investigate, report and engage in calm debate here. Eventually, when people tire of emotion and want just the truth (and they will), they go to a place like this where they can trust what is said.

    • Richard C on 04/10/2010 at 10:40 am said:

      “Eventually, when people tire of emotion and want just the truth (and they will), they go to a place like this where they can trust what is said.”

      Given my propensity to wax between lucidity and lunacy, I hope that anyone reading my comments here, does so with an appropriate degree of scepticism.

      Unfortunately, the flip side of the information age is that it is also a mal-information age and it is very easy to become ill-informed by taking seemingly authoritative reports for granted.

      Thus one can very easily find oneself in the unsettling position of being wrong. In my youth, I would have clung to an erroneous position unmovingly because at that time I was always right, so why adjust?

      I have since discovered that conceding defeat where my position has been based on error and moving my position becomes easier with age and I now regard this trait in others as a strength of character. It takes guts and some humbling to move from a firmly entrenched position so I have a great deal of respect for those who do so.

      I also have respect for those who have laboured and produced in their fields for some time and this includes Dr Kevin Trenberth. I note much of the educational value of this series is due to his work:

      It is only really page 7 that gets a bit silly.

      Okay, the energy budget is debatable but T&F did the work (there are other budgets sans GHG forcing but I digress) allowing others to pick holes in it.

      So (in view of the above), I am watching to see if Trenberth cracks when the missing heat search proves fruitless. I will be even more interested if he finds it.

      Two examples:

      1. I learned more about clouds in models and model parameterisation due to the goading of a passing troll (Not-Richo) at JoNova, than I had on my own account previously.

      2. Can we really trust satellite series?

      I put great stock in the global MSL produced by U of Colorado

      But as a Hydrologist at Port of Tauranga observed: the series should be taken with a grain of salt due to the averaging that goes on (not to mention fractions of a millimetre on a wavy ocean).

      Also Spencer and Christy have the tricky task of cobbling together the AMSU-A series from 11 different satellites and 11 different channels. Only two series are now reliable as the others have degraded. This situation is unlikely to improve in the near future because NASA’s primary mission is (as Obama puts it): “reaching out to the Muslim nations”.

      The potential for human error in the satellite series (as in land series) is immense, as the 600 °F temperatures at Lake Wisconsin demonstrate.

    • Richard C on 05/10/2010 at 5:27 pm said:

      Anthony Watts on “An over the top view of satellite sensor failure”

      A little reassuring but only just.

  4. David on 03/10/2010 at 2:12 pm said:

    Well, you guys might end up on WUWT’s blog roll. I asked this the other day and Anthony responded.

    “A good NZ website with a NZ focus is
    They’ve had some good stuff about NIWA’s fiddling with temp data.
    Wonder if Anthony would add them to the blog roll?

    REPLY: I’m working on a guest post with them, but we are waiting for some additional details. – Anthony”

    • Yes, this is true, and the one he’s waiting for is me! I was looking for photos of the Albert Park weather station to document its history through the 20th Century. Willem de Lange gave me a reference to a collection in the Auckland Public Library and I’ve located suitable pics which I’m about to send to Anthony.

      It would give this blog a big boost to get noticed on WUWT.

      Thanks for your comments to Anthony, David, that’s a big help!

  5. mort on 03/10/2010 at 6:06 pm said:

    Can you please list the offending sponsors of this vile campaign? I intend never to spend another cent with any of them.

    • Well, the article gives some names and addresses, then Update 2 repeats Huub Bakker’s contribution. Finally, I suggest you go to the 10:10:10 global site at for names of other sponsors and partners. There are a great many of them. Some of us are in the process of writing to each one of them…

      I guess I should write a list, to make it easier for others! I’ll do it in the morning.

  6. Bob D on 03/10/2010 at 6:28 pm said:

    Apology from Kyocera:


    Thank you for your email. I totally understand your reaction to this video, which was very similar to my own.

    Kyocera Mita UK has supported the 10:10 campaign because we share its ambition to reduce carbon emissions. However, we don’t support the “No Pressure” video and are dismayed by the suggestion that we might have been knowing partners in its production; in fact, we had no knowledge of its content until it appeared online. We consider that 10:10 made a serious error of judgement in its choice of creative approach, which is totally at odds with the inclusive and positive attitude that has been the hallmark of its other activities. We understand that 10:10 has acknowledged its mistake, withdrawn the video and issued an apology.

    I assure you that we are taking this issue extremely seriously. A formal statement will be issued in due course.

    Kind regards

    Tracey Rawling Church
    Director of Brand and Reputation

    KYOCERA MITA (UK) Limited

  7. Bob D on 03/10/2010 at 6:37 pm said:

    I sent the following to

    Wow! Just Wow! What an amazing own goal.


    Bob D

  8. mawm on 03/10/2010 at 7:14 pm said:

    10:10 NZ sounds like it is a one-greenie-nutter affair. I wonder if our government is sponsoring him and his lame web site?

    THE 10:10 TEAM
    Meet one of the people behind 10:10 NZ:

    Rhys Taylor

    Job title: Voluntary 10:10NZ hub person

    Actual job: Christchurch-based freelance community educator, project manager, photo-journalist and science researcher.

    10:10er since: 2009

    10:10 plan: Building an eco-house, flying less often, cycling when here in the city, eating less meat and growing more of my own food, choosing and testing energy-efficient appliances.

    Favourite 10:10er: Fellow New Zealander Lizzie Gillett, for her tenacity in producing ‘The Age of Stupid’ movie!

    Guilty pleasure: Having to drive, although in a shared car, to reach our organic vege garden and orchard in South Canterbury.

    Best 10:10 moment: Realising that 10:10 is ready made for the urgent education and action needs in NZ, whilst we wait for politicians to catch up. This has saved re-inventing it!

    Background: Lincoln Masters Degree in natural resource management, experience in both UK and NZ of community development and education for environmental action, mostly within NGOs and alongside local government. Two recent research projects with Landcare Research Ltd on NZ futures and behaviour change. An accredited sustainability advisor to business and local government with The Natural Step NZ. Contributor to Sustainable Living education programme (operates in 24 areas across NZ), to Sustainable Otautahi Christchurch Inc, Transition Town groups, home gardening education at Christchurch Botanic Gardens and writing a regular column for The Press newspaper.

  9. Richard C on 03/10/2010 at 8:56 pm said:

    Was debating RedLogix at Hot Topic, “No pressure – 10:10 on the button”, but was unable to post the following comment (for some reason; flagged as a duplicate, apparently).

    RedLogix @ 188, 189

    You seem to imply here (correct me if I'm wrong) that the assumption is: because there has been a 40-year sustained MSL rise prior to the 5-year fall (blip), a return to continued sustained rise is guaranteed in the near future.

    This assumption is at odds with the planet's warming-cooling cycle and has been challenged by geologists.



    So over the next 30 years one faction will be proved right and the other wrong by simple observation. In view of the cycle, can a cool 30 yr phase be discounted?

    I note that Akasofu’s cycle shows a much hotter climate around 2050 than we are currently experiencing but again this is natural and not ACO2 induced.

    • Richard C on 04/10/2010 at 9:13 am said:

      I’m back in business at Hot Topic @ 210

      The enthralling debate continues…

    • Richard C on 05/10/2010 at 10:04 am said:

      Have now engaged with Gareth @ 202, 212

    • Richard C on 05/10/2010 at 10:32 pm said:

      Called a halt to hostilities with Gareth.

      The real debate is taking place here:

      What can we learn from climate models? by Judith Curry

      This will be an ongoing series among very influential people that will have international repercussions.

      The quality of input and diversity of views (post and comments) is staggering, except for the spelling in the example below (he’s a Dr BTW):

      Roy Clark | October 4, 2010 at 2:49 pm | Reply

      The fundamental goal of any simulation is to reproduce (and then hopefully predict) the behavior of the physically measurable variables in the system. These ideas go back to the start of quantum mechnical models in the 1920′s. In the case of climate models, the basic variable is the ‘surface temperature’. The claim of global warming is that the 1.7 W.m-2 increase in ‘clear sky’ downward flux from a 100 ppm increase in atmopspheric CO2 concentration has produced an increase in surface temperature of 1 degree C. This is based on the use of the meteorological surface air temperature record [‘hockey stick’] as a proxy for the surface temperature.

      The idea of radiative forcing goes back to Manabe and Weatherald in 1967.
      The underlying assumption of radiative forcing is that long term averages of dynamic variables can somehow be treated as though they were in equilibrium and that perturbation theory can be applied. The downward flux of 1.7 W.m-2 from CO2 is added to the flux from a surface at an ‘equilibrium surface temperature’ of 288 K. This gives a temperature rise of ~0.3 C and the rest of the 0.7 C temperature rise is claimed from ‘water vapor feedback’. This is to say the least, totally absurd.

      The surface energy transfer is dynamic and the surface flux terms vary between +1000 W.m-2 for full summer sun and -100 W.m-2 for low humidity night time cooling. The surface flux is also coupled into the ground, so the thermal conduction down into a cubic meter of soil has to be added to the calculation. The surface temperature during the day, for dry bare soil/summer sun conditions can reach over 50 C. Now do the averaging correctly, with half hour cumulative flux terms and the whole global warming problem goes away. 1.7 W.m-2 for 0.5 hour is ~3kJ.m-2. The full solar flux for 0.5 hour is 1.8 MJ.m-2. The night time cooling is 0.18MJ.m-2/0.5 hr. The heat capacity of the soil is over 1 MJ.m-3. Do the math and there can be no measurable surface temperature rise from 100 ppm [CO2].

      Now, remember that this is the real surface temperature. The one under our bare feet when we walk around. The MSAT is the temperature of the air in an enclsosure at eye level. The change in the MSAT is the change in the bulk air temerpature of the local air mass of the weather system as it passes through. This is usually set by the ocean surface temerpatures in the region of origin of the weather system, with a little help from the local urban heat island effects. When the ocean surface temperatures and the urban heat island effects are included, CO2 can have no effect on the MSAT.

      I learnt long ago to leave the Navier Stokes equation well alone, but make sure that the output of the fluid dynamics models was firmly anchored to real, measurable variables. There was also a very good indpendent check. If the models were wrong, the rocket engine could blow up ….

      There is no problem with the fluid dynamics in the climate models. Make small changes and validate often. They may not be very accurate, but we do not need them to predict global warming. They are research tools, so keep them out of public policy. Weather and climate follow from ocean surface temperatures and sunspots etc.

      The problem is that the empirical assumption of CO2 induced warming has been ‘hard wired’ into the models using ‘radiative forcing constants’. This climate astrology not climate science. The IPCC is a political body that has created anthropogenic global warming where none exists. Once radiative forcing is removed and replaced with realistic surface energy transfer algorithms for the air-land and air-ocean interfaces, the climate models should begin to behave much better.

      The global warming surface temperature predicted by the IPCC is not a valid climate variable.

      R. Clark, Energy and Environment 21(4) 170-200 (2010) ‘A null hypothesis for CO2′.
      R. Clark, ‘California Climate is caused by the PDO, not by CO2′
      R. Clark, Gravity rules over the photoncs in the greenhouse effect—guest-post-by-roy-clark-201.php
      [Note: Figure 6 in this ref gives the half hour flux terms for a real surface]
      J. D’Aleo and D. Easterbrook, Multidecadal tendencies in ENSO and global temperatures related to multidecadal oscillations
      A. Cheetham, A history of the global warming scare
      [This shows the IPCC liars we are really dealing with]
      Lindzen, R.S. & Y-S. Choi, Geophys Res. Letts., On the determination of climate feedbacks from ERBE data, 2009, 36 L16705 1–6
      [Goodbye water feedback]
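For what it's worth, the back-of-envelope numbers in the quoted comment can be checked with a few lines of Python. This is only an arithmetic sanity check of the figures as quoted, not an endorsement of the argument; the Stefan-Boltzmann step is the standard no-feedback perturbation formula dT = dF / (4·sigma·T³), included here on the assumption that this is where the quoted ~0.3 C comes from.

```python
# Sanity-check the half-hour flux arithmetic quoted above.
half_hour = 0.5 * 3600     # seconds

co2_flux = 1.7             # W/m^2, claimed extra downward flux from +100 ppm CO2
solar_flux = 1000.0        # W/m^2, full summer sun (quoted figure)
night_cooling = 100.0      # W/m^2, low-humidity night-time cooling (quoted figure)

print(co2_flux * half_hour / 1e3)        # ~3.06 kJ/m^2  (quoted as ~3 kJ/m^2)
print(solar_flux * half_hour / 1e6)      # 1.8 MJ/m^2    (matches the quote)
print(night_cooling * half_hour / 1e6)   # 0.18 MJ/m^2   (matches the quote)

# Standard no-feedback Stefan-Boltzmann perturbation at T = 288 K,
# presumably the source of the ~0.3 C figure quoted earlier:
sigma = 5.670e-8           # W/m^2/K^4, Stefan-Boltzmann constant
dT = co2_flux / (4 * sigma * 288**3)
print(round(dT, 2))        # ~0.31 C
```

The quoted kJ/MJ figures are simply flux multiplied by 1800 seconds, so they check out as arithmetic, whatever one makes of the physical argument built on them.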

    • Richard C on 06/10/2010 at 9:41 am said:

      Have issued this challenge to Gareth at Hot Topic:

      Richard C2 October 6, 2010 at 9:01 am

      “So you are happy to accept that you are wrong in the particular respects I have pointed out? Or are you just not prepared to continue the discussion?”

      Whether you or I are right or wrong at this juncture is immaterial in this 1:1 debate.

      The appropriate international forum for all the issues we are covering has now opened up at Judith Curry’s blog.

      I suggest that our respective points of view are presented on that forum where deficiencies of argument are immediately taken to task by experts. Given that the focus of the forum is uncertainty in the models, I think you will find that you are defending the indefensible.

      So I will be continuing this discussion on that forum (but not here) and whether you contribute or not there is up to you.

      Thank you for the opportunity to present my point of view on your Blog; this discussion has been much more stimulating than preaching to the converted at sceptic sites.

    • Richard C on 06/10/2010 at 10:12 am said:

      An interesting development.

      05 Oct 2010: Analysis
      “On Climate Models, the Case For Living with Uncertainties” by Fred Pearce


      Clearly, concerns about how climate scientists handle complex issues of scientific uncertainty are set to escalate. They were highlighted in a report about IPCC procedures published in late August in response to growing criticism about IPCC errors. The report highlighted distortions and exaggerations in IPCC reports, many of which involved not correctly representing uncertainty about specific predictions.

      But efforts to rectify the problems in the next IPCC climate-science assessment (AR5) are likely to further shake public confidence in the reliability of IPCC climate forecasts.

      Last January, Trenberth, head of climate analysis at the National Center for Atmospheric Research in Boulder, Colo., published a little-noticed commentary in Nature online. Headlined “More Knowledge, Less Certainty,” it warned that “the uncertainty in AR5’s predictions and projections will be much greater than in previous IPCC reports.” He added that “this could present a major problem for public understanding of climate change.” He can say that again.

    • Richard C on 08/10/2010 at 9:50 am said:

      Fred Pearce’s article is making headlines.

      Reuters has now picked up the same article under a different headline:

      “Climate Models: Get Ready for More Uncertainty”

    • Richard C on 06/10/2010 at 11:14 am said:

      Also this progression:

      30 Aug 2010: Analysis
      “The Effect of Clouds on Climate: A Key Mystery for Researchers” by Michael D. Lemonick


      A major problem facing climate modelers is extrapolating the behavior and impacts of clouds from an individual level to a regional scale. The resolution of climate models — the grid boxes researchers divide the atmosphere into for the purposes of simulations, analogous to the pixels that make up a digital image — is much bigger than any individual cloud. And, says Randall, what goes on inside those grid boxes in the real world varies widely depending on local conditions, including the type of particles around which water vapor condenses to form clouds.


      Randall cited one example of a huge regional cloud phenomenon in the tropics whose behavior in a warming world is uncertain. Known as the Madden-Julian Oscillation, the phenomenon involves the formation of enormous systems of thunderstorms over the oceans, driving weather patterns affecting millions of people. “Most models do not even produce this phenomenon, even though it’s the largest feature in tropical atmosphere,” said Randall. “If you’re missing that, you’re missing an important thing. We’d like to be able to predict whether it will get stronger and more common, or less.”

      Climate scientists would obviously be far more confident in the models if the simulations of cloud behavior matched the real world. But just as with the computer models, observations of clouds have been too spotty to get an accurate picture of what’s going on. Meteorologists have been taking reasonably consistent readings of temperatures around the world for more than a century, which is why the Intergovernmental Panel on Climate Change can talk so confidently about the fact of global warming. But there’s no comparable data set on clouds, which means that “there’s really nothing we can say about how clouds have changed globally over the 20th century,” says Amy Clement, a climatologist at the University of Miami.

      Take a deep breath and actuate the BS filter before reading the entire article – the author assumes a “warming world”, but it is readable nonetheless.

    • Richard C on 06/10/2010 at 2:00 pm said:


      Climate science modeling now getting serious stick from other fields – nuclear, chemical etc.

      Some very astute and knowledgeable comments in this (very long) thread at Climate Audit:

      “Curry Reviews Jablonowski and Williamson”


      Frank K.
      Posted Feb 3, 2008 at 10:24 PM | Permalink

      “There aren’t any standard test cases used by atmospheric modelers.”

      I find this utterly astonishing! You mean no one has bothered to apply various GCMs to reference solutions until now? And we’re talking about just the dynamical cores here…

      There is also another related issue that I believe plagues many of the numerical models. How do you prove that the algorithms expressed in the computer code are actually solving the equations they purport to be solving? That is, has anyone done a software audit on these codes? Many organizations, like NASA GISS, provide little to no documentation of the algorithms, even though the code itself is provided. For those who are interested, take a look at Model E for instance at the GISS website. What equations are used for the dynamical core? Are they implemented correctly? Has any stability analysis been performed on the discrete equations? What are the stability limits? How do changes in parametric models (e.g. ocean, ice, tracers, radiation, precipitation models) and initial conditions affect the stability? I could go on…

      Yet, these are the numerical models that are being used to advise policy makers on future climate. I think it is high time we demand the same kind of software verification and validation for climate models in particular that we demand for codes used in, say, the nuclear industry.

      [The formulations he is looking for are here but the symbols don’t come across on the web]

      [Also “There aren’t any standard test cases used by atmospheric modelers.” addressed (unconvincingly) by Curry down-thread]
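      Frank K.’s verification question – do the discrete equations actually solve what the code claims? – is normally answered with order-of-accuracy testing. A minimal Python sketch of the idea on a toy central-difference operator (nothing here comes from any actual GCM):

```python
import numpy as np

def central_diff(f, x, h):
    """Second-order central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Order-of-accuracy check: halving h should cut the error by roughly 2**2,
# because the scheme is (in theory) second-order accurate.
x = 1.0
steps = [1e-1, 5e-2, 2.5e-2]
errors = [abs(central_diff(np.sin, x, h) - np.cos(x)) for h in steps]
rates = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
# each observed rate should sit close to the theoretical order of 2
```

      If the observed rate drifts away from the theoretical order, either the discretisation or its implementation is wrong – which is exactly the kind of check Frank K. says is missing.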

      Posted Feb 3, 2008 at 10:44 PM | Permalink

      #3 and #4:

      Having spent fifteen years writing software in the nuclear industry, I have an appreciation for the issues associated with doing V&V on complex software. I’ve also been involved in several code re-engineering projects.

      Referring to the discussion concerning GCM models underway in Steven Mosher’s link, it would be a most interesting (and expensive) experiment to take one of the older GCM models running in procedural language, construct a requirements document and a code design document for the GCM under current V&V practices, and then re-code it using a more modern object-oriented language.

      Would the two incarnations of the same GCM model produce identical results starting out with identical initial parameters?

      My guess is, not the first time around doing the re-coding, and probably not even the second time around.
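      The re-coding experiment described above comes down to a field-by-field comparison of the two codes’ outputs. A hypothetical sketch of such a check – the field names, values and tolerance are invented for illustration:

```python
import numpy as np

def compare_runs(out_a, out_b, rtol=1e-12):
    """Field-by-field agreement check between two code versions.

    Bit-for-bit identity across languages and compilers is rarely
    achievable, so in practice one compares within a tolerance and
    then asks whether the residual differences grow over the run.
    """
    return {name: bool(np.allclose(out_a[name], out_b[name], rtol=rtol))
            for name in out_a}

# hypothetical end-of-run fields from the old and re-engineered codes
old = {"temperature": np.array([288.0, 289.1]),
       "pressure":    np.array([1013.2, 1011.8])}
new = {"temperature": np.array([288.0, 289.1]),
       "pressure":    np.array([1013.2, 1011.9])}

report = compare_runs(old, new)
# temperature agrees; pressure differs beyond the tolerance
```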

      Geoff Sherrington
      Posted Feb 4, 2008 at 12:32 AM | Permalink

      Prof Garth Partridge held several eminent positions in Australia, latterly CEO of the Antarctic CRC. So we can assume he is quotable, at least to provoke discussion. Quote –

      One can throw a grid of measurement (as dense as you like) all over the plume (smoke in box example explained elsewhere) at some particular time, but won’t be able to forecast the eddy behaviour for very long thereafter. Basically this is because of the growth of sub-scale eddies which seem to come out of nowhere.

      Is this just one of a set of limits to calculation that inherently restrict the utility of modelling? It takes my mind back to Mandelbrot’s fractal images, where you mined deeper and deeper to get different patterns, seemingly without limit.

      Is there a worthwhile payback for the work Judith Curry proposes, or is it merely another demonstration that too many people drew inferences from models before they were ready? Will they ever be ready in the sense that they can be? Meanwhile, what of policy formulation……


      William Newman
      Posted Feb 4, 2008 at 8:14 AM | Permalink

      #4: “There aren’t any standard test cases used by atmospheric modelers. I find this utterly astonishing!”

      I don’t think this should be so astonishing, though you might want to be astonished by some other stuff. (And if my discussion of the other stuff is insufficiently close to the original topic, it’s OK with me if some moderator-type person deletes all but the next two paragraphs.)

      I did some undergraduate research, and my Ph.D., in simulations of biomolecules. There are some standard sorts of checks that people tried to do — e.g., compare to the atomic positions found by experimentalists in really-well-studied macromolecules. But even with complete scientific integrity there can be vexing practical obstacles to making satisfactory “standard test case” choices. For example, the stuff which is known most clearly can be annoyingly far from the regime which is of the most practical interest. For the proteins-in-solution problem which motivated my work as an undergraduate programmer, the experimentalists had very good results for protein crystals, but they have to impose pretty weird harsh conditions (like extreme salt concentrations) to get them to crystallize, and what we care about much more in practice is how proteins behave in milder environments more like the insides of living organisms.

      I am generally underimpressed by the climate science folk — e.g., yesterday I was idly wondering whether a page like would look more like Darwinism or Lysenkoism to someone who doesn’t already have my cynical view. And I generally support some common criticisms of modelling, like Steve McIntyre’s remarks somewhere about how blurring the distinction between measured and modelled/extrapolated results is a well-recognized sin in mining-related advocacy and should be here too. But I don’t think you should necessarily be shocked at the lack of standard test cases.

      One less-standard criticism you might want to be shocked at, or at least very seriously disturbed by, is lack of attention to other kinds of verification of predictive power. It is weird for me, coming from molecular modeling (and a general interest in the history of science), to see people paying so much attention to matching a very small number of observations given the large number of parameters in the model. It is particularly weird because I currently have _The Minimum Description Length Principle_ checked out from the local university library (for my own machine-learning reasons, not for policy advocacy reasons). In my experience, when scientists have a theory which they believe to have a lot of predictive power and only a few obvious numbers of practical importance to test it against, they look intensely for less obvious practically unimportant observations to test it against. These numbers may not come up in, e.g., testimony before Congress, since they may be honest-to-goodness very hard to present in a few pretty pictures. But they come up all the time in more technical discussions.

      E.g., in molecular modelling, people got very interested in higher-dimensional nuclear magnetic resonance data. Such NMR doesn’t naturally give the same kind of pretty every-atom-in-its-fixed-place pictures as X-ray crystallography, but it gives a great volume of weird detailed little constraints and correlations which could be cross-checked against a model. And even if what everyone cared about in practice was some simple high-level summary like the function of a protein (e.g., something like the O2 affinity of hemoglobin), nobody would present a new model with hundreds of parameters and focus only on its fit to a few-parameter curve of bound O2 vs. partial pressure of O2. If you can’t find datasets with very large numbers of degrees of freedom to compare against (like higher dimensional NMR, or various other kinds of spectroscopy), caution is in order. And if the modeller isn’t terribly interested in finding such datasets, perhaps great caution is in order.

      Without knowing much specific about climate models, coming from chemical modelling I’d expect two general things of them. First, they’d regularly refer to at least one huge family of checkable things about regional distributions and correlations and so forth, rather larger than “the number of adjustable parameters” (a vague concept, but one which can be made more precise, as e.g. in the book I mentioned) in their models. Second, if they’re confident their models are precise enough to pick out interesting nonobvious phenomena (e.g., famously, that the net temperature response to CO2 concentration is considerably higher than the first-order effect) then even if the mechanism doesn’t leave clear footprints in the current experimental datasets, it should leave footprints in some imaginable experimental datasets that the climate folk are now passionately longing to measure (I dunno: nocturnal concentration fluctuations of an ionized derivative of CO2 over the Arctic and Antarctic). I don’t absolutely know that these things don’t exist, but I’ve spent some hours surfing RealClimate without noticing any hints that they do. (And if molecular modelling had been subject to the same level of informed controversy as climate modelling, I’d’ve expected that a few hours surfing the RealBiomolecule site would have given many such hints.)

      I’d be reassured to see climate modellers hammering on how detailed interesting regularities in experimental data (existing or wished-for) are explained or predicted by their model: something like the (honestly exasperated) way biologists refer to all the megabytes of detail revealed by DNA sequencing and other molecular biology which just keeps matching the constraints of Darwin’s model. So far, that honestly exasperated attitude hasn’t come through as strongly as I’d like.:-| I don’t expect the climate scientists to be angels: I’ve followed the creationism dispute for decades, and honest competent biologists are not immune to the temptation to be exasperated at distractions like their critics’ funding not coming from the holy NSF and their critics’ almighty credentials not being biology degrees. But biologists seldom get so fascinated by distractions that they forget to refer to fundamentals like the enormous volume of detailed regularities in nature that the consensus model matches.
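      Newman’s worry – many adjustable parameters judged against few observations – is the standard overfitting problem. A toy illustration (unrelated to any real climate model): an 8-parameter polynomial reproduces 8 noisy observations of a simple linear law almost exactly, yet that in-sample fit says little about points the fit never saw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight noisy observations generated by a simple underlying law (y = 2x)
x_obs = np.linspace(0.0, 1.0, 8)
y_obs = 2.0 * x_obs + rng.normal(0.0, 0.05, size=x_obs.size)

# A 7th-degree polynomial (8 free parameters) threads the observations
# almost exactly -- the fit "matches the data"...
coef = np.polyfit(x_obs, y_obs, deg=7)
in_sample = float(np.max(np.abs(np.polyval(coef, x_obs) - y_obs)))

# ...but agreement with the few points it was tuned to says little about
# behaviour anywhere else, including a modest extrapolation to x = 1.2
x_new = np.linspace(0.0, 1.2, 50)
out_of_sample = float(np.max(np.abs(np.polyval(coef, x_new) - 2.0 * x_new)))
```

      This is why Newman asks for datasets with many more degrees of freedom than the model has parameters: only those can distinguish genuine predictive power from curve-fitting.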

      Posted Feb 4, 2008 at 2:07 PM | Permalink


      If climate models are to be used to learn what it is we don’t know about climate, then I agree with you vis-à-vis V&V.
      If climate models are going to be used as the justification for restructuring the entire world’s economy, then nuclear/avionic levels of V&V are the absolute minimum that I am going to demand.

      The climateers can’t have it both ways.

      Judith Curry
      Posted Feb 4, 2008 at 3:39 PM | Permalink

      Re V&V, probably the model with the best documentation is ECMWF (which has the same dynamical core and mostly the same parameterizations as the ECHAM climate model). Here is the link to the full archive of their technical notes. Read all this and let me know if you have confidence in the model.
      Specifically, the technical memos and technical reports.

      Judith Curry
      Posted Feb 4, 2008 at 5:50 PM | Permalink

      For those of you taking potshots at the models, going to the ECMWF site (#32) is a must, even if you just read the titles of the tech notes. This will give you an idea of what goes into these models and how they are evaluated. Rejecting the models because of the lack of a standardized set of tests is irrational.

      Scott-in-WA has some sense of the reality of climate modelling. You will really need to crank up the activity in the tip jar to pay for the development and implementation of standardized tests for each element of a climate model. In the best of all worlds, would this be done? yes. But actually figuring out how to do this for such a complex numerical system is no easy task, and then getting people to agree on what tests to actually use is probably hopeless, and finding resources to fund such an activity is a fantasy. Unfortunate, but this is reality.

      steven mosher
      Posted Feb 4, 2008 at 6:43 PM | Permalink

      re 37. Judy, Judy, Judy.

      “For those of you taking potshots at the models, going to the ECMWF site (#32) is a must, even if you just read the titles of the tech notes. This will give you an idea of what goes into these models and how they are evaluated. Rejecting the models because of the lack of a standardized set of tests is irrational.”

      1. I have slogged through almost the entire 100K lines of ModelE. Now I am starting on the MIT GCM, which is much easier. So some of us have earned our potshots. In walking through ModelE I found nothing to recommend it. No test cases. No test suites. No test drivers. No unit tests. No standardized tests. At one point Gavin directed me to a site of “test data”. I found errata exposing monumentally stupid programming blunders that your worst GT undergraduate programming student wouldn’t commit to a daily build after an all-night bender.
      Worse, when I requested access to the IPCC data, I was denied. Private citizens cannot get access to this data.
      You want to talk about irrational? Irrational is this: no spec, no coding standard, no test plan, no verification, no validation, no manual, no documentation, no public access, no accountability.

      I’ll link some climate modelers in a bit saying essentially the same thing. You can potshot them.

      “Scott-in-WA has some sense of the reality of climate modelling. You will really need to crank up the activity in the tip jar to pay for the development and implementation of standardized tests for each element of a climate model.”

      I ran COCOMO, which NASA uses, to estimate a total rewrite of ModelE. With full V&V it’s less than 10 million dollars. It is the responsibility of program managers within NASA to make the appropriate budget requests. The problem is they don’t value transparency and openness and testing and accountability.
      Ask the guys on Challenger. Oops, they are dead. Pity, that. The explosion was pretty, however.

      Judith Curry
      Posted Feb 4, 2008 at 7:19 PM | Permalink

      I agree that anyone slogging through a GCM code is entitled to make potshots. But there is a WORLD of difference between the ECMWF model and the NASA GISS model. I encourage you to look at the documentation of what is regarded to be the best atmospheric model in the world. ECMWF puts NASA and NOAA to shame.

      A model of the complexity of global models is never going to be perfect. What can we learn from an imperfect climate model? I refer you to a paper by Leonard Smith, who is somewhat of a guru in the field of dynamical systems, their simulation, and applications to atmospheric models.

      Pat Frank
      Posted Feb 4, 2008 at 8:40 PM | Permalink

      #41 — “I agree that anyone slogging through a GCM code is entitled to make potshots.”

      Sufficient, but not necessary. Anyone who can appreciate the model errors documented in the 4AR Chapter 8 and Chapter 8 Supplemental, which show that GCMs not only make large intrinsic errors when tested against observables but also that different high-resolution GCMs can make large-scale errors of the opposite sign when tested against the very same observable, is entitled to make potshots. These GCMs all presumably include the analogous physics and are parameterized with best-guess estimates. Nevertheless, the error residuals vary from GCM to GCM, often wildly. This is hardly cause for confidence in prediction.

      Steve Hempell
      Posted Feb 5, 2008 at 12:44 AM | Permalink

      I just read the Dr. Syun-Ichi Akasofu paper that is up on the ICECAP website. In it he states “we asked the IPCC arctic group (consisting of 14 sub-groups headed by V. Kattsov) to “hindcast” geographic distribution of the temperature change during the last half of the last century.” The result: “We were surprised at the difference between the two diagrams in Figure 11b. If both were reasonably accurate, they should look alike. Ideally, the pattern of change modeled by the GCMs should be identical or very similar to the pattern seen in the measured data. We assumed that the present GCMs would reproduce the observed pattern with at least reasonable fidelity. However, we found that there was no resemblance at all, even qualitatively.” I would have presumed the IPCC would have used their best GCM.

      He then goes on to give two examples of how to use “GCM results to identify natural changes of unknown causes.” That was a nice twist.

      Doesn’t fill you with a warm fuzzy feeling about GCMs.

      Tom Vonk
      Posted Feb 5, 2008 at 10:53 AM | Permalink

      OK, so I took my time and looked at that ECMWF that’s supposed to be the 8th marvel of the world. After having looked at the titles of all the documents accessible on line, I selected the Radiation Transfer that I know well. More precisely, “The technical memorandum 539: Recent Advances in Radiation Transfer Parameters.”

      I took the stance of an independent expert charged with auditing this document.
      I must say that the result was depressing – it has all the flaws that have already been mentioned in this thread. Specifically, the part that should compare the new McRad (new model) predictions to reality, with a precise description of the experimental detail and data treatment, is completely missing.

      They want to introduce a random cloud generator.
      Besides the generic method consisting of comparing model runs against model runs (which would amount to comparing error to error if the models were inadequate), there is a weak attempt at comparing runs with reality.
      So here, with cloudiness, CERES is mentioned in that respect.
      However, the CERES equatorial satellite doesn’t work, and the other two are in polar orbits, which means that you always get readings at the same time of day.
      A question forces itself upon us – what kind of data did they use? What kind of “cloudiness” did they extract from CERES? How did they (re)treat it?
      Shouldn’t that be at least mentioned in a document that recommends nothing less than redoing the biggest part of a radiative model?
      Well, it is not.
      The argument for the “cloud generator” is weak and is supposed to be supported by one work mentioned as a reference.
      This question being central, something better than only a reference should be in the report.

      They also want to model aerosols.
      Here the MODIS channel is mentioned – same satellites as CERES, same remarks.
      Yet MODIS, on top of the above caveat, gives neither the vertical distribution of aerosols nor their nature.
      So what is it used for, what data treatment, what period?
      That is not mentioned either.
      On the other hand, valuable time and space is wasted on an anecdote showing a MODIS picture of a sand plume coming from the Sahara to Europe and a chart of a simulation.
      There is a qualitative agreement between the two.
      Of course it doesn’t impress anybody, because the Romans already knew 2000 years ago that when the wind was coming from the south and the weather was fine, sand from the Sahara could reach Europe.
      They didn’t need satellites and multimillion-dollar computers.
      So on this topic too, no adequate argument is made about why the skill of the new model should represent reality better.
      No attempt is made to justify the statistics used.
      For example, what time averaging is relevant, and what bias might there be?

      Of course HITRAN is used.
      Therefore collisionally induced emissions/absorptions (and no, they are neither zero nor negligible) are ignored, because they are not in HITRAN. I have already noticed that people use HITRAN like a magical word – if you say HITRAN, you access the Nirvana of infinite accuracy and the world where “the radiative transfer is a settled science”.
      Well, it is not, in the famous details where the devil is.
      Also, CFCs are mentioned.
      We know just about everything about their radiative properties, but we know very little about their distribution and have practically no past data.

      The list of references and charts is almost as long as the text, and that is never a good sign.

      Conclusion after 2 hours of reading the document:
      It is neither V nor V.
      The way the document is presented doesn’t offer the reader a logical argumentation structure, doesn’t separate the essential from the secondary, and doesn’t explain and structure the experimental data and its treatment where applicable.
      In short, the reader is either supposed to be one of the 19 people who wrote the report (yes, that is about 1 page per person) or to have unlimited faith in those 19 people.
      The reader can neither validate the recommendation presented (the use of a new model) nor, even in principle, redo/verify any part or statement of the report.
      If I were charged with a real audit of this document (or generally of other documents produced by those 19 people), I would add:

      – the above doesn’t mean that the 19 people don’t know what they are doing. They probably do.
      – the above is not an exhaustive analysis. Many more flaws, defects and methodological errors would probably appear after a thorough examination.
      – an opinion on 1 document doesn’t represent a synthesis of all documents. There may be others of much better quality.
      – but… between us, if you ask me, they are really sloppy.

      Judith Curry
      Posted Feb 5, 2008 at 12:09 PM | Permalink

      I just did an interesting exercise: google the three words (no quotes)
      ECMWF model validation (18,000 hits)
      ECMWF model verification (10,400 hits)

      reproduce this exercise, cruise the titles of the first few pages of hits, and you will get some sense of the complexity of this challenge and the huge amount of work that has been done on this issue.

      Posted Feb 5, 2008 at 1:47 PM | Permalink

      Yes. Some people are arguing about whether or not the GCM’s are perfect. Other people really are discussing the V&V. These are entirely separate issues.

      GCM’s can’t be perfect. The people asking for that will never be convinced by a GCM. But that’s probably only a very small fraction.

      GCM’s are complex and so more difficult to validate than models describing simpler things.

      However, neither complexity nor the impossibility of perfection interferes with the “do-ability” of a V&V! The goal of V&V is absolutely not to create a perfect model, and complexity is simply not a problem.

      If you can write a code, you can do a V&V. Complexity doesn’t actually affect Verification very much; it only means a longer document is required because there are more modules to test. If a model is approximate and has difficulties (like GCM’s), or the results are difficult to interpret, that is reflected in the narratives and figures contained in the Validation document. All the caveats you describe here would be included in the validation document in written form, where third parties could read them. That’s a purpose of the validation.

      But with regard to this:

      In the case of the ECMWF model, the V&V is very clear (read the technical notes, memos).

      No. It’s not at all clear in those notes and memos.

      I suspect the V&V for ECMWF may well be very clearly stated somewhere. In so far as a model is used for weather prediction broadcast to the public, and its predictions have a direct impact on public safety, funding and regulatory agencies probably do require V&V for any model.

      That said, there are zillions of links at the site you point to. I’ve clicked on 20 or so links to reports and memos and skimmed. They look like good reports. They look like decent science. Unfortunately, with regard to the discussion going on in this thread, absolutely nothing I clicked remotely resembles a Verification. A small fraction of documents contain snippets that would belong in validations, but those would be stupendously incomplete as validations.

      Could you point to an individual link that you think looks like a verification?

      If you could point to a specific one, this gets us all on the same page. Otherwise, right now, it looks like everyone is talking past each other.

      Meanwhile– Dan, or Tom Vonk, could you supply Judy with examples of formal verification documents? (I know this is difficult since most are only available in the grey literature and they are generally a set of documents. At that, they represent such a small fraction of the grey literature that you need to know a named party did one. But if either of you was involved in one, then that might help Judy understand.)

      (FWIW, in some ways, I’m agnostic on this issue. I frankly don’t give a hoot whether GCM’s are verified or validated because I don’t base any of my judgment about AGW on the results of GCM’s. I rely on simpler energy balance models supported by temperature trends, and some physical arguments. I think the balance of the evidence points toward warming caused by human activity.

      Nevertheless, if documents describing formal validation and verification of GCM’s do exist (and I’m betting Quatloos they don’t), it would be useful to the never-ending discussion if someone could identify those specific documents.

      Identifying alternative documents of the sort that appear in academic journals and incorrectly calling them V&V just won’t do for people who know what V&V is and want to see V&V. (It’s a bit like giving someone chocolate ice cream when they want chocolate fudge and saying “See, I gave you chocolate! And anyway, ice cream is better than fudge – you should want fudge!” Ice cream is tasty, but the customer wants fudge.)

      Only bringing the V&V to the table can end these bitter arguments about whether or not V&V has been done.

      [i.e. Don’t take what someone tells you for granted]

      [And now my personal favourite]

      Craig Loehle
      Posted Feb 4, 2008 at 8:14 AM | Permalink

      For my Ph.D. thesis I developed a model of a grazing ecosystem in Pascal. It ran beautifully, but one curious bug was that when I simulated adding cows to the range, it stopped raining. Pretty realistic, a rancher would say, but really due to utilizing dynamic arrays of unequal length, so the added item stomped on some of the memory. I would say my code from 1981 was much better structured and documented than Model E etc. Sad but true.
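      Loehle’s cows-stop-the-rain bug is a textbook overlapping-buffer error. A hypothetical NumPy re-creation of the same failure mode – two “dynamic arrays” carved from one buffer with a miscomputed offset, so writing to one silently corrupts the other (all names are invented):

```python
import numpy as np

buf = bytearray(8 * 8)  # one shared 64-byte buffer backing both "arrays"

# rain occupies bytes 0..31 (four float64 values)
rain = np.frombuffer(buf, dtype=np.float64, count=4, offset=0)

# BUG: cows should start at offset 32, but a miscomputed offset of 24
# makes its first element overlap rain's last element
cows = np.frombuffer(buf, dtype=np.float64, count=4, offset=24)

rain[:] = 1.0    # it is raining everywhere
cows[0] = 99.0   # add cows to the range...
# ...and rain[3] is silently stomped to 99.0: adding cows changed the rain
```

      Nothing crashes and nothing warns – which is exactly why such bugs survive in undocumented, untested simulation codes.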

    • Richard C on 06/10/2010 at 4:19 pm said:

      At the risk of brow-beating, some thoughts on the state of play re climate model uncertainty, as the result of my solitary up-thread odyssey.

      Our worst nightmare is just around the corner.

      That is the cobbling together of climate simulation models and economic simulation models.

      Think Gareth Renowden – Gareth Morgan.

      If dear reader, you are overwhelmed by the sheer volume of discussion, complexity and concept in regard to climate models, then let me alleviate your pain.

      First, the significance of this development.

      Think of how globally: politicians, policy-makers and the public have been hoodwinked by the results of the climate models.

      Now consider that the next phase for IPCC AR5 will be policy built on the results of climate-economic coupled computer simulations with AGW hard-wired in.

      If this is news to you then you are behind the 8 ball and without further education you will be blind-sided when you first encounter the executive summary.

      Gareth Renowden is up to speed:

      Gareth October 5, 2010 at 9:39 pm

      Yes, the models are run to equilibrium state with a prescribed atmosphere and other forcings, before being fed the trajectory chosen for study. This does not mean that the “parameters are hard-wired”, it means that the intial climate forcings are chosen to allow a stable “climate” within the model. The response to forcing from that state is prescribed by the physics in the model. If you want to argue about the radiation transfer code, then you need a different debate.

      AR5 modeling will use the latest versions of the earth systems models available, and also use new “policy relevant” scenarios. This means that they will (of course) produce projections that differ from AR4. That’s a good thing, not a sign of failure.

      Are you?

      Note Gareth’s obfuscation re AGW hard-wiring and the “good thing” spin (uncertainty has been increasing with each successive IPCC report).

      An example of the “earth systems models” he is referring to is here: an aforementioned climate-economic coupled model.

      You can rapidly get up to speed on the progression of the climate model uncertainty discussion, and on where climate science stands in the context of scientific and engineering simulation generally (not good), in preparation for the next onslaught on your senses, by reading the following two threads (posts and comments):

      First from Climate Audit

      Curry Reviews Jablonowski and Williamson

      Second from Climate Etc

      What can we learn from climate models?

      Well, that’s your homework.

      Don’t say I didn’t warn you.

      More at a later date on the distinction between IPCC prescribed RF methodology (ACO2 forced and Naturally forced) and alternative simulations using NON-IPCC RF methodology and NON-IPCC Natural forcing (big difference).

      I have already approached Dr David Wratt at NIWA in this regard (inserted in comments about 6 posts ago); he has acknowledged my approach and said he will give a detailed reply, but I have not heard back to date. Given my non-entity status, I am not holding my breath.

      Undaunted, I’m off to search the web for glimmers of hope in the NON-IPCC RF Method/Naturally forced sphere of climate model simulations. This shouldn’t be difficult, it must only be a very small sphere.

    • Ron on 06/10/2010 at 5:09 pm said:

      Thanks for that Richard, it is shocking to see that those objections at Climate Audit date back over two and a half years and seem to have had zero effect. At least Judith Curry is showing a more honest approach to scientific enquiry.

    • Richard C on 08/10/2010 at 11:27 am said:


      Judith Curry’s approach may not be that honest, but I am willing to give her the benefit of the doubt.

      I get the impression that she is either:

      A. Protecting the status quo by raising a strawman, or

      B. Just has not investigated other avenues.

      e.g. Her observation:

      “So far, it seems that the biggest climate model uncertainty monsters are spawned by the complexity monster.”

      I disagree entirely.

      My thinking is that as models evolve and unknown functions are addressed (clouds etc), certainty in the functions INCREASES, but over the last 7 years, say, certainty in the results has DECREASED (uncertainty increased).

      As time progresses, certainty in functionality should continue to increase, and therefore uncertainty in results SHOULD decrease – but it won’t, for the following reason, which is the notion that I do not think she has entertained in B above.


      The PCMDI project that supposedly makes model inter-comparisons is a massive group-think exercise and somewhat incestuous.

      The IPCC’s assertion that: well, we took out CO2 forcing and ran 15 simulations on 5 different models using natural forcing only (Lean solar) with OUR RF methodology and the simulations failed to mimic 90’s warming, JUST DOES NOT STAND UP TO SCRUTINY.

      Both the IPCC’s ACO2 forced AND the naturally forced simulations, failed to mimic the 1930’s warming AND the ACO2 forced simulations are now diverging from the observed condition (points of inflexion across all metrics in the mid 2000’s).

      I intend to make a comment on Judith’s “What can we learn from climate models?” thread (currently 188 comments) but have been distracted (Statement of Defence issues).

      Given the rarefied atmosphere of discussion, the key is to choose my words carefully to get her attention, and perhaps that of lurking heavyweights. Being an Antipodean, knuckle-dragging non-entity from downunda will not help my cause.

      I posted this in comments at Hot Topic, whereupon my very aggressive antagonist (RedLogix) immediately disengaged from scientific discussion and wandered off into an ideological vein:

      Richard C2 October 5, 2010 at 8:37 am

      Sorry, I’ll keep it simple.

      CO2 fails dismally to account for the 1930s warming, but sunspot cycle length correlates with temperature over the entire warming period:

      CO2 fails dismally to correlate with Arctic-wide Surface Air Temperature anomalies:

      But solar irradiance does:

      Where are the models that mimic these natural phenomena?

      If someone could point me to them, it would be greatly appreciated.

  10. G.S. Williams on 04/10/2010 at 10:00 am said:

    Just a couple of points.

    Your header should be “Filmed free for nothing”. The word “for” MUST be followed by a noun, not an adjective. In the context of your header, “free” is an adjective. Also, the word “but” is superfluous in that context.

    I hope that this is of help.

    G.S. Williams.

    • Thank you, G.S. Williams.

      Efficient. Lovely. Changed.

      P.S.: I’ve left the hyperlink as it was or things would break.

    • Ron on 04/10/2010 at 11:31 am said:

      Don’t want to get embroiled in a prescriptivist grammar stoush here, but the expression “do something for free” is perfectly acceptable colloquial English. It is in, e.g., the Cambridge Learner’s Dictionary and the Chambers 21st Century Dictionary.
      (I suppose “for free” is an adverbial phrase, not preposition plus adjective.)

  11. Andy on 04/10/2010 at 8:32 pm said:

    Richard North has provided a lot of background to this over at

    In addition to the various spoofs (including a Monty Python remix) there is some interesting feedback from disgruntled supporters of 10-10.

    It really does seem like they have blown it this time.

  12. Andy on 04/10/2010 at 8:42 pm said:

    If there’s one video that you have to watch, it’s this one

    It neatly segues through all the propaganda used to terrorise children over recent years.

  13. Andy on 09/10/2010 at 8:05 am said:

    The internet is full of spoof 10-10 videos now

    This one is pretty good

  14. Richard C on 12/10/2010 at 2:07 pm said:

    Delingpole is still wringing every drop from this one. His latest:-

    10:10: who are YOU going to kill to help save the planet?
