Dr. John Happs
“If you tell a lie big enough and keep repeating it, people will eventually come to believe it.”
Attributed (with some uncertainty) to: Joseph Goebbels, Reich Minister of Propaganda in Nazi Germany from 1933 to 1945.
There are now many frightened schoolchildren, and quite a few worried parents, because of what is taught in some classrooms about impending climate-induced doom. We also see personal attacks on those scientists who oppose, with facts, the promoters of climate Armageddon.
Some scientists have lost their jobs because they dared to speak out in defence of common sense, pointing to the lack of empirical evidence that our trivial emissions of invisible, non-toxic, life-giving carbon dioxide threaten life on Earth.
We have escalating power prices, and deaths from fuel poverty, because of the headlong rush into inefficient, unreliable wind and solar power sources.
Turbines blight the landscape, killing birds and bats by the thousands. Solar and wind projects drive up power costs, penalising the poor whilst having zero effect on global climate.
So where has this madness come from and why does it prevail?
It’s likely that decision makers, such as politicians and disseminators of (usually bad) news, such as journalists, listen to dire predictions about climate change that have no empirical evidence to support them. Such predictions come from those who have political or financial vested interests in promoting the catastrophic global warming meme. Some politicians actually believe (or their party insists they believe) the nonsense about dramatic sea level rise, ice sheet melting, ocean acidification, the death of coral reefs and a host of related and imaginary dilemmas.
It’s also likely that some people believe those dire climate predictions not because they are based on reliable empirical evidence, but because they are told that outputs from computer models say so.
The problem essentially comes from some in the climate research establishment who have been tapping into the generous funding that has gone into climate research and computer modelling over the last three decades. Much of that money has come from governments (taxpayers) where some politicians, promising to deliver us from all evil, actually believe the nonsense about catastrophic anthropogenic global warming even though the alarmism has never come from real world evidence but from those unvalidated, unreliable computer models.
Such belief persists even though the alarmist Intergovernmental Panel on Climate Change (IPCC) has itself admitted that:
“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that long-term prediction of future climate states is not possible.”
Third Assessment Report (2001), Section 14.2.2.2, page 774.
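The IPCC’s point about a “coupled non-linear chaotic system” can be illustrated with a few lines of code. The sketch below is not a climate model; it uses the logistic map, a textbook chaotic system, to show how two runs starting from almost identical initial conditions diverge until they bear no resemblance to one another.

```python
# Minimal illustration (not a climate model): sensitive dependence on
# initial conditions in the logistic map, x -> r*x*(1-x) with r = 4.
# Two starting values differing by one part in a billion eventually
# produce completely different trajectories.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map from x0 for a number of steps."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # perturbed by one part in a billion

# The gap between the two runs grows roughly exponentially with each step.
for step in (0, 10, 30, 50):
    print(f"step {step:2d}: difference = {abs(a[step] - b[step]):.6f}")
```

This is why a tiny measurement error in today’s state of the atmosphere swamps a deterministic forecast after enough steps, whatever the computing power applied.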
Climate computer programs, and there are many of them, are very complex. They require millions of lines of code in vain efforts to model all the factors that influence the Earth’s climate even though it is likely there are many factors yet to be identified. Of those factors that are known, many are poorly understood.
The UN’s political/ideological IPCC has predicted from models that, should anthropogenic carbon dioxide emissions continue at the current rate (they contribute a mere 3% of the atmosphere’s trivial 400 ppm of carbon dioxide), we will see a global average surface air temperature increase of around 3°C by the year 2100.
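The arithmetic behind the figures cited above is easily checked. The 3% anthropogenic share is the article’s figure, taken as an assumption here, not a measured constant:

```python
# Back-of-envelope check of the figures cited above. The 3% human
# contribution is the article's figure (an assumption here), not a
# measured constant.
co2_ppm = 400          # total atmospheric CO2, parts per million
human_share = 0.03     # assumed anthropogenic fraction of that total

human_ppm = co2_ppm * human_share
print(f"Anthropogenic CO2: {human_ppm:.0f} ppm "
      f"({human_ppm / 1_000_000:.6%} of the atmosphere)")
```

On those numbers, the anthropogenic contribution works out to roughly 12 ppm, a little over one part per hundred thousand of the atmosphere.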
Of course, alarmist computer projections are usually published with no indication of possible errors. As Smith (2002) reminds us:
“Even in high school physics, we learn that an answer without ‘error bars’ is no answer at all.”
Future climate aside, computer-driven predictions of weather only a few days ahead can be problematic as Senior Forecaster Gordon Banks, from the Australian Bureau of Meteorology frankly stated:
“The atmosphere is very complex. Our models are having trouble doing any forecasts beyond about 7 days.” (ABC Radio, 4th January 2011)
Such intellectual honesty is refreshing but seems to be lacking amongst many climate modellers.
Even short-term weather forecasts can differ markedly depending on the models used and their inputs. We might expect that, in the days before super-computers, differences in weather forecasts for the same period would be inevitable. The various weather forecasts for D-Day, the allied invasion of Normandy (Operation Overlord) in June, 1944 provided notable examples.
Potential landing dates for the Normandy invasion were few since a full moon was needed to facilitate the landing of gliders, men and equipment. Critically, only a 3-day window was available.
English forecasters said to expect stormy weather on the 5th June with a break on June 6th whereas US forecasters predicted clear sunny skies over the English Channel on the 5th June. German forecasters predicted continuing bad weather with rough seas and gale-force winds expected to continue unabated until at least the middle of June. Consequently, most German commanders were confident that an allied invasion would not take place during the month of June.
Field Marshal Erwin Rommel left the coastal defences and went home to celebrate his wife’s birthday on the 6th June.
The invasion took place on Tuesday, 6th June 1944 with the English forecast proving to be correct.
There is no doubt that computing power has grown dramatically in the last few decades yet long-term weather forecasting still proves to be difficult with so many quickly-changing variables involved. Weather forecasters are aware of this problem yet they come under increasing pressure from those who sail, fly, farm and vacation. Everyone wants to know what the weather will be like at a particular destination on some future date and forecasters know they will not always get their long-term forecast right. They also know that, due to the complexity of rapidly-changing weather systems, there will be occasions when they will be spectacularly wrong.
If weather forecasting for the next week or two is difficult, then imagine the difficulty (or impossibility) of making climate predictions into the distant future. Of course this has never deterred the climate alarmists from convincing politicians and media reporters that their computers are totally reliable, putting out forecasts of rising global temperatures with any number of dire consequences for the planet.
The fact that none of their computer-driven predictions about global temperature has ever been correct doesn’t seem to trouble them. They simply move on to their next dire warning for the future (“It’s worse than we thought.”) knowing it will be believed by those politicians who want to believe and those scientists who know that continuing the alarmism will keep their research funds flowing.
Computer-generated climate predictions have been consistently wrong and have resulted in trillions of taxpayer dollars being wasted on climate mitigation schemes that never had any prospect of having the slightest impact on global climate.
In 2016 climate scientist Dr. John Christy testified before the U.S. House of Representatives Committee on Science, Space and Technology, commenting on the reliability of climate models. He compared the average of 102 climate models with observations from the more reliable satellite and weather balloon measurements and concluded:
“These models failed at the simple test of telling us ‘what’ has already happened, and thus would not be in a position to give us a confident answer to ‘what’ may happen in the future and ‘why.’ As such, they would be of highly questionable value in determining policy that should depend on a very confident understanding of how the climate system works.”
Dr. David Henderson and Dr. Charles Hooper from the Hoover Institution at Stanford University pointed out that:
“The ultimate test for a climate model is the accuracy of its predictions. But the models predicted that there would be much greater warming between 1998 and 2014 than actually happened. If the models were doing a good job, their predictions would cluster symmetrically around the actual measured temperatures. That was not the case here…”
Dr. Myles Allen, Professor of Geosystem Science at the University of Oxford said:
“We haven’t seen that rapid acceleration in warming after 2000 that we see in the models. We haven’t seen that in the observations.”
The IPCC’s Dr. Ben Santer admitted:
“In the early twenty-first century, satellite-derived tropospheric warming trends were generally smaller than trends estimated from a large multi-model ensemble.”
No surprise there.
Despite the huge amounts of money still pouring into computing power, models cannot predict global temperatures over the next few years let alone the distant future. As Dr. Bjorn Stevens, from the Max Planck Institute for Meteorology bluntly points out:
“The computational power of computers has risen many millions of times, but the prediction of global warming is as imprecise as ever.”
There is no doubt that the “Pause” in global warming is real, and computer-model predictions of dramatically increased temperatures have proved hopelessly wrong. The bottom line is this: if you don’t understand something and can’t explain your observations, you can’t possibly model it.
Even those factors that are known to influence climate are modelled differently by different groups, as are their assumed impacts. This is known as “parameterization.” We might more accurately call parameterizations “fudge factors,” and as climate scientist Dr. Judith Curry observes:
“Often, different parameterizations deliver drastically divergent results.”
Dr. Will Happer, Professor of atmospheric physics at Princeton University reflected:
“So, if they want to show that the earth’s temperature at the end of the century will be two degrees centigrade higher than it is now, they put in the numbers that produce that result. That’s not science. That’s science fiction.”
If alarmist modellers continue to assume, for political/ideological reasons, that anthropogenic carbon dioxide emissions will lead to global warming and this becomes a dominant computer input, little wonder that predictions will continue to be wrong. As climate scientist Dr. Roy Spencer explains, the modellers have a fixation on carbon dioxide:
“Importantly, we don’t understand natural climate variations, and the models don’t produce it, so CO2 is the only source of warming in today’s state-of-the-art models.”
Former Virginia State Climatologist Dr. Patrick Michaels, agrees:
“The computer models are making systematic, dramatic errors because they are ‘parameterized’ (fudged). We put in code steps that give us what we think it ‘should’ be. The models were ‘tuned.’ We forced the computer models to say, aha! human influence. The models tell us what we wanted to see. The models have been tuned to give an anticipated, acceptable range of results.”
Again, Dr. Judith Curry was forthright, saying of some alarmists:
“The scientists were contracted to be ‘narrowly focused’ on man’s impact and thus ended up ignoring what may be the most important factors, such as solar and oceanic cycles.”
Computer modellers essentially play down or omit factors such as clouds, air-pressure changes, plate tectonics, aerosols, natural oscillations, ozone, solar radiation, ice and snow albedo, cosmic rays, orbital dynamics, volatile organic compounds, evaporation, vegetation, multi-decadal variability, geothermal heat, and precipitation. An even more fundamental problem with climate models rests on a poor understanding about the coupling of two chaotic fluids – the oceans and the atmosphere.
Modellers ignore evidence showing that, when atmospheric carbon dioxide levels and global temperature do track closely, it is the temperature change that precedes the change in carbon dioxide. Instead, they have a political/ideological need to link climate change to human activity, even though there is no empirical evidence for this assertion.
A critical component of any climate model is climate sensitivity, defined as the increase in global average temperature that would result from a doubling of atmospheric carbon dioxide.
IPCC scientists claimed that climate sensitivity lies between 1 and 6°C with a mean estimate of 3.1°C, as a result of feedbacks from a variety of poorly understood factors.
Climate scientists Dr. Thorsten Mauritsen from the Max Planck Institute for Meteorology and Dr. Robert Pincus from the University of Colorado calculated a median climate sensitivity value of 1.5°C.
Dr. Boris Smirnov, author of 20 physics textbooks, has calculated climate sensitivity from anthropogenic emissions of carbon dioxide as being negligible.
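The sensitivity estimates quoted above can be turned into projected warming figures via the standard logarithmic relationship, warming = S × log₂(C/C₀), where S is the sensitivity per doubling of CO₂. The sketch below simply plugs in the two S values quoted in the text; the 280 ppm pre-industrial baseline is an assumption for illustration:

```python
import math

# Standard logarithmic relationship: warming = S * log2(C / C0),
# where S is the climate sensitivity per doubling of CO2. The S values
# below are the estimates quoted in the text; 280 ppm is an assumed
# pre-industrial baseline used only for illustration.

def warming(sensitivity_per_doubling, c0_ppm, c_ppm):
    """Temperature change (deg C) implied by a CO2 rise from c0 to c."""
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

for label, s in [("IPCC mean estimate", 3.1), ("Mauritsen & Pincus", 1.5)]:
    # Warming implied by a rise from 280 ppm to today's ~400 ppm:
    print(f"{label}: S = {s} degC per doubling -> "
          f"{warming(s, 280, 400):.2f} degC")
```

Because the relationship is logarithmic, a full doubling (280 to 560 ppm) returns S itself, and each additional increment of CO₂ contributes less warming than the last.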
Additionally, there are at least 85 peer-reviewed, published papers finding that climate sensitivity is extremely low.
The major drivers of global climate are more likely the sun and the oceans, with the role of carbon dioxide being deliberately exaggerated by the IPCC for its own political/ideological purposes, commonly known as “The Cause.”
The abysmal track record of computer models is seen in two significant failures:
1. No models predicted the pause in warming over the last 20 years despite atmospheric carbon dioxide levels increasing.
2. Models predicted an upper troposphere “hotspot” that is non-existent.
There are other problems linked with temperature trends from the 19th century.
Computer models cannot account for the almost identical temperature increases of 1855-1880, 1910-1950 and 1980-2000.
Computer models cannot show that the 1855-1880 warming phase was due to natural processes whilst the late 20th century warming was anthropogenic in origin, as claimed by alarmists.
Models cannot explain the cooling periods between 1880-1910 and 1950-1980 when atmospheric carbon dioxide levels increased.
If climate models don’t get basic predictions correct, why would anyone trust their predictions about dramatically rising temperatures and the resulting consequences? Marine biologist Dr. Walter Starck highlights the difference between computer model predictions and reality:
“A significant factor in the growing detachment of environmental science from empirical reality has been the rise in popularity of computer modelling and a decline in real-world observation. The latter tends to be uncomfortable to obtain, the results are often messy, frequently don’t support the desired narrative and present a risk of independent examination. By contrast, computer modelling can be done in comfort during office hours, presents an aura of advanced expertise and mathematical certainty, and can be “adjusted” to give a desired result generally inaccessible to independent examination.”
Dr. Mototaka Nakamura is a computer modeller and an expert on cloud dynamics and atmosphere-ocean interactions. In his book, “Confessions of a Climate Scientist: The Global Warming Hypothesis Is an Unproven Hypothesis,” Nakamura makes clear that:
“The models just become useless pieces of junk or worse (worse in a sense that they can produce gravely misleading output) when they are used for climate forecasting.”
“These models completely lack some critically important climate processes and feedbacks, and represent some other critically important climate processes and feedbacks in grossly distorted manners to the extent that makes these models totally useless for any meaningful climate prediction.”
Perhaps NASA climate scientist Dr. Duane Thresher has suggested the real reason why climate modellers persist even though they know their models simply don’t deliver:
“Predicting climate decades or even just years into the future is a lie, albeit a useful one for publication and funding.”
NASA has acknowledged that climate models do not have the ability to accurately model clouds and that this is a major problem.
More than 500 prominent scientists have sent a letter to UN Secretary General Antonio Guterres urging him to de-politicise climate discussions. They point out that UN alarmism is based on unreliable climate modelling:
“The general circulation models of climate on which international policy is at present founded are unfit for their purpose. Therefore it is cruel as well as imprudent to advocate the squandering of trillions on the basis of results from such immature models.”
Dr. Julia Slingo, Chief Scientist at the UK Met Office (from 2009 to 2016) stated that predictions would not get much better until they had super-computers that were 1,000 times more powerful than the ones they had. She said:
“In terms of computing power, it’s proving totally inadequate. With climate models we know how to make them much better to provide much more information at the local level… we know how to do that, but we don’t have the computing power to deliver it.”
We don’t need a super-computer to predict that giving carbon-dioxide-obsessed gamers faster computers will only result in them giving us the wrong prediction sooner.
Dr. John Happs, M.Sc. (1st Class), D.Phil. John has an academic background in the geosciences, with special interests in climate and paleoclimate. He has been a science educator at several universities in Australia and overseas, and was President of the Western Australian Skeptics for 25 years.