Jo Nova: Welcome to the CO2 Disaster

Jo Nova slices a UNSW press release, pointing out that despite ongoing deforestation in the tropics and “climate change”, the amount of vegetation globally is increasing. The researchers ascribe this to the scientific principle of “good luck” (an explanation that would have had you laughed out of a first-year physics class 30 years ago), although they do give grudging acknowledgement to the effects of increased CO2 and warmer temperatures.

Welcome to the CO2 disaster — 4 billion tons more plants, more greenery

During the recent warmest decades on record, Earth suffered under the highest CO2 levels of the last 800,000 years. Life responded to this devastating situation by — flourishing. There are now some 4 billion tons more living matter on the planet than there was in 1993. What a calamity. (And what a lot of carbon credits.)

It has, naturally, got nothing to do with warmth and aerial fertilizer. The researchers tell us it is due to that force of nature known as “good luck”. Remember, human CO2 emissions were pollution that was going to afflict life on Earth. After twenty years of predicting the loss of forests and species, it turned out that biology bloomed instead. Notch up another model “success”. The press release headline: Good luck reverses global forest loss. (What else would we expect from UNSW?)

To those who know basic biology — and that almost half the dry weight of plants is carbon, sucked straight out of the air — this is not so much good luck as one entirely foreseeable and foreseen consequence of rising CO2. Acquiring carbon is often a plant’s hardest task. When the sun comes up, a cornfield begins sucking, and by lunchtime it has already got all it can get, so growth slows till night returns to pump up the CO2 levels again. Pulling all that plant fertilizer out from under Middle Eastern deserts and spreading it around where the plants could get it has a predictable effect on plant life (though it’s fair to ask whether our emissions actually contribute very much).

Remember, in post-modern climate science your air-conditioner causes snowstorms, but if CO2 rises and plants grow — that’s “luck”.

Read the rest here

Major Problems With Climate Models

 

Do you ever wonder why there is such a big gap between the predictions of climate models and reality? Part of the reason is that they are fundamentally flawed: disobeying the laws of thermodynamics and miscalculating the incoming solar radiation, for starters.

From WUWT

Whoops! Study shows huge basic errors found in CMIP5 climate models


Incoming solar radiation at the Top of the Atmosphere (TOA)

It was just yesterday that we highlighted this unrealistic claim from CMIP5 models: Laughable modeling study claims: in the middle of ‘the pause’, ‘climate is starting to change faster’. Now it seems that there is a major flaw in how the CMIP5 models treat incoming solar radiation, causing up to 30 Watts per square meter of spurious variations. To give you an idea of just how large that error is, the radiative forcing claimed to exist from carbon dioxide increases is said to be about 1.68 watts per square meter, a value about 18 times smaller than the error in the CMIP5 models!

The HockeySchtick writes:

New paper finds large calculation errors of solar radiation at the top of the atmosphere in climate models

A new paper published in Geophysical Research Letters finds astonishingly large errors in the most widely used ‘state of the art’ climate models due to incorrect calculation of solar radiation and the solar zenith angle at the top of the atmosphere.

According to the authors,

Annual incident solar radiation at the top of atmosphere (TOA) should be independent of longitudes. However, in many Coupled Model Intercomparison Project phase 5 (CMIP5) models, we find that the incident radiation exhibited zonal oscillations, with up to 30 W/m2 of spurious variations. This feature can affect the interpretation of regional climate and diurnal variation of CMIP5 results.

The alleged radiative forcing from all man-made CO2 generated since 1750 is claimed by the IPCC to be 1.68 W/m2. By way of comparison, the up to 30 W/m2 of “spurious variations” from incorrect calculation of solar zenith angle discovered by the authors is up to 18 times larger than the total alleged CO2 forcing since 1750.
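As a quick sanity check on the ratio quoted above, the arithmetic can be run directly (a trivial sketch using only the two figures given in the text):

```python
# Figures as quoted above
co2_forcing = 1.68          # W/m^2, alleged CO2 radiative forcing since 1750
spurious_variation = 30.0   # W/m^2, maximum spurious TOA variation in CMIP5

ratio = spurious_variation / co2_forcing
print(f"spurious variation / CO2 forcing = {ratio:.1f}")  # about 17.9, i.e. roughly 18 times
```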


Why wasn’t this astonishing, large error of basic astrophysical calculations caught billions of dollars ago, and how much has this error affected the results of all modeling studies in the past?

The paper adds to hundreds of others demonstrating major errors of basic physics inherent in the so-called ‘state of the art’ climate models, including violations of the second law of thermodynamics. In addition, even if the “parameterizations” (a fancy word for fudge factors) in the models were correct (and they are not), the grid size resolution of the models would have to be 1mm or less to properly simulate turbulent interactions and climate (the IPCC uses grid sizes of 50-100 kilometers, some 7 to 8 orders of magnitude larger). As Dr. Chris Essex points out, a supercomputer would require longer than the age of the universe to run a single 10 year climate simulation at the required 1mm grid scale necessary to properly model the physics of climate.

The paper: On the Incident Solar Radiation in CMIP5 Models

Linjiong Zhou, Minghua Zhang, Qing Bao, and Yimin Liu

Annual incident solar radiation at the top of atmosphere (TOA) should be independent of longitudes. However, in many Coupled Model Intercomparison Project phase 5 (CMIP5) models, we find that the incident radiation exhibited zonal oscillations, with up to 30 W/m2 of spurious variations. This feature can affect the interpretation of regional climate and diurnal variation of CMIP5 results. This oscillation is also found in the Community Earth System Model (CESM). We show that this feature is caused by temporal sampling errors in the calculation of the solar zenith angle. The sampling error can cause zonal oscillations of surface clear-sky net shortwave radiation of about 3 W/m2 when an hourly radiation time step is used, and 24 W/m2 when a 3-hour radiation time step is used.
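The mechanism the abstract describes can be illustrated with a toy model (this is my own sketch, not the authors’ code: it assumes equatorial equinox geometry, a solar constant of 1361 W/m2, and point sampling of the zenith angle at UTC-aligned radiation time steps). Coarse sampling makes the computed daily-mean insolation depend on longitude — a spurious zonal oscillation — while fine sampling does not:

```python
import math

S0 = 1361.0  # approximate solar constant, W/m^2

def cos_zenith(lat_deg, dec_deg, hour_angle_deg):
    """Cosine of the solar zenith angle; negative means the sun is below the horizon."""
    lat, dec, h = map(math.radians, (lat_deg, dec_deg, hour_angle_deg))
    return math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(h)

def daily_mean_insolation(lon_deg, lat_deg=0.0, dec_deg=0.0, step_hours=3.0):
    """Daily-mean TOA insolation with the zenith angle point-sampled at
    UTC-aligned time steps, mimicking a model's radiation time step."""
    n = int(24 / step_hours)
    total = 0.0
    for i in range(n):
        utc_hour = i * step_hours
        local_solar_hour = utc_hour + lon_deg / 15.0   # 15 deg of longitude per hour
        hour_angle = (local_solar_hour - 12.0) * 15.0  # degrees from solar noon
        total += S0 * max(cos_zenith(lat_deg, dec_deg, hour_angle), 0.0)
    return total / n

# With a 3-hour radiation step the result oscillates with longitude;
# with 1-minute sampling it is (correctly) almost uniform.
coarse = [daily_mean_insolation(lon, step_hours=3.0) for lon in range(0, 360, 5)]
fine = [daily_mean_insolation(lon, step_hours=1 / 60) for lon in range(0, 360, 5)]
print("coarse sampling spread:", max(coarse) - min(coarse), "W/m^2")  # tens of W/m^2
print("fine sampling spread:  ", max(fine) - min(fine), "W/m^2")     # near zero
```

In this crude setup the 3-hour step produces a zonal spread of roughly 30 W/m2 — the same order as the spurious variations the paper reports.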

Half of Australia’s Warming Due to “Adjustments”

The duplicity of the BOM in its historic temperature adjustments is documented in this article from Jo Nova:

Historic documents show half of Australia’s warming trend is due to “adjustments”

Adjustments that cool historic temperatures have almost doubled Australia’s rate of warming.

CSIR published “Meteorological Data” 1855 – 1931

There was a time back in 1933 when the CSIRO was called CSIR and meteorologists figured that, with 74 years of weather data for Australia, they really ought to publish a serious document collating all the monthly averages at hundreds of weather stations around the country. Little did they know that years later, despite their best efforts, much of the same data would be forgotten and unused, or would be adjusted, decades after the fact, and sometimes by as much as one or two degrees. Twenty years later, the Commonwealth Bureau of Census and Statistics would publish an Official Year Book of Australia which included the mean temperature readings from 1911 to 1940 at 44 locations.

Chris Gillham has spent months poring over both these historic datasets, as well as the BoM’s Climate Data Online (CDO) which has the recent temperatures at these old stations. He also compares these old records to the new versions in the BOM’s all new, all marvelous, best quality ACORN dataset. He has published all the results and tables comparing CDO, CSIR and Year Book versions.

He analyzes them in many ways – sometimes by looking at small subsets or large groups of the 226 CSIR stations. But it doesn’t much matter which way the data is grouped, the results always show that the historic records had warmer average temperatures before they were adjusted and put into the modern ACORN dataset. The adjustments cool historic averages by around 0.4 degrees, which sounds small, but the entire extent of a century of warming is only 0.9 degrees C. So the adjustments themselves are the source of almost half of the warming trend.
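The fraction claimed above is easy to verify (using the article’s own figures of 0.4 degrees of adjustment cooling against 0.9 degrees of total century warming):

```python
adjustment_cooling = 0.4  # degrees C by which adjustments cool historic averages
century_warming = 0.9     # degrees C of total warming over the century

fraction = adjustment_cooling / century_warming
print(f"{fraction:.0%} of the warming trend is attributable to the adjustments")
```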

The big question then is whether the adjustments are necessary. If the old measurements were accurate as is, Australia has only warmed by half a degree. In the 44 stations listed in the Year Book from 1911-1940, the maxima at the same sites are now about half a degree warmer in the new millennium. The minima are about the same.

Remember that these sites from 1911-1940 were all recorded with modern Stevenson Screen equipment.  Furthermore, since that era the biggest change in those sites has been from the Urban Heat Island (UHI) effect as the towns and cities grew up around the sites. In some places this effect may already have been warming those thermometers in the first half of the last century, but in others UHI can make 5 to 7 degrees difference.

If Australian thermometers are recording half a degree higher than they were 70 – 100 years ago, we have to ask how much of that warming is the UHI effect? Common sense would suggest that if these older stations need any correction, it should be upward rather than downward to compensate for the modern increase in concrete, buildings and roads. Alternatively, to compare old readings in unpopulated areas with modern ones, we would think the modern temperatures should be adjusted down, rather than the older ones.

The Official Year Book 1953

Chris Gillham discusses the potential size of the UHI changes:

“In 2012 and 2013 it was anticipated that UHI warming in south-eastern Australia will continue to intensify by approximately 1C per decade over and above that caused by global warming (Voogt 2002), with tests in 1992 showing a UHI influence up to 7.2C between the Melbourne CBD and rural areas. [PDF]

Smaller but significant UHI influences were found in regional towns, with a 1994 test observing a UHI intensity up to 5.4C between the centre of a Victorian town and its rural outskirts.”  [PDF]

 

Full article here

BOM Caught Out Hyping Results Again

Cyclone Marcia was a terrible storm. It has been reported as “Cat-5”, which is the top of the cyclone/hurricane scale.

The devastation, though widespread, has not been as severe as you might have expected.

The BOM was forecasting that the cyclone would become Cat-5 and of course the media ran with that, but in the event, as far as we can measure these things, it made landfall as a Cat-3.

So is the BOM doing to cyclones what it has been doing to temperature records? Are they being re-categorised to match the global warming mantra of “more extreme weather”?

Of course, if we believe that Rockhampton and Yeppoon survived a Cat-5 when it was really only a Cat-3, that could lead to relaxed building standards and a complacency that could be fatal when a real Cat-5 storm comes.

Sigh! I used to be a real fan of science until it became just another propaganda vehicle.

From Jo Nova:

Category Five storms aren’t what they used to be

The facts on Cyclone Marcia: the top sustained wind speed was 156 km/hr and the strongest gust 208 km/hr. These were recorded on Middle Percy Island, in the direct path before it hit land and apparently rapidly slowed.  The minimum pressure recorded after landfall was 975 hPa. BOM and the media reported a “Cat-5″ cyclone with winds of 295 km/hr. To qualify as a Cat 5, windspeeds need to be over 280 km/hr. The UN GDACS alerts page estimated the cyclone as a Cat 3.
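A minimal sketch of how those reported wind speeds map onto cyclone categories. The gust thresholds below are the commonly published bands for the Australian scale (consistent with the article’s “over 280 km/hr” figure for a Cat 5); treat them as indicative rather than the BOM’s authoritative definitions:

```python
# Indicative maximum-wind-gust thresholds (km/h) for the Australian cyclone scale.
# Band edges are as commonly published; the BOM's official definitions govern.
GUST_BANDS = [(125, 1), (165, 2), (225, 3), (280, 4)]

def cyclone_category(max_gust_kmh):
    """Return the cyclone category implied by the strongest recorded gust."""
    for upper_bound, category in GUST_BANDS:
        if max_gust_kmh < upper_bound:
            return category
    return 5

print(cyclone_category(208))  # Middle Percy's 208 km/h gust falls in the Cat 3 band
print(cyclone_category(295))  # the modelled 295 km/h figure would indeed be a Cat 5
```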

The damage toll so far is no deaths (the most important thing), but 1,500 houses were damaged and 100 families left homeless. It was a compact storm, meaning windspeeds drop away quickly with every kilometer from the eye, so the maps and locations of the storm and the instruments matter. See the maps below — the eye did pass over some met-sites, but made landfall on an unpopulated beach with no wind instruments. It slowed quickly thereafter. The 295 km/hr wind speed was repeated on media all over the world, but how was it measured? Not with any anemometer apparently — it was modeled. If the BOM is describing a Cat 2 or 3 as a “Cat 5″, that’s a pretty serious allegation. Is the weather bureau “homogenising” wind speeds between stations?

What will happen when Australians living in cyclone areas have to prepare for real Cat 5s? How much respect will Australians have for the BOM (and the ABC) if they find out that supposedly dispassionate and impartial scientists have been hyping weather events to score political points? Will the BOM issue any clarifications and corrections?

What does a Cat 5 mean anymore?

The headlines are still calling Marcia a “Cat 5″ cyclone three days later. But today there are many questions about that, and very different debates have broken out on the old media and the new. On the mainstream media, Premier Annastacia Palaszczuk is already defending the BOM after the Marcia “surprise”. But she is talking about the sudden escalation of a Cat 1 or 2 up to a 5, and whether the BOM gave residents enough warning. On the Internet people are asking why it was called a Cat 5. As Jennifer Marohasy points out, the top speeds recorded showed the cyclone was a Category 3. “Middle Percy” was under the path, and out to sea.

There is a weather station on Middle Percy, and it recorded a top wind speed of 156 km/h, the strongest gust was 208 km/h, and the lowest central pressure was 972 hPa. This raw observational data is available at the Bureau’s website and indicates a category 3 cyclone.

As commenters unmentionable and Ken recorded here, none of the observed wind-speeds came remotely close to being Cat 5. By strange coincidence, two guest authors here, Ken Stewart and TonyfromOz, live north of Rockhampton and both “walked in the eye” last Friday. I’ve spoken to both this morning, and fortunately their houses and families are OK, though still without electricity.

The US Navy’s Joint Typhoon Warning Centre was tracking the cyclone and, like me, noted the surface observations from Middle Percy Island. The US Navy had been estimating wind speeds based on the Dvorak modelling method. This method is considered much less reliable than aircraft reconnaissance, with surface observations (from anemometers and barometers) historically the ultimate measure of a tropical cyclone’s wind speed and central pressure. For example, in the case of Cyclone Yasi, a barograph at Tully sugar mill recorded a minimum central pressure of just 929 hPa, and this is the value in the final report from the Australian Bureau of Meteorology confirming that Yasi was a category 5 system.

In the case of Marcia, the US Navy acknowledged that their Dvorak estimates were higher than the surface observations from Middle Percy Island. In particular their real time “warning”, no longer available on the internet, noted an “intensity of 110 knots” based on the anemometer on Middle Percy. This corresponds with the highest wind gust recorded on Middle Percy Island as Marcia passed over. The maximum sustained wind speed, however, never exceeded 156 km/h, and the central pressure was never less than 972 hPa. This makes Marcia a category 3 based on the Australian system, and only a category 2 based on the Saffir-Simpson Hurricane Wind Scale.

Yet the bureau continued to report the cyclone, not as it was, based on the surface observations, but as they had forecast it in a media release the previous day: “Tropical Cyclone Marcia to reach Category 5 system at landfall”.

Full story here

Maybe the BOM isn’t that interested in climate

From Jo Nova

The mysterious BOM disinterest in hot historic Australian Stevenson screen temperatures

When it comes to our rare high-quality historic records, and the real long term trends of Australian weather, the silence is striking. There are some excellent historical records of long term temperature data from the late 1800s in Australia, which lie underused and largely ignored by the BOM.

For the BOM, history almost appears to start in 1910, yet the modern type of Stevenson screen thermometer was installed across Australia starting as early as 1884 in Adelaide. Most stations in Queensland were converted as long ago as 1889, and in South Australia by 1892, though states like NSW and Victoria were delayed until 1908.

Here’s a photo of the ones in Brisbane in 1890.

Brisbane was recording temperatures with modern Stevenson screens in 1890, as were some other stations, but the BOM often ignores these long records.

The BOM don’t often mention all their older temperature data. They argue that all the recordings then were not taken with standardized equipment. The BOM prefers to start long term graphs and trends from 1910 (except when they start in 1950 or 1970, or 1993).

The BOM was set up in 1908.  Before that there were Stevenson screens going in all over Australia, but somehow these records appear uninteresting to climate researchers. Could it be that the late 1800s would have been more captivating if they were colder? In the late 1800s there was the widespread heatwave of 1896, which killed hundreds of people and produced 50C-plus temperatures across the continent, as well as the infamous Federation Drought.

If the BOM were curious about long term natural trends, it would not be impossible for a PhD student to compare the distant past and estimate those long trends. (If two stands of trees in 1200AD are accurate to 0.1C, why not actual, but non-standard, thermometers in 1890?)

Not only were some stations using Stevenson screens in Australia, but other types of non-standard but common screens were documented, along with sites, and there were studies of overlapping data. (Though there were also some highly irregular sites that would defy analysis). More to the point, with millions in government grants available for research, the BOM could even recreate some historic sites and do modern side-by-side comparisons. Surely in the space age we can figure out the temperature differences of wooden boxes?

Suppose for a moment that the old records showed cool summers, or demonstrated that Australia had warmed by two degrees instead of one? Wouldn’t there rather be a flood of papers adjusting and homogenising Glaishers and Stevensons, and perhaps even sheds and octagons? Whole new museums could spring forth, recreating sacred meteorology stations from 1862. School children would file by and gasp!

The British CRU (University of East Anglia) reports Australian trends from 1850

Jennifer Marohasy wonders where the CRU got the data that the BOM don’t want to use. She has been writing about the Stevenson screens and asking the Australian BOM questions like this and more. Warwick Hughes has been analyzing these old records even longer.  His paper in 1995 provoked the Neville Nicholls reply of 1996 (which is used to create the map below).

Above, the year that Nicholls 1996 describes “most” stations as being shifted to Stevenson screens.

There were a few late exceptions to these dates.

Although there were many sites, especially in NSW and Victoria, that didn’t get Stevenson screens until sometime in 1907, vast areas of Australia in WA, Queensland and South Australia have accurate older data. When “hottest ever” records for these states are announced, why are the older high quality measurements almost invisible?

Full article here

Jo Nova: Enough Money Wasted on Renewables to Give Everyone Clean Water

Jo Nova reports that Europe alone has wasted more than $100 billion on mismanagement of renewable energy: that is just in terms of where wind farms and solar plants are located, ignoring the huge subsidies, the innate inefficiencies, and the excess price over traditional power sources. The estimated cost of giving everyone in the world safe drinking water is only about $30 billion.

A bonfire of waste: $100 billion burnt by big-government renewables mismanagement

 

Renewables are not just inefficient, unnecessary, and deadly to wildlife; they were also a disaster of planning and management. The list of dollars and euros destroyed in the Glorious Renewables Quest has gone “nuclear”. The World Economic Forum estimates $100 billion has been wasted, but it’s even worse than it looks. I had to read their opening sentence twice. I thought it read “European countries could have saved approximately $100 billion if each country had invested in the most efficient energy source.” I was thinking they could have saved that sort of money by using coal instead of windmills… but no, those huge savings would be over and above those ones. The WEF is talking about money saved if badly managed renewables had been well managed ones.

The inefficiency here is the scale only big-government could achieve.

The Energy Collective

Europe Loses Billions in Badly Sited Renewable Power Plants

European countries could have saved approximately $100 billion if each country had invested in the most efficient capacity given their renewable energy resources, that is, by installing wind turbines in windier countries and solar power plants in sunnier places.

But why would we be surprised?

The people who pushed renewables onto Europe were never doing it for pragmatic or practical reasons. The numbers never made sense on any level — not for electricity made, not for global “cooling”, nor species saved, nor jobs created.  The numbers didn’t work for “energy independence” and they certainly didn’t add up to a profit.

Since the point wasn’t about electricity, or the environment, it didn’t really matter if the solar panels were not in sunny spots, and the wind towers were not in windy places. If those things mattered, the Greens would have been apoplectic at this waste. How many children could have got access to clean water instead? All of them. WHO estimates the cost of clean water globally at $30b.

The sticker shock of the odd $100b has worn off, but we’re talking about one hundred thousand million dollars.

The WEF are a pro-renewables lot too. They want renewables to work.

The $100b figure was, not surprisingly, not in the executive summary but on page 14.

 For example, it is obvious to most European citizens that southern Europe has the lion’s share of the solar irradiation while northern Europe has the wind.

But the EU’s investment in renewables does not reflect this: where Spain has about 65% more solar irradiation than Germany (1750 vs 1050 kWh/m2), Germany installed about 600% more solar PV capacity (33 GW vs 5 GW). In contrast, whereas Spain has less wind than countries in the north, it has still installed 23 GW of wind capacity.

Such suboptimal deployment of resources is estimated to have cost the EU approximately $100 billion more than if each country in the EU had invested in the most efficient capacity given its renewable resources. And by looking across borders for the optimum deployment of renewable resources (with associated physical interconnections), the EU could have saved a further $40 billion.

And if the EU had coordinated across boundaries (isn’t that what the EU is for?) they could have saved another $40b on top of that.
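The quoted percentages can be recomputed from the WEF’s own numbers above (the exact ratios come out at about 67% more sun and 560% more capacity, consistent with the rounded figures in the excerpt):

```python
# Figures from the WEF excerpt above
spain_sun, germany_sun = 1750, 1050  # annual solar irradiation, kWh/m^2
spain_pv, germany_pv = 5, 33         # installed solar PV capacity, GW

extra_sun = spain_sun / germany_sun - 1
extra_pv = germany_pv / spain_pv - 1
print(f"Spain gets {extra_sun:.0%} more sun; Germany installed {extra_pv:.0%} more PV")
```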

 

Everywhere Hit Hardest by Climate Change

You know climate change must be real when everywhere is supposedly the hardest hit by those hundredths of a degree temperature changes. It’s a bit like cancer being caused by everything.

From Jo Nova:

Climate change will hit “Everywhere” harder than “rest of world”

The Prophets of Doom are still at  The Guardian (and the CSIRO)

Climate change will hit Australia harder than rest of world, study shows

The first paragraph contains the word “could”. It’s all a guess based on models they already know are broken:

Australia could be on track for a temperature rise of more than 5C by the end of the century, outstripping the rate of warming experienced by the rest of the world, unless drastic action is taken to slash greenhouse gas emissions, according to the most comprehensive analysis ever produced of the country’s future climate.

But wait, will Australia — a rich, low population country with a temperate climate and surrounded by ocean — really be hit harder than the polar regions, the poor, those closest to rising seas and those living in cyclone zones?

A new website called ClimateChangePredictions is keeping track of the “hardest hit” predictions and can’t find a consensus on this one:

“Rural Australians will be the hardest hit by climate change according to Professor Steve Vanderheiden from Charles Sturt University (CSU)”

“Sydney’s urban areas to be hit hardest by global warming” — ARC Centre of Excellence for Climate System Science

“Climate change is faster and more severe in the Arctic than in most of the rest of the world”

There seems to be consensus in the developed world that Africa will be the hardest hit or most affected region, due to anthropogenic climate change.

Bangladesh is one of the hardest hit nations by the impacts of climate change.

“Maldives is the most at-risk country in South Asia from climate change impacts”

“…climate change is likely to have the strongest impact on Scandinavian countries such as Denmark, Norway and Sweden”

“Bulgaria, Spain, Portugal, Italy and Greece are the countries that would be worst affected by global warming, according to a European Union report.”

China’s Poor Farmers Hit Hardest by Climate Change.

Middle East, African Countries to Be Hardest Hit by Climate Change

The environment organization Germanwatch compiled a climate risk index. At the top of the 2011 ranking is Thailand.

Vietnam is likely to be among the countries hardest hit by climate change…

So the real question is, where won’t be “hit hardest” by climate change (apart from Tasmania). Can anyone find a headline saying “Climate Change will benefit ______ region?”

Hottest Year Ever! Probably Not.

You won’t hear this on the ABC, but NASA now admits that 2014 probably was not the hottest year ever, and we are only talking about hundredths of a degree, well inside the margin of error. (How accurately can you read a thermometer? And if you average thousands of thermometers around the world, how accurate is that going to be?)

 

Jo Nova writes: 

Gavin Schmidt now admits NASA are only 38% sure 2014 was the hottest year

I said the vaguest scientists in the world lie by omission, and it’s what they don’t say that gives them away. The “hottest ever” press release didn’t tell us how much hotter the hottest year supposedly was, nor how big the error bars were. David Rose of the Daily Mail pinned down Gavin Schmidt of NASA GISS to ask a few questions that bloggers and voters want answered but almost no other journalist seems to want to ask.

Finally…

Nasa climate scientists: We said 2014 was the warmest year on record… but we’re only 38% sure we were right

Nasa admits this means it is far from certain that 2014 set a record at all 

Does that mean 97% of climate experts are 62% sure they are wrong?

The thing with half-truths is that they generate a glorious fog, but it has no substance. Ask the spin-cloud a couple of sensible questions and the narrative collapses. This is the kind of analysis that would have stopped the rot 25 years ago if most news outlets had investigative reporters instead of science communicators trained to “raise awareness”. (The media IS the problem.) If there were a David Rose type in most major dailies, man-made global warming would never have got off the ground.

The claim made headlines around the world, but yesterday it emerged that GISS’s analysis – based on readings from more than 3,000 measuring stations worldwide – is subject to a margin of error. Nasa admits this means it is far from certain that 2014 set a record at all.

Yet the Nasa press release failed to mention this, as well as the fact that the alleged ‘record’ amounted to an increase over 2010, the previous ‘warmest year’, of just two-hundredths of a degree – or 0.02C.

The margin of error is about a tenth of a degree, so those error bars are five times the size of the increase pushed in headlines all over the world. Gavin Schmidt, of course, is horrified that millions of people may have been misled:

GISS’s director Gavin Schmidt has now admitted Nasa thinks the likelihood that 2014 was the warmest year since 1880 is just 38 per cent. However, when asked by this newspaper whether he regretted that the news release did not mention this, he did not respond.

I’m sure he’s too busy contacting newspapers and MSNBC to make sure stories from NASA GISS are accurate and scientifically correct.
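Using the figures quoted above (a 0.02 C record margin against roughly 0.1 C of measurement uncertainty), the mismatch is simple arithmetic:

```python
record_margin = 0.02     # degrees C by which 2014 allegedly beat 2010
measurement_error = 0.1  # degrees C, the approximate margin of error

ratio = measurement_error / record_margin
print(f"the error bars are about {ratio:.0f} times the claimed record margin")
```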

Read more: http://www.dailymail.co.uk

In the mood for sport? Turn the torch back on the journalists who were too gullible to ask a sensible question. Let’s start asking the ABC and BBC journalists why they didn’t ask “how much hotter was it” and “how big are those error bars”.

H/t to Colin, Gardy.

Renewable Energy A Pipe Dream

Engineers who worked on Google’s now-abandoned renewable energy project concluded that there is no way current technology can replace fossil fuels in any practical way. They made the interesting discovery that the output of renewable systems barely exceeds the energy input of constructing them in the first place.

 

If you add to that the research last month which found that an area the size of the U.K. would have to be given over to wind or solar power every year just to supply renewable energy for that year’s increase in energy demand, it is obvious that current technology will not provide the clean green world that greenies dream of.

From WUWT:

A research effort by Google corporation to make renewable energy viable has been a complete failure, according to the scientists who led the programme. After 4 years of effort, their conclusion is that renewable energy “simply won’t work”.

According to an interview with the engineers, published in IEEE Spectrum:

“At the start of RE<C, we had shared the attitude of many stalwart environmentalists: We felt that with steady improvements to today’s renewable energy technologies, our society could stave off catastrophic climate change. We now know that to be a false hope …
Renewable energy technologies simply won’t work; we need a fundamentally different approach.”
http://spectrum.ieee.org/energy/renewables/what-it-would-really-take-to-reverse-climate-change

There is simply no get-out clause for renewables supporters. The people who ran the study are very much committed to the belief that CO2 is dangerous – they are supporters of James Hansen. Their sincere goal was not simply to install a few solar cells, but to find a way to fundamentally transform the economics of energy production – to make renewable energy cheaper than coal. To this end, the study considered exotic innovations barely on the drawing board, such as self-erecting wind turbines, using robotic technology to create new wind farms without human intervention. The result, however, was total failure – even these exotic possibilities couldn’t deliver the necessary economic model.

The key problem appears to be that the cost of manufacturing the components of the renewable power facilities is far too close to the total recoverable energy – the facilities never, or just barely, produce enough energy to balance the budget of what was consumed in their construction. This leads to a runaway cycle of constructing more and more renewable plants simply to produce the energy required to manufacture and maintain renewable energy plants – an obvious practical absurdity.

According to the IEEE article:

“Even if one were to electrify all of transport, industry, heating and so on, so much renewable generation and balancing/storage equipment would be needed to power it that astronomical new requirements for steel, concrete, copper, glass, carbon fibre, neodymium, shipping and haulage etc etc would appear. All these things are made using mammoth amounts of energy: far from achieving massive energy savings, which most plans for a renewables future rely on implicitly, we would wind up needing far more energy, which would mean even more vast renewables farms – and even more materials and energy to make and maintain them and so on. The scale of the building would be like nothing ever attempted by the human race.”

I must say I’m personally surprised at the conclusion of this study. I genuinely thought that we were maybe a few solar innovations and battery technology breakthroughs away from truly viable solar power. But if this study is to be believed, solar and other renewables will never in the foreseeable future deliver meaningful amounts of energy.