Solar’s dirty secrets: How solar power hurts people and the planet

14 Comments

  1. Thanks for the well-researched (and referenced) article Brian. It’s lengthy; I’ll try to get to it this weekend.

  2. Just one thing: the footnote links in the document go to Brian’s site and not to the footnotes on this page.

    1. @Engineer-Poet

      Thank you for the important information. I count on you for careful reading and feedback. We are working to correct the issue. It’s not quite as simple as I thought it would be, but we should have the references working right in a day or so.

      1. For many years I made my living nit-picking the finest details in code, in drawings and in specifications. Old habits die hard and as long as I have them I’ll try to put them to good use.

  3. Natural gas electric power plants retrofitted to use renewable methanol will probably be the primary means of electricity production in the US by the year 2050, IMO. And the cryocapture of CO2 from such electric power facilities could be used to produce more methanol through the synthesis of CO2 with nuclear-produced hydrogen.

    Land based and ocean based nuclear power plants will probably be the primary means of producing carbon neutral methanol.

    Since carbon neutral methanol is likely to be significantly more expensive than natural gas derived from fossil fuels, electricity from solar power plants could be utilized to reduce eMethanol demand by perhaps 10 to 25% depending on the region of the country.

    Solar energy currently represents less than 1% of the total energy consumed in the US. So I doubt that solar energy consumption will ever represent more than 10% of the energy consumed in the US this century.

  4. Brian wrote:
    “The reliability of a power source is measured by capacity factor. The capacity factor of a power plant tracks the time it’s producing maximum power throughout the year.”

    I’m not sure I agree with either of those statements. Capacity factor Cf is simply the energy a plant (or fleet of plants) generates over the course of a year, divided by the energy it could have generated running at maximum capacity every minute of that year. That is all. In whatever context Levelized Cost of Energy is a meaningful metric, Cf is a useful (and required) input. But it shouldn’t be confused with reliability.

    Take a random collection of 10 open-cycle gas peaker plants, with thermal efficiency during their hours of wildly variable operation of maybe 20 – 30 percent. This is half to a third that of a constant-load combined cycle plant, at twice the fuel cost. Naturally, a Balancing Authority will schedule peaker plants only when it really needs them; their resulting Cf is a desultory 10%.
    https://www.eia.gov/outlooks/aeo/pdf/electricity_generation.pdf (10%, for new construction)
    https://www.cleanegroup.org/ceg-projects/phase-out-peakers/ (4%, current)

    But when the BA does really need those peaker plants, it generally needs them really bad. Fortunately, open cycle peakers are simple and reliable. The BA can count on 8 of those 10 peakers to be available when it needs them.
    https://en.wikipedia.org/wiki/Availability_factor

    Big dam hydro is similar. Their turbines are usually spec’ed in considerable excess of average river flow to provide peaking service. Their Cf suffers as a result: Hoover Dam’s Cf is 23%, whereas the nearby Agua Caliente Solar Project’s is 29%. Notwithstanding cloudy weeks or extended drought, which is the more reliable?
    https://en.wikipedia.org/wiki/Capacity_factor#Hydroelectric_dam
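    The definition of Cf in the comment above can be sketched numerically. The plant sizes and dispatch hours below are illustrative round numbers (chosen to match the ~10% peaker and ~90% baseload figures in the thread), not real plant data:

```python
# Capacity factor: energy actually generated over a period, divided by the
# energy the plant could have generated running at nameplate capacity for
# that entire period. Numbers below are illustrative only.

HOURS_PER_YEAR = 8760

def capacity_factor(energy_mwh, nameplate_mw, hours=HOURS_PER_YEAR):
    return energy_mwh / (nameplate_mw * hours)

# A 100 MW open-cycle peaker dispatched ~876 hours/year at full load:
peaker_cf = capacity_factor(100 * 876, 100)       # 0.10

# A 1000 MW baseload unit generating 7,884,000 MWh over the year:
baseload_cf = capacity_factor(7_884_000, 1000)    # 0.90

print(f"peaker Cf = {peaker_cf:.0%}, baseload Cf = {baseload_cf:.0%}")
```

    Note that both plants can be equally available when called on; Cf alone says nothing about that, which is the commenter's point.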

    1. ” then Cf is a useful (and required) input. But it shouldn’t be confused with reliability.”

      Strange opinion. It appears you do not work at a commercial electrical power plant. The Nuclear Plant Manager I worked for took a job as the Site Manager for a fairly new, three-unit, mine-mouth coal-fired power facility. A major reason for this promotion was that each of the plants at that facility had a “less than stellar” capacity factor (< 50%), together with the performance record he had at the NPP. He immediately recognized that the attitude at the plant was “If it isn’t broke, don’t fix it.” Basically, they did not perform any type of preventive maintenance. When it broke, they fixed it – that meant they had an outage, were not making electricity, and were probably paying twice as much to buy the replacement electricity.

      I spent a month or so at the station looking at their processes. On one inspection of the equipment I opened up the enclosure for the position controller of a major feed-water flow valve. The enclosure, an 8" x 8" x 4" box, was almost half full of coal dust as fine as talcum powder. Any fuller and it would have interfered with the mechanical action in the enclosure. Equipment operators usually carried a kitchen broom in front of them, waving it up and down as they walked the route of less-used pathways in areas where there is high-pressure steam. They did this to be sure that there were no leaks, as a jet of 3,000 PSI steam is invisible and could cut off an arm if they walked through it.

      After implementing a Preventive Maintenance Program (PMP), within 2 years the number of unexpected outages fell to fewer than two total for the station per year, instead of over six per year per plant. Within a few years many other coal plants were also implementing a PMP. I also provided consulting services to a “sister” NPP that had a less than 50% CF for the exact same reasons. After implementing a PMP, their CF went up to the mid 80’s. That is why management looks at a plant’s CF number: it is a good indicator of readiness and reliability.

      1. I agree with Ed. But then I’m only a senior engineer at a working geothermal plant that has had a capacity factor of over 90% for 40 years – we got to 100% one year when we had no scheduled surveys and could overdrive in the winter. The figure quoted in the lead post is distorted by The Geysers, which was overbuilt in the boom-and-bust days and is now run as a two-shifter. The plants in Nevada are much better indicators.

        If the plants are on must-run dispatch, then capacity factor is an indicator of reliability. It is not synonymous. In geothermal binary plants, for example, the MW go up and down a lot because of the air temperature, which is the heat sink, even when the steam or brine coming into the plant has the same mass flow and enthalpy. For one of ours that is rated at 16 MW gross, output can vary between 12 and 20 MW (the generator rating). Just a practical example of the first law of thermodynamics. By second-law exergy analysis, the efficiency doesn’t significantly change.

        For a plant that is only dispatched on an as-needed basis (or two-shifted), the capacity factor is a very poor indicator of reliability. That is what Ed is pointing out. In those cases, availability and forced outage factor are the relevant indicators. The real thing for a power network is how well the station meets its dispatch. It doesn’t matter if it can only run 2 hours a day; if it does what it says it will do every day, with no trips, that can be accommodated. The schedule is the important thing.

        Nuclear and most geothermal are great at baseload, thermal steamers and CCGTs best at two shifting and OCGTs for peaking. Solar and wind are useless at meeting any dispatch, even on a two hour ahead schedule. A cloud goes over and solar plummets, then you have to ramp up OCGTs really quickly. That is what recently happened in Brisbane.

  5. Slightly tangential to the topic, but the methods of making solar less unreliable can also be used to make baseload nuclear supply peaking power.

    Molten salt storage, for example, stores energy in a hot molten salt. But solar thermal needs seasonal storage for reliability, and this only gives you intra-day storage, often still supplemented with gas for cloudy days etc. It’s just not a good fit.

    For a high temperature reactor it immediately makes a lot more sense. You’re making thermal energy anyway, so there is no conversion loss; you’re just diverting heat. You’re only storing it intraday, so you’re only losing a couple of percent to the environment. You can store a few hundred kWh thermal per cubic metre in cheap salt. If you’re trying to do multi-week or seasonal storage that’s kind of bad, with losses of a few percent per day being especially horrid. But it’s a good fit for supplying peak load, maybe 2000 W/person during the 4-hour daily peak (while absorbing half that during the 8-hour nightly lull). This is really a lot, since you’re cycling it each day reliably. Just back of the envelope, the peak for multiple households can be met with 1 cubic metre of cheap salt in an insulated basin.

    If you have the high temperature reactor, most of the proposed designs can load-follow a bit anyway, but pumping out full power 24/7 and storing thermal energy in cheap salt off-peak might make even more sense, as it buffers and supplies more power during the expensive peak hours. And if it is a dedicated “nuclear peaker plant”, it can theoretically heat salt for 20 hours out of the day and dump all of that on the grid during peak load. The extra cost besides the salt storage is on the steam-plant (non-nuclear) side.
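    The back-of-envelope figures above can be checked with typical handbook properties for 60/40 NaNO3/KNO3 “solar salt”. The density, specific heat, temperature swing, and 40% thermal-to-electric efficiency below are my own assumed values, not the commenter’s:

```python
# Sanity check of the salt-storage back-of-envelope numbers.
# Assumed (typical handbook) properties for 60/40 NaNO3/KNO3 solar salt:
density = 1800.0    # kg/m^3
cp = 1.5            # kJ/(kg*K), specific heat
delta_t = 250.0     # K, usable hot-to-cold temperature swing

# Thermal energy stored per cubic metre, converted from kJ to kWh:
kwh_th_per_m3 = density * cp * delta_t / 3600.0   # ~188 kWh_th/m^3

# Peak demand per person: 2000 W for 4 hours = 8 kWh electric.
# Assume ~40% thermal-to-electric conversion in the steam plant:
kwh_th_per_person = 2000 * 4 / 1000 / 0.40        # 20 kWh_th per person

people_per_m3 = kwh_th_per_m3 / kwh_th_per_person
print(f"{kwh_th_per_m3:.0f} kWh_th/m^3 covers the daily peak "
      f"for ~{people_per_m3:.0f} people per m^3 of salt")
```

    That lands at roughly 190 kWh thermal per cubic metre, i.e. the daily peak for around nine people per cubic metre of salt, consistent with the “few hundred kWh thermal per cubic metre” and “multiple households per cubic metre” claims above.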

  6. @RichLentz. I agree: both that I do not work at a commercial electrical power plant, and that you are making a direct apples-to-apples comparison within a restricted class of thermal plants, in this case similar (or even the same) coal plants operating under similar market and demand conditions.

    That is not what the author was doing when comparing overall average capacity factors of wildly disparate generator types operating under different market rules. I gave specific examples where such capacity factors might not be a reliable indicator of reliability.

    I also commend your own experience and work supplying us all with reliable electric power.

    Thanks!

    1. @Ed Leaver, You are very Welcome.

      I also very much agree with the author of this article. I am very much against the destructive impact renewables have on the environment, especially because they are so unreliable.

      The NPP I recently retired from has shut down as it is “not competitive” with renewables, and the board controlling the plant [in my opinion] wants the service area to have more renewables. They have used this as a selling point to attract several big tech companies that brag about being 100% renewable. However, since they shut down the plant – one with a CF in the low 90’s – and replaced that power with “contracted” wind turbine power from an investor-owned/operated company, we have experienced (on average) at least one outage every month. At least once a year my home experiences an outage that begins with 4 to 6 momentary losses of power before the power is lost for well over an hour. Two of those multiple surge/loss cycles have required me to buy a new TV. Another required replacement of the freezer. That is what reliability prevents. At least the heat pump has a protective loss-of-power delay.

  7. Good article. The facts aside, it is a good illustration of how important it is to subject one’s convictions to opposing views, to help understand just how well such convictions hold up. It seems that over recent years an increasing number of renewables advocates have been realizing that their convictions and wishful thinking simply do not pass muster.
