Economy of Scale? Is Bigger Better?
(Originally published May 1996) Pick up almost any book about nuclear energy and you will find that the prevailing wisdom is that nuclear plants must be very large in order to be competitive. This notion is widely accepted, but, if its roots are understood, it can be effectively challenged.
When Westinghouse, General Electric and their international competitors first learned that uranium was an incredible source of heat energy, they were huge, well-established firms in the business of generating electrical power. Each had made a significant investment in the infrastructure necessary for producing central station electrical power on a massive scale.
Experience had taught them that larger power stations could produce cheaper electricity, and that electricity from central power stations could be distributed effectively to a large number of customers whose varying needs allowed the capital investment in the power station to be shared among all customers.
Their experience was even codified by textbook authors as a rule of thumb that said the cost of a piece of production machinery would vary with its throughput raised to the 0.6 power. (According to this rule of thumb, a pump that could pump 10 times as much fluid as another pump of similar design and function should cost only about four times as much as the smaller pump.) They, and their utility customers, understood that it was much cheaper to deliver bulk fuel by pipeline, ship, barge, or rail than to distribute smaller quantities of fuel by truck to a network of small plants.
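To see where that factor of four comes from, here is a minimal arithmetic sketch of the 0.6-power rule; the numbers are purely illustrative, not data from any actual equipment:

```python
# Illustrative check of the "0.6 power" rule of thumb described above.
# Under this rule, cost scales roughly as throughput ** 0.6.
capacity_ratio = 10                 # a pump with 10x the throughput
cost_ratio = capacity_ratio ** 0.6  # 10 ** 0.6
print(round(cost_ratio, 2))         # ~3.98, i.e. roughly four times the cost
```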
Just as individuals make judgements based on their experience of what has worked in the past, so do corporations. It was the collective judgement of the nuclear pioneers that the same rules of thumb that worked for fossil plants would apply to nuclear plants.
Failed Paradigm
There have now been 110 nuclear power plants completed in the United States over a period of almost forty years. Though accurate cost data is difficult to obtain, it is safe to say that there has been no predictable relationship between the size of a nuclear power plant and its cost. Despite the graphs drawn in early nuclear engineering texts, which were based on scanty data from fewer than ten completed plants, there is not a steadily decreasing cost per kilowatt for larger plants.
It is possible for engineers to make incredibly complex calculations without a single math error that still come up with a wrong answer if they use a model based on incorrect assumptions. That appears to be the case with the bigger is better model used by nuclear plant planners.
For example, one assumption explicitly stated in the economy of scale model is that the cost of auxiliary systems does not increase as rapidly as plant capacity. In at least one key area, that assumption is not true for nuclear plants.
Since the reactor core continues to produce heat after the plant is shut down, and since a larger, more powerful core releases less of its heat to its immediate surroundings because of a smaller surface-to-volume ratio, it is more difficult to provide decay heat removal for higher capacity cores. It is also manifestly more difficult, time-consuming and expensive to prove that the requirements for heat removal will be met under all postulated conditions without damaging the core. For emergency core cooling systems, overall costs, including regulatory burdens, seem to have increased more rapidly than plant capacity.
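The surface-to-volume point can be illustrated with nothing more than geometry. The sketch below assumes a simple cylindrical shape with height equal to its diameter; it is not the geometry of any particular reactor, but it shows why the ratio falls as the core grows:

```python
import math

def surface_to_volume(radius):
    # Surface-to-volume ratio of a cylinder with height = 2 * radius (assumed shape).
    height = 2 * radius
    area = 2 * math.pi * radius ** 2 + 2 * math.pi * radius * height
    volume = math.pi * radius ** 2 * height
    return area / volume

print(surface_to_volume(1.0))  # ~3.0
print(surface_to_volume(2.0))  # ~1.5 -- doubling the linear size halves the ratio
```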
Curve of Growth
Though the “economy of scale” did not work for the first nuclear age, there is some evidence that a different economic rule did apply. That rule is what is often referred to as the experience curve. According to several detailed studies, it appears that when similar plants were built by the same organization, the follow-on plants cost less to build. According to a RAND Corporation study, “a doubling in the number of reactors [built by an architect-engineer] results in a 5 percent reduction in both construction time and capital cost.”
This idea is extremely significant. It tells us that nuclear power is no different conceptually than hundreds of other new technologies.
The principle that Ford discovered is now known as the experience curve. . . It ordains that in any business, in any era, in any capitalist competition, unit costs tend to decline in predictable proportion to accumulated experience: the total number of units sold. Whatever the product (cars or computers, pounds of limestone, thousands of transistors, millions of pounds of nylon, or billions of phone calls) and whatever the performance of companies jumping on and off the curve, unit costs in the industry as a whole, adjusted for inflation, will tend to drop between 20 and 30 percent with every doubling in accumulated output.
George Gilder, Recapturing the Spirit of Enterprise: Updated for the 1990s, ICS Press, San Francisco, CA, p. 195
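The experience curve quoted above can be written as a simple relationship: each doubling of cumulative output multiplies unit cost by a constant factor. The sketch below assumes a constant learning rate and uses the 20 percent figure from Gilder and the 5 percent figure from the RAND study only as illustrations, not as data:

```python
import math

def unit_cost(first_unit_cost, cumulative_units, learning_rate):
    # Unit cost after 'cumulative_units' have been produced, assuming a
    # constant fractional cost reduction per doubling of cumulative output.
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1 - learning_rate) ** doublings

# With a 20% learning rate, the 16th unit costs about 41% of the first.
print(unit_cost(100.0, 16, 0.20))  # ~40.96
# With the RAND figure of 5% per doubling, the 16th plant costs about 81% of the first.
print(unit_cost(100.0, 16, 0.05))  # ~81.45
```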
In applying this idea, however, one must realize that the curve is reset to a new value when a new product is introduced, and that there must be competition in order to keep firms focused on lowering unit costs and unit prices. In the nuclear industry, new products in the form of bigger and bigger plants were continuously introduced, and, after the dramatic rise in the cost of fossil fuel during the 1970s, there was little competitive benefit in striving for cost reduction during plant construction.
When picking the proper size for a particular product, the experience curve should lead one to understand that high-volume products will eventually cost less per unit of output than low-volume products, and that large products will inherently be built in lower volume than significantly smaller products.
In the case of the power industry, it is very difficult to double unit volume if the size of a single unit is so large that it takes a minimum of 5 years to build and if the total market demand is measured in tens or hundreds of units.
Engines vs Power Plants
The Adams Engine philosophy of small unit sizes is based on aggressively climbing onto the experience curve. If a market demand exists for 300 MW of electricity, distributed over a wide geographic area, traditional nuclear plant designers would say that the market is not yet ready for nuclear power, thus they would decide to learn nothing while waiting for the market to expand.
In contrast, atomic engine makers may see an opportunity to manufacture and sell 15 units, each with 20 MW of capacity.
Depending on the distribution of the power customers, there might be an opportunity to produce 150 machines, each with 2 MW of capacity. Though 2 MW sounds small to power plant people, 2,000 kilowatts is enough electricity for several hundred average American homes.
Though it sounds incredibly far-fetched to people intimately involved with present-day constraints regarding fissionable material, that same market might even be supplied with 1,500 machines producing 200 kilowatts each. That is enough power to supply a reasonably sized machine shop, farm or apartment building with electricity. It might even be supplied by 15,000 machines producing 20 kilowatts each, or enough for a small group of cooperative neighbors to share. Current gas turbine technology begins at the 20 kilowatt level.
With the completion of each engine, the accumulated experience of design, production and engine operation will increase and provide opportunities for cost reductions.
There is plenty of competition and incentive for this cost reduction since there are dozens of fossil fuel engine makers who currently serve the need for power in smaller markets.
If the producers of Adams Engines are successful at providing the existing market need, the traditional nuclear suppliers may never see a demand build up for 1000 MW, and they may never even start on their own learning curve.
Note: This article originally appeared in the May 1996 issue of Atomic Energy Insights, when it was still a paper newsletter. It addresses numerous questions about small and micro reactors that are still frequently asked today, which makes it worth republishing. For historical reasons, I’ve decided not to change anything.
Good Article – Hard to Argue With!
So, I won’t.
I am surprised China is following the old script. They talk like the next HTGRs after HTR-PM would be larger. Small enough reactors CAN radiate off their decay heat. Not sure why they do not learn from history.
@Robert Margolis
Perhaps you know something I don’t. AFAIK, the Chinese HTR-PM reactor modules will not grow in size or significantly increase in thermal power output. Power plants using those modules may be larger, but only by using more reactor modules for each steam turbine.
Larger steam turbines are already in production for coal units. HTR-PM modules produce the right kind of steam conditions.
Just checked and stand corrected. HTR-PM600 is one turbine with six reactors.
How is the pebble fuel testing going? Do we have a supplier yet?
Prophet … Micro reactors are all the rage today.
I would love to see them being sold!
Excellent article about no clear data proving economy of scale. This should have been part of the planning a long time ago.
I shared the knowledge a long time ago. Note original publication date – May 1, 1996.
A couple of thoughts. Some decades ago, when interest rates were above 10% and reactors were scaling up from 600 MWe to 1100 MWe and taking many more years to construct, we did a calculation in an engineering economics class. If it takes several years longer to construct a larger reactor, and construction work in progress (CWIP) interest and allowance for funds used during construction (AFUDC) get captured and capitalized at the time of commercial operation, one can demonstrate that fixed charges actually increase with larger reactors! A second example is the scale-up from the Yankee Rowe 185 MWe plant to the AP1000. Both utilize(d) canned reactor coolant pumps. While Yankee did not need external cooling of the RCPs, the AP1000 was found to need such cooling to maintain reliability. Added complexity, added cost. I believe economy of scale can be reversed if one could mass produce small modular reactors in a factory, much like natural gas combustion turbines now are.
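A rough sketch of that engineering-economics exercise follows. Every figure in it is a hypothetical assumption chosen only to illustrate the mechanism (the overnight costs, interest rate, and construction schedules are not actual plant data); the point is that interest capitalized during a longer construction period can erase the per-kilowatt savings of a larger unit:

```python
def capitalized_cost_per_kw(overnight_cost_per_kw, interest_rate, years):
    # Overnight cost plus interest capitalized during construction (AFUDC-style),
    # assuming spending is spread evenly over the construction period.
    spend_per_year = overnight_cost_per_kw / years
    total = 0.0
    for year in range(years):
        # each year's spending accrues interest from mid-year until commercial operation
        total += spend_per_year * (1 + interest_rate) ** (years - year - 0.5)
    return total

# Hypothetical 600 MWe unit: $1,000/kW overnight cost, 6-year build, 10% interest
print(round(capitalized_cost_per_kw(1000, 0.10, 6)))   # ~1,349 $/kW
# Hypothetical 1,100 MWe unit: $900/kW overnight (scale benefit), but a 10-year build
print(round(capitalized_cost_per_kw(900, 0.10, 10)))   # ~1,504 $/kW
```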
Brain dead management is the common thread.
MBAs who have never actually stepped outside to smell the coffee. Sadly they are everywhere, in government as well as in industry.
I have often wondered why the decay heat can’t be used to operate a steam turbine at low power during the period when heat still needs to be actively removed. If the main turbine can’t be operated below about 5% power, then maybe a smaller turbine could be used for the purpose. Would that be more expensive than the present method of dealing with decay heat?
@Bill
This is done; see RCIC (small) and HPCI (larger) for BWR. They use an impulse turbine called a ‘Terry Turbine’… turbines are exhausted to the suppression pool; pumps normally take suction from refueling water (RCIC) and suppression pool (HPCI). Some BWR use isolation condensers instead of RCIC (I think).
RCIC = Reactor Core Isolation Cooling
HPCI = High Pressure Coolant Injection (for small break LOCA)
I really do subscribe to the “economies of scale” arguments with regard to reasonably large reactors of 500-1300 MWe, installed at multi-unit sites, being more economical (albeit uninsurable) than smaller units (or worse, distributed micro-Rx). Perhaps this trend is like a ‘bathtub curve’ where the reactors become more expensive as they tend toward civil engineering megaprojects. I am surprised to learn that this certainty is contested. I appreciate the ‘factory made’ argument where SMR (or worse, distributed micro-Rx) hardware is cheaper to make, but they’re not going to be cheaper to operate – big picture. One comical example in my sandbox (not an opinion): NuScale WILL require 24x the core design/licensing effort of a large PWR, since 12 cores will be designed and licensed with existing Framatome methods for half the MWe of a 1200 MWe Gen2 LWR. The core design of NuScale is not in any way easier to design (likely more challenging) than larger LWR cores – especially if an assembly must be discharged early due to grid or FM damage, creating an asymmetry or energy shortfall… There is no hand-wavy method that allows NuScale cores to be licensed without this explicit due diligence. [It is my opinion that] it will not be cheaper to maintain 12 Rankine plants over their lifetime… micro-Rx typically have a cash flow of tens to hundreds of dollars per hour – it is pretty clear that the salary-to-output ratio will make them fully uncompetitive, even in markets where electricity retails for $300/MWhr, if the large LWR is uncompetitive in markets where electricity retails for $190/MWhr (PJM).
$190/MWhe retail after T&D and fees.
One comical example in my sandbox (not an opinion): NuScale WILL require 24x the core design/licensing effort of a large PWR, since 12 cores will be designed and licensed with existing Framatome methods for half the MWe of a 1200MWe Gen2 LWR.
I don’t understand why a core that will be identical for every unit will require more design / licensing effort. Can you help me understand this?
@ michael scarangella
I know little about how much money, time or manpower was actually saved by the fact that Byron/Braidwood and Marble Hill, all identical Westinghouse four-loop PWR plants, saved on NRC licensing, but Byron/Braidwood used this tactic again on their plant license extension, so it must have helped. This should be applicable for the Identical NuScale plants, and by Identical, I mean Identical. They all had a common safety analysis, PSAR (Preliminary Safety Analysis Report) and FSAR (Final Safety Analysis Report). (Marble Hill did not have an FSAR as it did not get an OL.) Shortly before Marble Hill pulled the plug, they received an order to relocate TG lube oil(?) piping in the Turbine Building because Byron/Braidwood had moved this piping, and ALL plants had to be identical.
A large problem with graphing costs is the attitude of the NRC toward the BOP (Balance of Plant). Before the NRC, most equipment other than the NSSS (Nuclear Steam Supply System) and the equipment absolutely necessary to cool down the plant was NOT within the bounds of regulatory review. With the establishment of the NRC, the ratcheting began. TMI-2 cost twice as much as TMI-1, most of which was from NRC mandates needed to obtain an operating license and the doubling in construction time due to these required modifications. Requirements to meet TMI Lessons Learned added so many items to the “Nuclear Safety Related” classification that Rancho Seco had to add two more diesel generators, among millions of dollars of other changes. I seriously doubt that any of the latest licensed NPPs are a full order of magnitude safer than the very first NPP. To what end? How many thousands of years of total operation of US NPPs, and no one has been killed or died from their operation.
Rich:
I believe it is important to recall the basis used to justify separating nuclear regulatory functions from nuclear energy development and promotion functions, which were initially both done by the Atomic Energy Commission.
The 140 days’ worth of public hearings about the adequacy of the Emergency Core Cooling System in the early 1970s raised considerable doubts in the minds of both legislators and the general public.
The ECCS controversy raised real issues that were EVENTUALLY resolved with very expensive testing, but a major cause of the controversy was a [somewhat legitimate] question about the scaling factors used as reactor power outputs doubled, doubled, doubled and doubled again. (60 MW, 120 MW, 240 MW, 480 MW, ~1000 MW)
Here is one version of the history https://nuclearhistory.wordpress.com/2014/01/29/the-eccs-controversey-of-the-1960s-and-1970s-usa-in-the-light-of-march-2011/
@David
The cores are identical only if they stay identical, and they will not. The NuScale core will not be loaded and discharged as a unit – it will be shuffled. The design certification shows the reference design and planned shuffle scheme, with fuel being loaded for 3 cycles. The biggest variables that will yield varying and compounding divergence from the reference design are availability (did the unit trip or did it run well?) or, WORSE, whether any fuel assemblies must be discharged prematurely due to damage (leaking?). Refueling outages are planned a cycle or [typically] more in advance and are fixed to begin on specific dates – assumptions are made about how the reactor will operate, and fuel is bought to meet requirements given all those assumptions. There is typically ~20 EFPD of slop that can be accommodated for unplanned shutdowns before the planned cycle falls outside of its licensing space. If the reactor has an unplanned shutdown, loading the fuel you’ve already procured will make the reactor more reactive, which will push MTC more positive – at some point it violates tech specs.
So rather than assuming all the fuel in all 12 modules will be at varying stages of the same trajectory – all covered by a single licensing analysis – you might want to imagine how the various cores might diverge, because that is what they will do. It will get really interesting with that 6-foot core when a damaged fuel assembly cannot be loaded for its 2nd cycle; there will be a tremendous radial tilt outside of anything we see in other PWRs.
Interesting thoughts! I recently saw a lecture (https://www.youtube.com/watch?v=A998uWPPtX4&feature=youtu.be&ab_channel=TitansofNuclear) from the founder of The Energy Impact Center (https://www.energyimpactcenter.org/) who believes the costs that accompany nuclear energy aren’t tied to utilizing or creating the energy source itself; they’re tied to the construction management aspect. Energy Impact Center recently launched OPEN100 (https://www.open-100.com/) to collaborate with professionals across relevant industries to provide a solution that builds nuclear power plants quickly and at a significantly decreased cost. I wonder if their final solution will involve a smaller plant.