Pick up almost any book about nuclear energy and you will find that the prevailing wisdom is that nuclear plants must be very large in order to be competitive. This notion is widely accepted, but, if its roots are understood, it can be effectively challenged.
When Westinghouse, General Electric, and their international competitors first learned that uranium was an incredible source of heat energy, they were huge, well-established firms in the business of generating electrical power. Each had made a significant investment in the infrastructure necessary for producing central station electrical power on a massive scale.
Experience had taught them that larger power stations could produce cheaper electricity, and that a central power station could effectively distribute its electricity to a large number of customers whose varying needs allowed the capital investment in the station to be shared among them all.
Their experience was even codified by textbook authors in a rule of thumb which said that the cost of a piece of production machinery varies with its throughput raised to the 0.6 power. (According to this rule, a pump that can move 10 times as much fluid as another pump of similar design and function should cost only about four times as much as the smaller pump.) They, and their utility customers, understood that it was much cheaper to deliver bulk fuel by pipeline, ship, barge, or rail than to distribute smaller quantities of fuel by truck to a network of small plants.
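For readers who want to check the arithmetic, here is a minimal sketch of that six-tenths rule. The function name and pump prices are hypothetical; only the 0.6 exponent and the 10-to-1 capacity ratio come from the text above.

```python
# Six-tenths rule of thumb: cost scales with throughput raised to 0.6.

def scaled_cost(base_cost: float, capacity_ratio: float, exponent: float = 0.6) -> float:
    """Estimate the cost of a larger unit from the cost of a smaller one."""
    return base_cost * capacity_ratio ** exponent

small_pump_cost = 10_000  # hypothetical price of the smaller pump, in dollars
big_pump_cost = scaled_cost(small_pump_cost, capacity_ratio=10)
print(f"A 10x pump costs about {big_pump_cost / small_pump_cost:.1f}x as much")
# 10 ** 0.6 is roughly 3.98, i.e. about four times the cost for ten times the flow
```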
Just as individuals make judgements based on their experience of what has worked in the past, so do corporations. It was the collective judgement of the nuclear pioneers that the same rules of thumb that worked for fossil plants would apply to nuclear plants.
There have now been 110 nuclear power plants completed in the United States over a period of almost forty years. Though accurate cost data is difficult to obtain, it is safe to say that there has been no predictable relationship between the size of a nuclear power plant and its cost. Despite the graphs drawn in early nuclear engineering texts, which were based on scanty data from fewer than ten completed plants, there is not a steadily decreasing cost per kilowatt for larger plants.
It is possible for engineers to make incredibly complex calculations without a single math error and still come up with a wrong answer if they use a model based on incorrect assumptions. That appears to be the case with the “bigger is better” model used by nuclear plant planners.
For example, one assumption explicitly stated in the economy of scale model is that the cost of auxiliary systems does not increase as rapidly as plant capacity. In at least one key area, that assumption is not true for nuclear plants.
Since the reactor core continues to produce heat after the plant is shut down, and since a larger, more powerful core releases less of its heat to its immediate surroundings because of its smaller surface-to-volume ratio, it is more difficult to provide decay heat removal for higher capacity cores. It is also manifestly more difficult, time-consuming, and expensive to prove that the requirements for heat removal will be met under all postulated conditions without damaging the core. For emergency core cooling systems, overall costs, including regulatory burdens, seem to have increased more rapidly than plant capacity.
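The geometry behind that statement is easy to verify. The sketch below uses a cylinder scaled uniformly by a factor of two; the dimensions are hypothetical, but the conclusion (volume, and thus decay heat, grows as the cube of the scale factor, while the heat-rejecting surface grows only as the square) holds for any shape.

```python
import math

# Surface-to-volume sketch: scale a cylindrical core uniformly by 2x.
# Decay heat roughly tracks core volume (k**3), while the surface
# available to reject that heat grows only as k**2.

def surface_to_volume(radius: float, height: float) -> float:
    """Surface-to-volume ratio of a cylinder (side plus both ends)."""
    area = 2 * math.pi * radius * (radius + height)
    volume = math.pi * radius ** 2 * height
    return area / volume

small = surface_to_volume(radius=1.0, height=2.0)  # hypothetical small core
large = surface_to_volume(radius=2.0, height=4.0)  # same shape, scaled 2x
print(f"small core S/V: {small:.2f}, large core S/V: {large:.2f}")
# Doubling every dimension halves the surface-to-volume ratio, so each
# watt of decay heat has half as much surface through which to escape.
```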
Curve of Growth
Though the “economy of scale” did not work for the first nuclear age, there is some evidence that a different economic rule did apply. That rule is often referred to as the experience curve. According to several detailed studies, it appears that when similar plants were built by the same organization, the follow-on plants cost less to build. According to a RAND Corporation study, “a doubling in the number of reactors [built by an architect-engineer] results in a 5 percent reduction in both construction time and capital cost.”
This idea is extremely significant. It tells us that nuclear power is no different conceptually than hundreds of other new technologies.
The principle that Ford discovered is now known as the experience curve. . . It ordains that in any business, in any era, in any capitalist competition, unit costs tend to decline in predictable proportion to accumulated experience: the total number of units sold. Whatever the product (cars or computers, pounds of limestone, thousands of transistors, millions of pounds of nylon, or billions of phone calls) and whatever the performance of companies jumping on and off the curve, unit costs in the industry as a whole, adjusted for inflation, will tend to drop between 20 and 30 percent with every doubling in accumulated output.
George Gilder, Recapturing the Spirit of Enterprise: Updated for the 1990s, ICS Press, San Francisco, CA, p. 195.
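The experience curve has a standard mathematical form: each doubling of accumulated output multiplies unit cost by a constant factor. The sketch below is illustrative only; the 5 percent rate comes from the RAND study quoted earlier, and the 20 and 30 percent rates bound the range Gilder gives.

```python
import math

# Experience curve: each doubling of accumulated output multiplies
# unit cost by (1 - rate). Rates are from the RAND study (5%) and
# the Gilder quote (20-30%) cited above.

def unit_cost(first_unit_cost: float, units_built: int, rate: float) -> float:
    """Projected cost of the nth unit for a given per-doubling reduction."""
    doublings = math.log2(units_built)
    return first_unit_cost * (1 - rate) ** doublings

for rate in (0.05, 0.20, 0.30):
    cost = unit_cost(first_unit_cost=100.0, units_built=8, rate=rate)
    print(f"{rate:.0%} per doubling: 8th unit costs {cost:.0f}% of the first")
# Three doublings take the 8th unit to ~86%, ~51%, and ~34% of the
# first unit's cost at 5%, 20%, and 30% per doubling, respectively.
```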
In applying this idea, however, one must realize that the curve is reset to a new value when a new product is introduced and that there must be competition in order to keep firms focused on lowering unit costs and unit prices. In the nuclear industry, new products in the form of bigger and bigger plants were continuously introduced, and, after the dramatic rise in the cost of fossil fuel during the 1970s, there was little competitive benefit in striving for cost reduction during plant construction.
When picking the proper size for a particular product, the experience curve should lead one to understand that high-volume products will eventually cost less per unit of output than low-volume products, and that large products will inherently be built in lower volumes than significantly smaller products.
In the case of the power industry, it is very difficult to double unit volume if a single unit is so large that it takes a minimum of five years to build and if the total market demand is measured in tens or hundreds of units.
Engines vs Power Plants
The Adams Engine philosophy of small unit sizes is based on aggressively climbing onto the experience curve. If a market demand exists for 300 MW of electricity, distributed over a wide geographic area, traditional nuclear plant designers would say that the market is not yet ready for nuclear power; they would thus decide to learn nothing while waiting for the market to expand.
In contrast, atomic engine makers may see an opportunity to manufacture and sell 15 units, each with 20 MW of capacity.
Depending on the distribution of the power customers, there might be an opportunity to produce 150 machines, each with 2 MW of capacity. Though 2 MW sounds small to power plant people, 2,000 kilowatts is enough electricity for several hundred average American homes.
Though it sounds incredibly far-fetched to people intimately involved with present-day constraints regarding fissionable material, that same market might even be supplied with 1,500 machines producing 200 kilowatts each. That is enough power to supply a reasonably sized machine shop, farm, or apartment building with electricity. It might even be supplied by 15,000 machines producing 20 kilowatts each, or enough for a small group of cooperative neighbors to share. Current gas turbine technology begins at the 20 kilowatt level.
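Combining these unit mixes with the experience curve sketched earlier makes the case for small units explicit. The figures below are illustrative: the unit sizes and the 300 MW market come from the text, while the 20 percent per-doubling rate is the low end of the Gilder range.

```python
import math

# For a fixed 300 MW market, smaller units mean more units built, and
# more units mean more doublings of accumulated experience. A 20%
# per-doubling reduction is the low end of the Gilder range above.

MARKET_MW = 300
RATE = 0.20

for unit_mw in (20, 2, 0.2, 0.02):
    units = round(MARKET_MW / unit_mw)
    doublings = math.log2(units)
    last_vs_first = (1 - RATE) ** doublings
    print(f"{unit_mw:>5} MW units: {units:>6} built, {doublings:4.1f} doublings, "
          f"last unit at {last_vs_first:.0%} of the first unit's cost")
# 15 units allow ~3.9 doublings (last unit ~42% of the first);
# 15,000 units allow ~13.9 doublings (last unit ~5% of the first).
```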
With the completion of each engine, the accumulated experience of design, production and engine operation will increase and provide opportunities for cost reductions.
There is plenty of competition and incentive for this cost reduction since there are dozens of fossil fuel engine makers who currently serve the need for power in smaller markets.
If the producers of Adams Engines are successful at meeting the existing market need, the traditional nuclear suppliers may never see demand build up for 1,000 MW plants, and they may never even start on their own learning curve.
Note: This article originally appeared in the May 1996 issue of Atomic Energy Insights, when it was still a paper newsletter. It addresses numerous questions about small and micro reactors that are still frequently asked today, which makes it worth republishing. To preserve the historical record, I’ve decided not to change anything.