1. Therefore the plant owners expect that the pumps will operate reliably for the full 60-year lifetime of the reactor plant that they have purchased.

    Or as long as the steam generators to which they’re attached, which appears to be much less.

    Failures in modes not predicted by modeling are how San Onofre’s SG replacement went sour. You’d hope people would have learned.

    1. The “failures” that caused the project to go sour all occurred well before the design-modeling failure of the new SGs. SONGS needed new SGs. The original US SG vendor was no longer in the business of making SGs. Why? Does that represent a failure of something? What drove them out of business? Next, SONGS management decided to do two potentially risky things at the same time: find a new vendor to make the replacements (always a risk) and pursue a substantial capacity upgrade (always a risk even with the original vendor, a double risk with a new one). The path of least risk was to have the new vendor clone the old, smaller design. But “something” drove these SONGS decision makers to take the route of maximum risk. It’s not hard to imagine exactly what that “something” was; does that represent a failure?
      With hindsight about the mechanisms of the failure, it is easy to see that it was uniquely related to an engineering failure of the upgrade. If you needed a part for your billion-dollar machine, and the original manufacturer was no longer in business, would you ask the new guy to build another one, and oh by the way, make it 20% bigger?
      At this point there is nothing to be gained by knocking the SONGS management decision. But something pressured them to take this route. It’s more likely the failure rests in what causes those pressures.

      1. “But “something” drives these SONGS decision makers to take the route of maximum risk. It’s not hard to imagine exactly what that “something” process was”

        Could you actually _say_ what you think that something was?

      2. @ mjd and Jim Baerg

        When the main LP turbines were replaced at SONGS, they weren’t replaced with clones of the old ones. True, turbines are not important to reactor safety, but the temptation is always there: as long as you are replacing a piece of equipment, why not replace it with something more efficient, something better? The new turbines produced more MWs than the old ones, and the new S/Gs produced more MWs than the old ones. I don’t fault the decision makers for trying to increase the MW output of the plants. I say this with the acknowledgment that I am not a design engineer and perhaps fail to appreciate the inherent risk in a new vendor with an upgraded design. SCE and MHI failed to ensure the correct computer codes were used with respect to the tube arrangement, and thus the testing was inaccurate. Had they done so, the results would have been different, the industry would perhaps be hailing the innovative genius of SCE, and one of our engineers intimately involved with the process might still be alive (he dropped dead not long after testifying in court, my guess is of stress).
        SONGS’ S/Gs demonstrate that pushing the envelope doesn’t always result in success, but if engineers and decision makers refuse to push the envelope out of fear of failure, won’t our progress be stifled?

    2. From EMD’s website:

      EMD’s canned motor pump technology has been selected for the next generation Westinghouse AP1000 nuclear power plant. This advanced pump is based on technology that is currently operating in other power plant applications. The longest running canned motor pump has operated for over 40 years at Mitchell Generating Station in Pennsylvania and it is still operating.

      So, they are not going from a computer simulation directly to the ‘World’s Largest Canned-Motor Pump’, but have built smaller pumps (not clear how many, what size).

      From what I read about Mitsubishi’s steam generator problem, there was an error in the inputs to the simulation, not that the modeling method was flawed. So, one piece of advice could be ‘don’t make a mistake’ … I don’t know whether they could have done additional testing that would have revealed the problem, before constructing the generator. Perhaps they did rely too much on the models, but if you are implying that one should not employ simulations, I wouldn’t agree.

      1. My understanding is that Mitsubishi’s thermalhydraulics code underestimated the steam quality in certain regions of their steam generator design. The thermalhydraulics code output was one of the inputs to the code that analyzed steam generator tube vibration. While higher quality steam is great for overall plant performance, it does not supply sufficient damping of steam generator tube motion.

        The NRC later concluded that Mitsubishi did not adequately validate their thermalhydraulics code.
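        To make the coupling concrete, here is a toy Connors-type fluid-elastic instability check, NOT MHI’s actual methodology and with every number invented for illustration. It shows the direction of the effect described above: at very high steam quality, the two-phase density and (especially) the tube damping drop, so the stability margin against tube vibration shrinks.

```python
# Toy illustration (NOT MHI's actual codes): a Connors-type
# fluid-elastic instability check for a steam generator tube.
# All numerical values below are invented for illustration only.

import math

def connors_critical_velocity(K, f_n, D, m_t, zeta, rho):
    """Critical cross-flow velocity per the Connors expression:
    U_c = K * f_n * D * sqrt(m_t * (2*pi*zeta) / (rho * D**2))"""
    delta = 2 * math.pi * zeta          # logarithmic decrement
    return K * f_n * D * math.sqrt(m_t * delta / (rho * D**2))

def two_phase_density(alpha, rho_liquid=740.0, rho_vapor=36.0):
    """Homogeneous two-phase density; alpha is the void fraction."""
    return alpha * rho_vapor + (1 - alpha) * rho_liquid

# Assumed tube properties (illustrative only)
K, f_n, D, m_t = 3.0, 70.0, 0.019, 0.6   # -, Hz, m, kg/m

for alpha, zeta, u_gap in [(0.80, 0.015, 3.0),    # moderate quality
                           (0.995, 0.003, 5.0)]:  # very high quality
    rho = two_phase_density(alpha)
    u_c = connors_critical_velocity(K, f_n, D, m_t, zeta, rho)
    # A ratio above 1.0 indicates fluid-elastic instability.
    print(f"alpha={alpha:5.3f}  U_crit={u_c:5.2f} m/s  "
          f"U_gap/U_crit={u_gap / u_c:4.2f}")
```

        With these made-up inputs the moderate-quality case stays below the instability threshold while the high-quality case crosses it, which is the qualitative trap a code that underpredicts steam quality would hide.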

  2. BWRs are tested, safe and wonderful IMHO. They work. They could be standardized. I don’t really understand why everyone seemed to jump on the AP1000 boat before one was even built. I know the design has been tested some, but there are always bound to be bugs to work out. Mind you, we are not building that many, and anything is better than nothing, but old and boring is still fine with me.

    1. Sadly, GE just hasn’t seemed as interested in pushing the ESBWR as much as Westinghouse was the AP1000. GE has a lot of other businesses (including every other type of power generation), which is probably part of the explanation. Maybe Jeffrey Immelt’s personal preferences also came into play. Rod has written a few articles on this.

      GE probably figures that if nuclear starts to take off, they can jump in late and still be a major player. Could be right or wrong.

      1. Fermi III set for COL. This first-of-a-kind ESBWR will probably cost close to $10 billion. “NRC Chairman Stephen Burns did not announce a date for a hearing on the staff recommendation, but did commit the agency to `issuing a final decision promptly.’”

        So there. Of course, DTE hasn’t committed to actually building it yet, and probably won’t until after the dust-and-feathers surrounding EPA’s Clean Power Plan settle a bit. Get a better feel for how much the country really values low-carbon energy before betting the company on it.

        1. jheez Ed, thought they were a little further along and cheaper! I actually meant to say PWR, as that is the workhorse of the nuclear fleet; I said BWR as it was the last type I specifically read up on and evidently liked a million years ago. Pressurized water reactors are just fine. They are tried-and-tested reliable, not to mention extremely safe and efficient with updated tech. Clean, relatively cheap, and fast and cheap to get up and running are the issues I am most concerned with in energy now.

          In perspective, all things I do understand together, I feel now like the AP 1000 is more of a bow to anti nuke influence than a better design that was actually needed. Probably will be proved wrong about that. Kinda hope I will.

        2. “Of course, DTE hasn’t committed to actually building it yet”
          Most probable next step:
          FSBO, 1 nuke plant with pre-approved buildable lot. DTE is a gas company, with a nuke plant.

          1. As a regulator in the State of Michigan I am fairly well informed of what is going on with Fermi 3, and even though DTE Electric keeps their cards close to the vest, their intentions are fairly clear. While they may be the first company to hold an ESBWR COL, they are in no hurry to build. I expect Dominion’s North Anna 3 to be the first ESBWR constructed. Dominion has an early site permit, and I expect they will move forward soon after COL is issued. DTE will watch, learn, and continue to forecast the situation and do what they think is best. I expect that they will consult with many other people along the way.

        3. They shouldn’t build the ESBWR; they should stick with the AP1000. If we get into a situation where everyone is building a different plant, costs increase. Yes, AP1000 has delays, but by the time construction begins, people have experience. That means that DTE can be confident in the schedule.

          1. @xoviat

            Did you read the post? How confident are you that the RCP redesign will work well enough to complete the units already in progress? Suppliers have been announcing that it is ready since 2012.

          2. @Rod Adams

            DTE will probably not start construction until after December 2015. By that time, it will be clear whether the AP1000 will work or not.

            But now I understand that you really think that the AP1000 could be a failure. I simply had not considered that possibility; I always thought that it was a certainty that Westinghouse would sort it out.

            If you are correct, then there is apparently cause for great concern. I shudder to think about the future of nuclear in the US if the power companies have to write down Vogtle and VC Summer. In that case, DTE might simply decide that nuclear is too risky.

            1. @xoviat

              But now I understand that you really think that the AP1000 could be a failure. I simply had not considered that possibility; I always thought that it was a certainty that Westinghouse would sort it out.

              As I said in my opening paragraph, I have had a few sleepless nights thinking about the implications of a complete failure of the AP1000.

              There is an historical example of a promising design (GA’s HTGR, with the demonstration unit at Ft. St. Vrain) that was done in by the inability to correct what might have seemed to be a fairly minor engineering issue promptly enough to allow future sales and revenues to make up for early troubles.

              I am confident that, given enough time and resources, engineers and manufacturers could resolve the challenges and produce reliable pumps. I am not confident that there is enough time left or enough available resources to successfully complete the task. In fact, I am leaning more and more towards pessimism.

              That will have huge implications, both financially and politically. It will do nothing for America’s technical reputation in nuclear energy and nothing for our “gold standard” of regulation.

          3. Folks,

            Please consider the fact that ESBWR does not use any large reactor water recirculation pumps as ABWR uses or reactor coolant pumps as AP1000 uses. It is entirely natural circulation. Instead of reactivity control being provided by control rods and recirculation flow variation, it is provided by control rods and variation in feedwater pre-heating from the 7th point feedwater heater.

            In a normal BWR and in ABWR, large RWR pumps (external for BWRs with internal jet pumps, and internal RWR pumps for ABWR) are varied in speed (although some plants have constant speed pumps and variable position control valves) to vary RWR flow.

            When RWR flow goes up, more steam voids are “blown” off the core, moderator density in the core goes up, reactivity goes positive, more fission, more power, steam rate goes up and pressure rises, causing the turbine control system to open the turbine control valves more to increase generator load. The opposite happens when RWR flow goes down.

            ESBWR is however different. RWR flow is entirely natural circulation, so to vary reactivity, flow can be bypassed around the 7th point feedwater heater. More bypass flow, more cool feedwater, more positive reactivity, more fission, power goes up, steam goes up, preheating returns to normal and the turbine control valves open more. The opposite happens when reducing bypass flow around the 7th point feedwater heater.

            This arrangement obviates any large rotating components in the reactor coolant system. Thus, the problem that Westinghouse is having with AP1000 RCPs is obviated altogether.
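            The two control schemes described above can be captured as a toy sign-convention model. The coefficients here are invented; this only shows the direction of each feedback chain, not real plant dynamics.

```python
# Toy sign-convention model of the two BWR power-control schemes
# described above. Coefficients are invented; only the direction of
# each feedback is meaningful, not the magnitudes.

def power_response_rwr(d_flow):
    """Conventional BWR/ABWR: raising recirculation (RWR) flow sweeps
    steam voids out of the core, raising moderator density and
    reactivity, so power moves in the same direction as flow."""
    void_fraction_change = -0.5 * d_flow       # more flow -> fewer voids
    reactivity_change = -0.8 * void_fraction_change
    return reactivity_change                   # > 0 means power rises

def power_response_esbwr(d_bypass):
    """ESBWR: bypassing flow around the 7th-point feedwater heater
    cools the feedwater, which collapses voids in the core and adds
    reactivity, so power moves in the same direction as bypass flow."""
    feedwater_temp_change = -0.6 * d_bypass    # more bypass -> cooler FW
    reactivity_change = -0.7 * feedwater_temp_change
    return reactivity_change

print(power_response_rwr(+1.0) > 0)     # raise RWR flow -> power up
print(power_response_esbwr(+1.0) > 0)   # raise bypass   -> power up
print(power_response_rwr(-1.0) < 0)     # lower RWR flow -> power down
```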

            I believe that ESBWR is a design superior to any PWR (Westinghouse AP1000, Areva EPR, Mitsubishi APWR, South Korea’s APR-1400, etc.). It has 1/2 the pressure that a PWR has in its reactor coolant system, it has no boric acid so no RPV head boric acid corrosion problems, it has no S/Gs so no primary-to-secondary leak problems (the secondary side is hot so maybe that’s not an advantage), and it has no large active components (like massive RWR pumps or RCPs).

            But everyone is so accustomed to navy submarine PWRs (as I am, having been a sub RO a lifetime ago). My opinion? ESBWR is best for a super large plant, not AP1000.

          4. Ioannes, if ESBWR is so good and simple, then how come it’s priced at $10 billion a pop in the USA?

            10 billion dollars for a boiler is rather much. A factor of 10+ too much.

            (curious question from me – I’m actually a big fan of the ESBWR design and am hoping for some answers to these unbelievable price tags).

          5. Cyril,

            I do not have a good answer for you as to why the Economic Simplified Boiling Water Reactor is neither economic nor simplified. Sorry. I couldn’t help myself. I am bad. But seriously, the only thing I can say is that the cost of regulation drives the price up, and GE-Hitachi is very risk-averse.

            Now that said, if I were a utility looking for a large reactor, then it would be the passive safety design of an ESBWR – no large rotating equipment in the reactor coolant system.

            And if I were looking for a small reactor, then it would be a NuScale PWR (again, no moving parts in the reactor coolant system).

            But I think in today’s climate, any new design – SMR or large Generation III+ reactor will cost $5 to $10 billion. And Generation IV designs like General Atomics’ EM2 gas cooled reactor or the newer molten salt designs won’t even be considered till the NRC has regulations for them. I am not even sure that the NRC could handle GE-Hitachi’s PRISM design. 🙁

          6. Ioannes,
            Neither is the NRC, apparently. Four or five years ago the NRC was too busy with small LWRs to consider S-PRISM. Today they’re cutting back on staff. But about that ESBWR price tag, from the linked article:

            The cost of the reactor, at an estimated $5,000/kW, which is low by global standards, will be $7.67 billion. Since this is the first-of-a-kind construction, a more conservative cost rate, speculatively set at $6,500/kW, comes out to $9.98 billion. Add in the balance of plant costs and the whole project easily approaches $12 billion. That’s a bet the company number which is why the firm, relying on the “prudent investor” principle, will check and recheck everything before breaking ground.

            Then there’s the question of how many additional ESBWR customers will be needed for GE-H to make a production run.
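            The quoted per-kW figures are easy to sanity-check. Assuming a net output of about 1,535 MWe (my number, roughly the ESBWR’s rated output; the article doesn’t state it), the arithmetic works out:

```python
# Sanity check of the quoted overnight-cost arithmetic.
# The 1,535 MWe net output is an assumption (roughly the ESBWR
# rating); the $/kW rates come from the quoted article.

net_output_kw = 1_535_000            # ~1,535 MWe, assumed

low_estimate  = 5_000 * net_output_kw    # $/kW x kW
high_estimate = 6_500 * net_output_kw

# Both land close to the article's $7.67B and $9.98B figures,
# modulo rounding of the assumed plant size.
print(f"${low_estimate / 1e9:.2f} billion")
print(f"${high_estimate / 1e9:.2f} billion")
```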

        4. I did notice this is creeping closer (although since the AP1000 at Sanmen started construction in 2009 … still going to be well behind).

          $10B? The AP1000 costs less … although the ESBWR output is ~ 35% greater. I thought this would be cheaper to build …

          1. ESBWR Natural Circulation eliminates recirculation piping, the equivalent of RCS piping in a PWR. This reduces LOCA risk substantially. In my opinion, it might be the safest certified large commercial plant design by far. But no ESBWR has ever been built; they’re modeled, and that’s about it.

            Canned coolant pumps worked well in my military experience, but they were far from cheap. As a matter of fact, no Navy plant I operated made one thin dime of profit. In the end, economics will make or break a technology in the private sector.

  3. I still think that the entire notion of building one huge reactor is crazy. Simply from a logistical standpoint (Aside, logistics is where my experience lies and is what I’m currently studying) it is insane. Why force the use of specialist outsize haulage when we could just build multiple smaller units and use more standardised haulage to move the parts of those more sensibly sized reactors?

    I recall an old NASA tagline that the big players would do well to bear in mind: “Faster, better, cheaper.”

    1. Many people do not want to live next to a nuclear reactor. So building a plant with 6 huge reactors (not one) on the same site, some distance from a population center (not too far) … makes a lot of sense.

      Also if you are supplying electricity to a city of 10 million people (China has a number of these by the way), you need big, unless you want to build thousands of these things.

  4. From years of poking around in the older plants, I’ve realized that the plants themselves were novel in their day, but the components were fairly ordinary and identical to those used in fossil stations, paper mills, etc., so the components were not likely to be the source of many surprises to the schedule or budget. Plus, practically everything is designed to be tested, bypassed, and replaced. I think at some point it’s wise to say “it’s good enough.”

    Replacing an obnoxious RCP seal with a canned pump might have seemed like a good idea at the time. But a seal can be replaced in-kind in a few days, and a totally new seal design can be installed in a refueling outage. Replacing a canned RCP is not so easy.

    As my boss used to say, “today’s solution is tomorrow’s problem.”

  5. NS SAVANNAH had canned main coolant pumps, supplied under two contracts in order to obtain interchangeable but not necessarily identical pumps.

    Initially the ship went to sea with four Allis-Chalmers pumps; after a failure of all four late in the ship’s life due to gas penetration of pump bearings (during a containment leak test) the ship ended up with two Allis-Chalmers and two Westinghouse pumps.

    Well done article, Rod. Without dual suppliers this is a problem. Keep us posted!

  6. Every time I see something like the following phrase, I cringe:

    “Your confidence is based on software modeling, right?”

    Software is only as good as those specifying the software requirements, developing the design, implementing the code, testing the product, V&V’ng throughout the software life cycle, and managing the configuration of all configuration items, whether code or configurable logic or documentation or hardware platform.

    People do not seem to understand that if you are going to rely on the code to do your blood-and-guts engineering instead of doing it the old-fashioned way, then you had better darn sure engineer your code in the right way.

    Oh, I’ll wager that “they’ll” say, “My code is NQA-1 compliant” and make that claim because the NRC endorsed NQA-1 in RG-1.28, so they think that is sufficient. Well, it isn’t. Subpart 2.7 in Part II of NQA-1a-2009 doesn’t begin to approach the prescriptiveness of the IEEE or IEC or CE standards for real software engineering (when it comes to this area of nuclear software engineering, the Canadians put us to shame). Besides, NQA-1 is from the American Society of MECHANICAL Engineers. It’s great for mechanical, civil / structural, etc., but NOT for either design and analysis software or digital I&C software / configurable logic. Yet that is what gets defaulted to for design and analysis software engineering (they’d do it for digital I&C if they could, except that RG-1.152 and 1.168 through 1.173 prevent them).

    When someone says, “I rely on my code to qualify my pump seal,” I want to see the Software Requirements Specification for that code, and how in a Requirements Traceability Matrix every single decomposed requirement is traced through the Software Design Description, the lines of code, and unit and integration and acceptance test steps. Such an RTM would be tens of thousands of line items if done right. But because that requires real engineering, no one wants to do it. Too heavy, too high, too hot, too hard – the 4H club.

    BTW, if the FAA and airline industry can do this successfully, then why can’t we in nuclear? Airplanes aren’t falling out of the sky due to faulty design and analysis software calculations or due to bugs in digital I&C controlling aircraft. The same is true in medical. Go to a hospital and get hooked up to all that digital I&C, and have your blood work analyzed by all those different medical software codes. If those industries can do it, then why can’t we? Why do we default to ASME of all things for design and analysis code?
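    For readers who haven’t seen an RTM, here is a minimal sketch of the bookkeeping being described, with every requirement ID and artifact name invented for illustration. A real RTM for a qualification code would run to thousands of rows; the point is only that every requirement traces through design, code, and test, and that orphans get flagged:

```python
# Minimal sketch of a Requirements Traceability Matrix check.
# All IDs and artifact names are invented; a real RTM for a
# design-and-analysis code would have thousands of rows.

rtm = {
    # requirement ID: {life-cycle artifact: reference}
    "SRS-001": {"SDD": "SDD-3.1", "code": "pump_model.f90:120",
                "unit_test": "UT-17", "acceptance_test": "AT-4"},
    "SRS-002": {"SDD": "SDD-3.2", "code": "seal_loads.f90:88",
                "unit_test": "UT-18", "acceptance_test": None},
}

REQUIRED_TRACES = ("SDD", "code", "unit_test", "acceptance_test")

def untraced(rtm):
    """Return (requirement, missing-artifact) pairs - the gaps a
    V&V reviewer would flag."""
    return [(req, artifact)
            for req, traces in rtm.items()
            for artifact in REQUIRED_TRACES
            if not traces.get(artifact)]

print(untraced(rtm))   # -> [('SRS-002', 'acceptance_test')]
```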

    1. This kind of fits the old Richard Feynman quote:

      “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

      Although in this case it is not public relations, but adequate modeling in the software. The modeling is best when based on actual experience, i.e. reality. I guess the basis of the software must be known for it to be a legitimate tool.

      You’d think they would have built enough boiler feed pumps over the years that they would have been able to properly assess the performance of these RCPs.

    2. Airplanes aren’t falling out of the sky due to … bugs in digital I&C controlling aircraft.

      As I recall, an early A330 did just that. It was in a test of the autopilot in engine-out mode. The single-engine climb worked okay, but when the aircraft approached the target altitude the software switched to altitude-capture mode. This mode did NOT incorporate any constraint on climb angle imposed by the dead engine. It commanded a pitch-up that there was insufficient thrust to accommodate, and the airplane stalled, spun and crashed. Airbus’s top test pilot and a number of others died.

      My experience with a very software-dependent car does not make me relish the idea of self-driving vehicles.  Even when I’ve reported bugs, they’ve taken a long time to fix.

        1. Unfortunate for sure, but interesting that the test pilot allowed this result. What fooled him? Almost all modern commercial planes are certified for autopilot takeoff and landing at 0/0 (ceiling and viz), but it’s not used, for a reason. When people think software of this type equates to artificial intelligence… well, they are using “artificial” intelligence. When an SMR developer says “this design is walk-away safe,” what I hear is “we don’t even need those operators.” Just my opinion.

        1. The test pilot may not have been able to react in time to recover control of the aircraft, and the airshow incident where the A-350 flew itself into the ground because the avionics refused the pilot’s throttle-up command and refused to raise the nose because of insufficient thrust (because the pilot hadn’t said “Simon Says” and pressed the go-around button!) indicates that he may have been unable to override the faulty code anyway.

          I have written DO-178B-compliant code.  The requirements traceability is rigid, and if somebody omitted a requirement that the altitude-capture code incorporate a limit on climb angle and rate due to impaired thrust, that ought to have turned up in the root-cause analysis.

          1. With all this talk about requirements tracing and specification, kindly google and read volume 2 of NUREG/CR-6734. It has excellent examples – mostly non-nuclear – of screw-ups in specifying requirements. Volume 1 says that the majority of all accidents in which software was involved were due to missing or improperly specified requirements. The Therac-25 accidents of ancient history should be sufficient to teach us to never completely trust software.

          2. “A-350 flew itself into the ground because the avionics refused the pilot’s throttle-up command”

            That is not what happened. The engines take a few seconds to spool up and the plane takes a few seconds to increase speed. The pilot only had a few seconds from the time that he realized he needed to pull up to the crash. The computer, as you said, would not allow him to pull up because he would have stalled the plane.

            From the point that the pilot realized that he needed to pull up, irrespective of the software, a plane crash was the only possible outcome. No combination of positions of the fuel valves, ailerons, elevators, and rudder could have avoided a crash from that point. There was simply not enough time.

          3. My bad:  the Airbus that flew itself into the ground was an A-320, not an A-350.  It was the Habsheim airshow incident in 1988, Air France flight 296.

            Since this is both too long and OT for AI, I’ll just mention that I put stock in the account given here, and note that the issues listed in the OEBs are sufficient to have caused the Habsheim crash even if the aircrew made no procedural errors.

  7. “You’d think they would have built enough boiler feed pumps over the years that they would have been able to properly assess the performance of these RCPs.”
    Read the referenced Part 21 reports. The first couple passed performance tests and were shipped. A later pump failed on the test stand because of a latent manufacturing defect in an impeller part supplied by a subcontractor, attributed to “loss of process control” by the sub. Read: QA problem. Sound familiar?

  8. TMI-II had a failure of the Main Steam Safety Valves (Code Safeties) during the Power Escalation Testing just a year before the incident. These valves were designed by a well-respected US valve manufacturer, and their valves were (are?) widely used as Code Safeties. All valves had to be replaced with a different vendor due to availability/time constraints, which still set back the intended operational date more than a year. After extensive investigation it seems that they had simply scaled up a smaller version to the desired capacity. They also tested them, certified them and delivered them to TMI. They were verified to lift at the desired pressure during Hot Functional testing with no problems. They broke (each and every one of them) on their first actual full-capacity, full-length actuation. This problem also gave the NRC a chance to ratchet in more “safety” changes which added additional costs.

  9. This appears to be a materials problem in the impeller. Conventional pumps with mechanical seals have impellers. All centrifugal pumps have impellers. This does not appear to be a canned rotor technology problem. It could have happened to conventional pumps just as well.

  10. “Therefore the plant owners expect that the pumps will operate reliably for the full 60-year lifetime of the reactor plant that they have purchased. ”

    Rod, the pumps appear small enough to be moved in through the equipment hatches. So you could bring in a new pump and weld it on (bolt off and cut/burn the old one).

    Parts of the canned rotor pump should be easier to replace. Basically, the stator and support systems are outside the primary loop. The way I understand it, there is a wet-stator design, but the stator wetting is supplied by a separate cold cooling-water or purge flow. So you could replace those components without opening the primary loop. To replace the can or impeller you need to open the primary loop, but that’s just some burning/cutting of a relatively thin metal layer. It doesn’t look terribly difficult to me compared to replacing the steam generator itself.

    1. “To replace the can or impeller you need to open the primary loop but its just some burning/cutting of a relatively thin metal layer. It doesn’t look terribly difficult to me compared to replacing the steam generator itself.”

      If nobody can build a QA-qualified 60-year impeller replacement part, it looks “terribly difficult” to me, and your point is moot without that part. These are FOAK units; this is one problem, large-module QA is another problem, the recent construction error another. How many more? Especially if they ever finish and try to run it, Rod’s concern is real. The history of cost and schedule for every nuke plant ever built is beginning to repeat, and this design was supposed to be the solution.
      If you are a utility watching and waiting in the wings, and you decide “nope” on a new big FOAK design, where do you go? An SMR? SMRs are “no go” without digital, and the NRC hates digital. The NRC will drive the SMR certification cost alone to 3X the price of a completed 1980s unit. The NRC is a rudderless boat without vision, indirectly steered only by requests for new design certifications and overreacting to external, non-applicable events. And it was set up that way. Why is it still running with four commissioners?

      1. Folks,

        If what MJD wrote above is NOT true and correct, then why does the NRC put all its faith and trust in analytical and calculational software codes for reactor protection analysis, seismic analysis, fire loading analysis, etc ad nauseam, but makes things so onerously difficult when using analogous code for real time digital I&C systems? Following NQA-1, Part II, subpart 2.7 for analytical and calculational software is a piece of cake compared to the IEEEs endorsed in RGs for digital I&C:

        IEEE 730 on SQAPs
        IEEE 828 on SCMPs
        IEEE 829 on Software Test Docs
        IEEE 830 on SRSs
        IEEE 1012 on SVVPs
        IEEE 1016 on SDDs
        IEEE 1028 on Software Review and Audits
        IEEE 1063 on Software User Documentation
        IEEE 1074 on Software Life Cycle
        IEEE 1228 on Software Safety Plans
        IEEE 1233 on System Requirement Specs

        I could go on, but you get the idea. Now I am not saying that those aren’t good IEEEs or shouldn’t be followed. Quite the contrary. What I am saying is why does the NRC apply one set of rules for design and analysis software which it doesn’t even enforce for itself and then establishes a different set for real time systems which it enforces with a paranoia unequaled in any other industry UNLESS MJD is 100% correct?

        1. “and then establishes a different set for real time systems which it enforces with a paranoia unequaled in any other industry ”

          You have asked a really, really good question. And I have scratched my head over this for awhile. Everybody knows the whole world is going digital, so why the reluctance? My conclusion is the paranoia is not “unequaled” in any other industry, and is not paranoia at all. It is real, and home grown. Don’t say the other industry name out loud, but they know if “they” can do it to “them”, it works the other way too. Fear of counter-cyber attack. Anybody have a better explanation?

          1. It’s not that hard to prevent cyber-attack.  Keep the digital systems stone-simple.  Write the code in assembly language and hand-verify it.  Blast it into PROM chips that have no auto-reprogramming circuitry.  Keep all data links to the outside world optically isolated and one-way.

            Keeping the protocols and algorithms small and simple aids verification, preventing bugs as well as hacking.
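            To illustrate the “stone-simple” style being advocated (in Python here purely for readability; a real protection channel would be hand-verified assembly or fixed logic, as Engineer-Poet says): a two-out-of-three trip voter is small enough to verify exhaustively over all of its inputs.

```python
# A 2-out-of-3 trip voter: the kind of small, fixed logic that can be
# verified exhaustively. Python is used only for illustration; the
# point is that all 8 input combinations can be checked by hand.

from itertools import product

def trip_2oo3(ch_a: bool, ch_b: bool, ch_c: bool) -> bool:
    """Trip when at least two of the three channels demand it."""
    return (ch_a and ch_b) or (ch_b and ch_c) or (ch_a and ch_c)

# Exhaustive verification over all 8 input combinations:
for a, b, c in product([False, True], repeat=3):
    assert trip_2oo3(a, b, c) == (a + b + c >= 2)
print("all 8 cases verified")
```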

          2. Engineer-Poet has a valid point, and it’s equally applicable to logic embedded in FPGAs, CPLDs, ASICs, etc. Keep the logic / code simple. If developing logic, then use the VHDL or Verilog on isolated development platforms that are scanned and certified virus / malware free. If developing code, then once again do the coding on isolated development platforms that are scanned and certified virus / malware free. Once software / configurable logic is integrated with the intended hardware platform, keep the platform completely isolated.

            But this is the problem. Licensees want data output from the platform, so they want communication out of the platform to feed things such as the Plant Information Network for Maintenance Rule Monitoring. And invariably within a platform some channel to channel communications exist for voting logic.

            And many times the Licensee wants the clock on the safety-related platform synchronized with the clock on the non-safety-related platform, so that requires communication. And of course everyone wants a GUI Windows feel to the displays (Lord forgive me the heresy – whatever happened to the old VAX VMS DEC days?), so that brings a whole host of other problems.

            The days of using an isolated Foxboro 761 or Moore 351 digital controller are gone. Licensees want an integrated platform. And of course the software has to be classified into 1 of 4 Software Integrity Levels per IEEE 1012 as endorsed in RG-1.168 to determine the level of V&V rigor to be applied per Table 2 in that standard.

            And Cyber Security classification (of course) uses an entirely different set of definitions for the 5 security defensive zones (4 to 0) in RG-5.71 (which the NRC doesn’t define but they will be happy to tell you if your definition is wrong). And these defensive levels are supposed to tell you the CyS rigor to be applied throughout the life cycle (but there isn’t any table analogous to Table 2 in IEEE 1012). Sorry, folks, it isn’t as simple as Engineer-Poet has wisely recommended it ought to be. 🙁

          3. Interestingly, you can get non-reprogrammable digital controllers on the market. They are basically one-time programmable: you write the program and then burn a fuse.

          4. Folks,

            Forgive the instructor within me and be grateful it isn’t Catholic theology in Latin. I am sure Engineer-Poet and some others here already know all this, but for those who don’t, here goes my brief dissertation on different kinds of programmable chips – correct me where I err:

            ROM or Read Only Memory Chips

            Similar to RAM chips, ROM chips contain a grid of columns and rows. But where the columns and rows intersect, ROM chips are fundamentally different from RAM chips. While RAM uses transistors to turn on or off access to a capacitor at each intersection, ROM uses a diode to connect the lines if the value is a “1.” If the value is “0,” then the lines are not connected at all. ROMs are programmed at the substrate level once and never reprogrammed. The old late-1980s Dynavision x-ray machines used in plant security at various places (airports, nuclear power plants, etc.) were microprocessor based and programmed with ROM chips.

            PROM or Programmable Read Only Memory Chips

            Creating ROM chips totally from scratch is time-consuming and very expensive in small quantities. For this reason, mainly, developers created a type of ROM known as programmable read-only memory (PROM). Blank PROM chips can be bought inexpensively and coded by anyone with a programmer. These chips are essentially fusible link arrays programmed by a chip burner. An intact fusible link between word and data lines represents a logic 1. A “burned out” link represents a logic 0. Like ROM chips, these are programmed only once. The fusible link array cannot be reconfigured. The old early-1990s Lovejoy feedpump turbine digital speed control circuit boards used PROM chips.
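            The one-shot nature of a fusible-link array can be modeled in a few lines (a toy model of the physics just described, not any real programmer interface):

```python
class FusibleLinkPROM:
    """Toy model of a fusible-link PROM: every link starts intact (logic 1),
    and programming can only burn links to 0, never restore them."""

    def __init__(self, size: int):
        self.bits = [1] * size  # blank chip: all links intact

    def burn(self, address: int) -> None:
        """Burn out one fusible link (irreversible by construction)."""
        self.bits[address] = 0

    def read(self, address: int) -> int:
        return self.bits[address]

prom = FusibleLinkPROM(8)
prom.burn(3)              # program a 0 at address 3
assert prom.read(3) == 0
assert prom.read(0) == 1  # unburned links still read 1
# There is deliberately no "unburn" method: the physical link is gone.
```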

            EPROM or Erasable Programmable Read Only Memory Chips
            UVPROM or Ultraviolet Programmable Read Only Memory Chips

            Working with ROMs and PROMs can be a wasteful business. Even though they are inexpensive per chip, the cost can add up over time. Erasable Programmable Read-Only Memory (EPROM) addresses this issue in that EPROM chips can be rewritten many times. These chips have a clear glass or plastic window through which UV can be applied to erase the memory. Care must be exercised, however, to ensure that a chip with an uncovered window is NOT inadvertently exposed to UV. There have been examples in the nuclear industry where flash camera use has resulted in erasure of the software within unprotected UVPROM chips, with resultant equipment shutdown. Most mid-1990s fire protection microprocessors and HVAC controllers used UVPROMs, and if memory serves me rightly, so did the Sorrento Electronics RM80 radiation monitors (I can’t remember that far back any longer).


            EEPROM or Electrically Erasable Programmable Read Only Memory Chips

            Electrically Erasable Programmable Read-Only Memory (EEPROM) chips use microtransistors as UVPROMs do. However, they are electrically erasable (as the name implies) instead of being erasable by UV light. These account for the majority of programmable chips in use today. Most everything is electrically reprogrammable.

            Logic Gate Chips

            Logic Gate chips use Boolean algebra logic gate arrays that are externally configured by a desktop software program, but the chips themselves are without software. An external computer is used to configure the AND, OR, NAND, NOR and other Boolean logic gates within the chip for a specific application. Usually a “hardware” language such as VHDL (VHSIC Hardware Description Language, where VHSIC means Very High Speed Integrated Circuit) or Verilog (there are IEEE standards providing guidance on these “hardware” languages) is used to configure the logic in the chip. In this manner the chip can be treated as hardware while the software configuring the chip resides externally. However, the US NRC regards these not as mere hardware but as software and subject to the requirements of BTP 7-14, RG-1.152, and RG-1.168 through 1.173. The NRC intended to issue a Regulatory Issue Summary on this in 2014, but the draft still has not been approved (I provided a link to it somewhere in this comment stream). A few such chips involved include the following – see NUREG/CR-7006 (just Google it) for more information:

            FPGA: Field Programmable Gate Arrays
            ASIC: Application-Specific Integrated Circuit
            PLD: Programmable Logic Device
            CPLD: Complex Programmable Logic Device
            PAL: Programmable Array Logic
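            As a software analogy for why regulators treat the configuration as software: the chip’s behavior is fixed entirely by an externally supplied description, while the silicon just evaluates it. A toy gate-array evaluator (hypothetical names, for illustration only):

```python
# The gate configuration is pure data; the evaluator is the fixed "hardware".
# Changing the netlist changes the device's behavior without touching the
# evaluator, which is why the configuration itself gets treated as software.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

def evaluate(netlist, inputs):
    """Evaluate a list of (output_name, gate_type, input1, input2) in order."""
    signals = dict(inputs)
    for out, gate, in1, in2 in netlist:
        signals[out] = GATES[gate](signals[in1], signals[in2])
    return signals

# A trivial 2-out-of-2 trip: actuate only when both channels demand it.
netlist = [("trip", "AND", "ch_a", "ch_b")]
assert evaluate(netlist, {"ch_a": 1, "ch_b": 1})["trip"] == 1
assert evaluate(netlist, {"ch_a": 1, "ch_b": 0})["trip"] == 0
```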

        2. You’ve basically made the point. Endless norms and codes being endlessly enforced to the letter. This does not make things safer; it can make things outright dangerous, as people think they have code-compliant seawalls and then find they aren’t tall enough (just a hypothetical example, of course).

          We need to get away from this paper safety and into a safety by design (inherent and passive safety) accompanied by a more functional regulatory and code approach.

      2. QA is part of the problem. There’s this silly idea that without nuclear levels of QA a component is no good. That’s just nonsense. I have a very reliable pump in my home boiler system, and it is not nuclear-grade, I can say with confidence.

        The issue is not tech. It is business and regulations.

        1. Does your boiler pump contain a fluid pressurized in excess of 2000 PSI at a temperature in excess of 600 F? Is the fluid radioactive? Does it move tens of thousands of cubic meters of fluid/hour?

          1. “Does your boiler pump contain a fluid pressurized in excess of 2000 PSI at a temperature in excess of 600 F? Is the fluid radioactive? Does it move tens of thousands of cubic meters of fluid/hour?”

            In the earlier days of PWRs they built 2 loop plants that had the same pressure, temperature, radioactivity ratings and actually a significantly larger m3/hour rating.

            In fact, the AP1000 primary pumps are not at all the biggest in the world. 2 loop plants had primary pumps at least 30% bigger in flow. They were much more complicated too with a lot of support systems such as whole separate pumps needed for seal injection. Those darn mechanical seals. Yet these pumps were fabricated just fine. There were issues with fabrication but they were resolved and they moved on rather than making an endless fuss about it.

            What has changed for the worse is not the technology. The AP1000 canned pump is smaller and simpler than the shaft sealed pumps of 2 loop plants. It is the business and regulatory environment that has changed drastically.

            1. @Cyril R

              First a clarification question – which 2 loop plants had primary pumps that were 30% larger in flow than the AP1000s?

              As far as I know, both B&W and CE built 2 loop plants with 2 pumps in each loop. Those reactors produced slightly less power than the rated power of the AP1000 (which is ~ 1120 MWe). Westinghouse built 2, 3 and 4 loop plants, each with one RCP per loop. Their 2 loop plants had considerably lower ratings than the AP1000, their larger output plants used either 3 or 4 loops.

              The claim that they are the largest canned motor pumps in the world is not mine, it is CW’s.

              I’m not sure that you are completely “getting” my primary point here.

              There is no fundamental reason why the issues associated with the RCPs for the AP1000 cannot be resolved, given sufficient time and budget. I am sure there are talented, dedicated teams working diligently on producing reliable pumps.

              My point is that these pumps are vital to the operation of a very large investment, yet they have apparently been inadequately tested and are experiencing early failures. There are no alternative suppliers who can provide a product that can perform the same function in the same space allowed for the CW RCPs. The first repair required the pumps to be shipped from China to western PA and back and they are still not right.

              This is a concern because there is so much riding on the success of these pumps.

          2. Hi Rod, here’s a document with many specs of the AP1000, it has pump data in table 1.3-1


            I do not contest they are the largest canned pumps in the world. I just don’t see this as particularly relevant if you are experiencing sub-component materials issues such as impeller failure. Impellers for canned pumps are nothing exotic and can be supplied by others.

            The point I want to make is the problem is with the crazy nuclear quality assurance. Many suppliers can make large impellers, as they are used in all steam plants in the world. That we do not allow this because of “quality assurance” leading to a monopoly of a few suppliers is a fabricated problem that should never exist.

            Nuclear plants fail badly because of design flaws, not failures in QA. The choice is between letting go of this crazy nuclear QA that no one can meet and instead broadening the supplier base, or letting the nuclear industry wither and die slowly in the West.

            1. @Cyril R

              You will get no argument from me on your contention that the notion of making everything special for nuclear raises both costs and risks. Suppliers that can meet NQA-1 standards are not “lucky” or only good at paperwork, they have chosen to accept the burden imposed by the requirements and have built and maintained the staff that can keep the program alive.

              You wrote: That we do not allow this because of “quality assurance” leading to a monopoly of a few suppliers is a fabricated problem that should never exist.

              Agreed. My question is “Who fabricated the problem?” Was it just empire-building government employees seeking to create secure careers? (I’ll grant that some of the government employees I’ve worked with fell into that personality type.) Were the natural bureaucrats enabled politically by suppliers who wanted to keep out competitors AND by competitors from other energy industries who recognized that adding regulatory burdens onto nuclear energy projects was a good way to keep down the competition from energy-dense, ultra-low-emission actinide fuel sources?

              Many of the regulatory burdens that were layered on over the years were driven by recognition of real problems that needed to be addressed; there are many benefits associated with good quality assurance programs that really do ensure that suppliers use the correct materials, use the correct welding techniques, adequately test an appropriate number of parts to detect manufacturing defects, appropriately review engineering designs down to the last detail, review calculations, review methods, build reliable code, provide sufficient redundancy, etc.

        2. I totally agree with you on a component functional level. Not sure I can buy “silly.” Nuclear functionality is not just a perception problem; it is in fact a real issue considering the potential consequences of failure, like it or not. Rickover’s philosophy worked for that program, and was totally different from the NRC structure. But it was military. His NR program was the executive, legislative, and judicial branches, plus judge, jury, and executioner. He knew his design was good (safe) because he had the smartest people in the room, and all the national labs behind his design. So he knew his “design spec” was good. Then he understood that to ensure the “as built” met the spec, his answer was bullet-proof QA. It worked well for him and still does. But one other difference… he would not tolerate a generic anti-nuke in his organization. If you “didn’t believe” he would get rid of you. If you said “this isn’t safe” he would say “convince me, and we will change it.”
          The concept of even a neutral, or worse an anti, NRC Commissioner is something I just can’t get my head around. Rickover’s organization was the designer, approver, builder, operator, regulator, etc., so that system can work without being accused of being “in bed with an unsafe idea.”
          However I can get my head around the idea that maybe another agenda wants to maintain the status quo in the current structure of civilian nuke power regulation.

          1. The Rickover approach works for building a few military-style installations. It does not work if you want to meet coal power expansion head on. National labs aren’t building coal plants, so any approach to nuclear builds that requires national-lab levels of experience and intelligence is bound to be a total niche, fail-to-scale operation. The national labs are for tech development. For a production environment we are totally missing the link to real people: Jim the concrete worker and Joe the welder need to be able to build these plants, and they will make mistakes, and we will need a system that can deal with this efficiently rather than just penalizing mistakes.

            I appreciate what you say. It is very true. Things were fine during the AEC time. The focus was on things that mattered and improved safety and functionality and there was close cooperation at all levels. Then came the NRC and it was all bad. There was a massive wedge that dislocated communication to a sick level, the focus shifted to things that don’t improve safety but retard the industry supplier market and innovation in general.

            1. @Cyril R

              Things were fine during the AEC time. The focus was on things that mattered and improved safety and functionality and there was close cooperation at all levels.

              Don’t be so sure. It’s never good to look at history through rose-colored glasses or to believe that things were always better somewhere else or at another time.

              The AEC era was not a good way to begin an industry. For the first 9 years, the AEC focused almost exclusively on building bigger or more sophisticated weapons. By law, they monopolized not only all available actinide material, but all nuclear knowledge and patents. They — along with the Joint Committee on Atomic Energy — chose who would be authorized clearance and which ideas would receive funding.

              They also — knowingly or unknowingly — stoked the fear campaign by stubbornly insisting on their right to explode hundreds of nuclear weapons in our shared atmosphere, purposely spreading scary sounding materials all over the world. AEC leaders — Lewis Strauss, John McCone — and scientific spokesmen — Edward Teller, Ernest Lawrence — tainted excellent scientific work measuring the health effects of low level radiation.

              They used the best available arguments disproving the “no safe dose” model in statements defending the bomb testing program in the face of strong public pressure from very credible and rightfully concerned citizens like Linus Pauling, Edward Lewis, Albert Schweitzer, Harrison Brown along with a strong political base of support that actively campaigned against the testing program with its obviously uncontained, indiscriminate spreading of radioactive materials. Using good arguments in that context may have forever tainted the arguments and provided cover for LNT defenders whose real motive is to retain one of their best weapons against the use of ANY technology that requires or produces ionizing radiation.

              Admittedly, the AEC improved under Glenn Seaborg’s leadership, but it still added huge cost and schedule burdens to nuclear energy programs by continuing to divert useful resources and materials to bomb programs. Though I am a proud Cold Warrior, I can understand the antinuclear animosity shown by those who were raised in abject fear of sudden annihilation.

          2. Rod,

            I don’t have any rose colored glasses on. I just look at the facts and ask questions.

            During the AEC era plants were built for $50-200/kWe. Cheap. Then came the NRC era and nothing got built. Some designs like the ABWR were licensed, but they weren’t built. Older designs were struggling with NRC rules and the business environment and couldn’t build plants even for $2000/kWe, 10x the money for which plants of very similar engineering and materials quantities were built in the AEC era. Now a licensed design is being built, with advanced passive safety features and lower per-MWe materials quantities than all previous designs. But they can’t build it even for $4000/kWe, 20x or more the cost of the more complicated, higher-material-quantity designs of the AEC era.

            My job is to analyse data, look at empirical events and come to some workable conclusion. It’s clear that something has changed for the worse, and it isn’t technology. What is left is the business and regulatory environment. That is where we should be looking for answers and for change. If we ignore this then we will just accept a slow poison death while we try to build new passive plants in an old environment that can’t get anything done.

            1. @Cyril R

              There is no doubt that things were somewhat better before the AEC was broken up and the NRC took over its regulatory mission.

              The grass may have been a bit greener under AEC domination, but there were still lots of bare spots and dandelions. I’m all for dramatic change, but believe we need to do better than aiming to return to the 1950s/1960s.

              I’d add a caveat to the following: “Some designs like ABWR were licensed, but they weren’t built” — in the US.

  11. “NuScale, for example, has no worries about finding a reliable design or a reliable supplier for reactor coolant pumps. Its natural circulation design does not include any pumps in the first place.”

    Wouldn’t nuclear submarine design be in the forefront of this since it eliminates pumps for making noise?

    James Greenidge
    Queens NY

    1. You still need pumps for the feedwater system. And since NuScale has tiny turbines, it needs an enormously larger number of feedwater pumps, and with that, more opportunity for flawed impellers to plague projects. NuScale has more of everything; as a result it has more opportunity for issues everywhere.

      1. @Cyril R

        More of everything makes each one less important. More smaller pumps means more options for suppliers not only of complete pumps, but potentially multiple suppliers of individual parts in each pump.

        You mention that you have a reliable pump in your home boiler system. Care to make a guess about how many of those reliable pumps have been manufactured? I’d bet the design is pretty well wrung out and refined over several iterations.

        1. The reason why the pump in my home boiler is cheap and effective is because it is produced in a healthy industrial market with many suppliers and without mindless paperwork quality control disease that has poisoned the nuclear industry to near death.

      2. Would the feedwater pumps need as stringent a qualification or could commercial grade equipment be procured and somehow blessed through a dedication review?

    2. Nope. The NuScale units are too tall for submarines. And too dependent on being level.

      1. I’m trying to compare the NuScale core to the LWBR core used at Shippingport.  The Shippingport files don’t list overall dimensions for anything but fuel-rod height (104 inches).

        An LWBR core in a NuScale would probably be able to run 5+ years without changing it.

        1. Buy the book “Shippingport Pressurized Water Reactor”, Library of Congress Catalog Card Number 58-12595, first printing Sept ’58. It will give you more design numbers than you can absorb. You can also see where the roots of our current regulatory (and PWR design) structure started, as those project folks eventually branched into other early organizations. If you have a specific question that the book may answer, ask Rod to hook us up off thread.

          1. I have the specs on the LWBR core, I just need to convert the rod center-to-center dimensions to a rough linear measurement of the sides of the hexagon (which is the radius of the circumscribed circle).  It’s not hard, just tedious.

            You can get my contact email at The Ergosphere, ergosphere.blogspot.com
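            For what it’s worth, the conversion is simple enough to script. Assuming a regular hexagonal lattice with n rods along each edge and pitch p, each edge spans (n − 1) pitches, and for a regular hexagon the circumradius equals the side length (the pitch and rod count below are placeholders, not actual LWBR figures):

```python
import math

def hex_core_dimensions(pitch_cm: float, rods_per_edge: int):
    """Rough envelope of a regular hexagonal rod lattice."""
    side = (rods_per_edge - 1) * pitch_cm   # (n - 1) pitches per edge
    circumradius = side                     # property of a regular hexagon
    across_flats = side * math.sqrt(3)      # inscribed-circle diameter
    return side, circumradius, across_flats

# Placeholder values for illustration only (not the actual LWBR lattice):
side, radius, flats = hex_core_dimensions(pitch_cm=1.4, rods_per_edge=50)
```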

    3. Concerning nuclear submarine design, see the Wikipedia page “S8G Reactor.”

  12. @eino: “You’d think they would have built enough boiler feed pumps over the years that they would have been able to properly assess the performance of these RCPs.”

    There is no real similarity between boiler feed pumps and RCPs. The RCPs make about 6 times the flow rate but 1/6 the head compared to a BFP. BFPs are multistage (like 6 or 7 stages), the RCPs are single stage pumps.
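    A quick sanity check on that comparison: hydraulic power scales as flow times head, so 6x the flow at 1/6 the head lands in roughly the same power class; the machines differ in geometry (single-stage high-flow vs. multistage high-head), not in power. A sketch using the standard relation P = ρ·g·Q·H/η, with round illustrative numbers rather than actual pump specs:

```python
def hydraulic_power_kw(flow_m3_per_h: float, head_m: float,
                       density_kg_m3: float = 1000.0,
                       efficiency: float = 0.85) -> float:
    """Shaft power (kW) from P = rho * g * Q * H / eta,
    with Q converted from m3/h to m3/s."""
    g = 9.81  # m/s^2
    q = flow_m3_per_h / 3600.0
    return density_kg_m3 * g * q * head_m / efficiency / 1000.0

# Round illustrative numbers: an RCP-like pump (high flow, low head) and a
# BFP-like pump (1/6 the flow, 6x the head) need the same shaft power.
rcp_like = hydraulic_power_kw(flow_m3_per_h=18000, head_m=100)
bfp_like = hydraulic_power_kw(flow_m3_per_h=3000, head_m=600)
assert abs(rcp_like - bfp_like) < 1e-6
```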

    1. Still, it is a good point. The largest boiler feed pumps I know of are 70,000 HP or more, compared to the AP1000 pump’s 7300 HP. Pressure for the modern boiler pumps is higher too, 300 bar vs 155 bar. RPM is higher too. Technologically the boiler pumps should be a lot harder, yet we build GWe coal plants with GWe-class turbines all the time.

      1. @Cyril R

        Technologically the boiler pumps should be a lot harder yet we build GWe coal plants with GWe class turbines all the time.

        Who supplies the pumps for those power plants?

        What is their annual production rate and primary markets during the US war on coal? I’m really quite curious about alternative suppliers.

        1. There are a couple of big names in the business, Ingersoll-Dresser used to be a big name. Here’s a document on the boiler feedpumps of the utility AEP.


          There are many more names that have the know-how, production capacity and experience to back it up. The real problem is that many suppliers aren’t allowed into the elite since they don’t have the N-stamp. What happens when you produce an artificial monopoly that locks out many major names and their experience and leaves only a few brands that just happen to be good at paperwork?

    2. @Gmax137:

      I’ll take you at your word as I am not a pump guy. I thought the pressure for both would be quite high, but I guess the Steam Generator Feed Pumps would be a more equivalent comparison.

      Although, it does make me wonder a bit whether pumps for molten salt units could be expected to have a lot of teething pains.

  13. Rod’s skeptical-engineer look at the wordings of the news items made me go and look at a similar announcement regarding the Flamanville EPR:

    EDF said the delayed completion of the Flamanville EPR results from “difficulties encountered” by supplier Areva in the delivery of certain pieces of equipment and the “implementation of the regulation on equipment under nuclear pressure (ESPN).”

    According to EDF, Areva has had problems delivering the reactor vessel head and the internal structures of the vessel. It said that Areva has also provided an update on the ongoing analysis performed on the steam generators’ welding defect, on qualification tests of the pressurizer valves, and on a detailed metallurgical analysis of the vessel head material.

    In its way, this sounds at least as problematic as the AP1000 difficulties, indicating as it does that there are process issues as well as materials issues.

  14. It is always possible to solve the issue by decreasing the maximum capacity of the pumps, and hence of the reactor.

    Though it may increase the cost of the produced electricity, it also has the benefit of less wear, hence higher overall reliability, etc.
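    The trade-off described here can be roughed out with the standard pump affinity laws (flow scales with speed, head with speed squared, power with speed cubed); this is a generic relation, not a statement about any particular RCP design:

```python
def scaled_pump(flow: float, head: float, power: float,
                speed_ratio: float) -> tuple:
    """Pump affinity laws for a speed change at fixed impeller diameter:
    Q ~ N, H ~ N^2, P ~ N^3."""
    return (flow * speed_ratio,
            head * speed_ratio ** 2,
            power * speed_ratio ** 3)

# Running at 90% speed gives ~90% flow for only ~73% of the power,
# which is the lower-output, lower-wear trade described above.
q, h, p = scaled_pump(flow=1.0, head=1.0, power=1.0, speed_ratio=0.9)
```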
