Toshiba Announces $6.3 B Writedown On A $229 M Construction Company Acquisition
Toshiba’s chairman, Shigenori Shiga, has accepted the responsibility for his company’s financial challenges and operating losses and resigned his position as chairman and as a board member. He remains an executive with the company.
After some early confusion related to a discussion about its financial statements with its auditors, Toshiba has announced that it will report a loss of 390 billion yen, or $3.4 billion, for the fiscal year that ends in March 2017.
Why Is Toshiba Losing Money?
The net loss is driven by the determination that Toshiba’s Westinghouse subsidiary, which includes its recently purchased CB&I Stone and Webster construction arm, is worth $6.3 billion less than the value at which it is currently carried on Toshiba’s books. Westinghouse purchased the portion of CB&I involved in nuclear plant construction in 2015 for $229 million plus the assumption of liabilities.
The new valuation of the subsidiary includes provisions for some of the already expended costs that are under litigation between Westinghouse and the former owners at CB&I. It also provides for future costs related to the completion of the four Westinghouse AP1000 projects at Plant Vogtle and V.C. Summer.
By contract, Toshiba is committed to completing the projects at a cost to customers that has already been established. It is responsible for covering any additional expenditures that will be required in order to complete the projects.
Loss Larger Than Early Warnings Indicated
Earlier this month, Toshiba had issued a warning that it would be reporting a large writedown, but that warning provided an estimated range of values for the loss that topped out at $6.0 billion.
The fact that the reported loss was another $300 million larger than that might account for the initial indications earlier today that Toshiba might be seeking to delay its promised financial statement by another month. Traders were nervous when it appeared that the company was going to request a reporting delay; share prices fell by 9% early in the day in reaction to that possibility.
What Will Happen To Danny Roderick?
Reports indicate that Danny Roderick, Westinghouse’s popular CEO, is being relieved of the additional responsibilities that he recently assumed as the head of Toshiba’s energy unit. Those reports indicate he is being reassigned to focus on improving Westinghouse’s prospects. Presumably, that will include efforts to seek the best possible outcome at Vogtle and Summer.
Though he is an admirable and inspirational leader for Westinghouse and the U.S. nuclear industry, it is impossible for him to avoid primary responsibility for the huge losses his company faces. Even so, there is probably no one else better equipped to dig out of the current hole.
The expanding magnitude of the potential losses on the Vogtle and Summer projects is related to the company’s decision to press forward and accept a risky settlement in a contentious legal battle over responsibility for project delays. From all reports, Westinghouse accepted the settlement in order to avoid any more costly and distracting time in court and move on to complete a much-delayed group of major construction projects.
Unfortunately for Toshiba and its shareholders, Westinghouse’s settlement with its former partner and customer put the cost burden squarely on Toshiba and doesn’t appear to allow it to mitigate the damage by sharing responsibility or passing the added costs to its customers.
Construction Industry Neophytes
Neither Westinghouse nor Toshiba appears to have the corporate culture that is traditionally required for success in very large, multinational, multi-jurisdictional regulated construction projects. There is a huge difference between designing and servicing nuclear reactors and building large concrete and steel structures.
I recently had a lengthy after-hours discussion with a man with significant experience in the construction business. He related an old story about a presentation given by the eponymous CEO of a successful company in that line of work. The last slide in the deck described the company’s primary core competency, pointing to it as the main explanation for its decades of dominance in the construction industry: “Mine the contract.”
Westinghouse and Toshiba have apparently decided to fall on the sharp end of a disadvantageous contract instead of mining its provisions to their advantage.
My conversation partner also expressed his complete confidence that the projects would be completed. He rested much of his argument on his belief that the Japanese culture would ensure that Toshiba honored its commitment to cover all excess cost overruns. I’m not as certain of the projects’ prospects. Even the largest companies have a limit to their ability to refuse to admit defeat if costs get too far out of control.
Note: A version of the above article was first published at Forbes.com under the headline Toshiba Announces $6.3 B Writedown On A $229 M Construction Company Acquisition
Now, I am aware there are other factors involved here that contributed to the losses at V.C. Summer and Vogtle. I have traveled to 20 different nuclear power plants; I have done outage work and installs of overhead cranes and refueling equipment. I spent about 3-4 months on multiple tours at Watts Bar 2 doing installs. In my opinion, and I know this may get some heat, unions are a huge problem at these job sites.
When we were at Watts Bar, we shared a trailer with pipefitters; there were 10-15 at any given time. They bragged that they had been sitting in the trailer for a year and had gone out into the plant maybe 5-10 times as they waited for work. During our time at Watts Bar, they fired the first firm and laid off hundreds of workers due to cost overruns, again, in my opinion, partly due to union labor. We were non-union, and while doing an install on the turbine crane there were jobs that, as non-union workers, we were not allowed to do. We finished our work ahead of schedule, and after sitting and waiting and trying to find work for a week, and some heated discussion between union management and our management, our management pulled us off the job and sent us home for an entire month so the union workers could catch up.
Again, I am not blaming all the issues on unions or saying all union workers use the system to do as little as they can. But there is no doubt in my mind that they are a problem in new nuclear construction and part of the reason why so many existing sites are having financial difficulties now.
You should really get your facts straight before making such an ignorant comment.
V.C. Summer is a non-union project. Vogtle is a union project. Both sites are approximately the same percent complete. The caveat is that the majority of the modules came from the same vendor and needed extensive re-work when they arrived at these sites. Oh, by the way…it was a non-union vendor.
Just because you don’t agree with it doesn’t mean it’s “ignorant.” Instead of getting all huffy and puffy because I stated my OPINION (you should probably google the meaning of that word) about the problems with unions in nuclear, not just at V.C. Summer and Vogtle, you should take a deep breath and re-read the entire comment. I was very clear that unions are NOT the only problem and that there are many other issues at hand here. I was very clear that I was not just referring to V.C. Summer and Vogtle. From my experience (again, that’s 20 nuclear power plants), unions are an issue.
In not too many other jobs can workers get away with one guy working and five guys watching. If you have spent any time in a nuclear power plant during installs, refueling outages, or day-to-day operation, you would understand the issues plants are experiencing. Again, before you get all huffy and puffy: there are other issues at existing and new plants, and unions are NOT the number one issue, but they are part of the problem.
Unfortunately, if a person brings up the issue with unions, as I expected here, the union goons come out swinging.
Mark Twain said it best.
“Never argue with stupid people, they will drag you down to their level and then beat you with experience.”
You’re the winner!
God bless you brother, you have one heck of a resume.
I suspect that the difficulty here is that like all useful tools, people can find ways to abuse unions. Does that mean we should discard or denigrate this useful tool? No. But it does mean we need to examine whether unions are being misused and if so, figure out how to fix their use while keeping the very real benefits they provide.
The foundation of unions is collective bargaining. Management has much more power than labor, because labor is diffuse and management is concentrated. Most proponents of free markets in labor ignore this salient fact. Unions are a method of putting the balance of power on a more even keel, but like all tools they can be abused. One should also acknowledge that like all tools they can be demonized and blamed even if they aren’t the cause of the perceived problem.
In this particular case, a bunch of guys sitting idle most of the year sounds like a problem, but are you certain that the union is the root cause?
@NukGuy79
I disagree with your opinion that unions should carry most of the blame for cost and schedule issues.
As an officer in the Navy, I never had a unionized labor force. However, I occasionally had a large number of sailors waiting to work because of scheduling and sequence issues, especially when the job wasn’t routine.
When managers and workers do not know exactly when certain tasks are going to be performed, waiting in a staging area is sometimes the most efficient way to spend time. Preventing visible “time wasting” by getting involved in a different task in a hard to reach location might result in additional delays in a critical path job.
I sometimes had trouble explaining this concept to Type A bosses who walked around and saw what they thought were just idle hands.
I’m trying to get my head around the timeline here. When was this cost established? Was it (a) before the NRC-forced redesigns, (b) after the redesigns but before the highly expensive consequences of them were realized by CB&I, or (c) after these were realized but before subsequent construction problems further ballooned the price?
From what Rod has previously written I get the impression that the answer is (b), but it would be informative to have some numbers on the expected/actual construction costs at the various stages.
Thank you Rod for all your work, btw.
The contract was re-negotiated to a fixed-price in late 2015. This was after the NRC required design changes and construction had been underway for several years.
I don’t believe that the Chinese projects were subject to the aircraft impact rule (I could be wrong about this). Nor do they use union labor. Yet the Chinese projects are also several years behind schedule.
If union pipe fitters walk the site and come back without work, is it really their fault? To be fair, I have heard that there is not the most efficient use of labor at the site I am familiar with.
Nevertheless, perhaps much of the blame could be placed on a factor the US and Chinese units have in common.
Rod, I read your resume, but I strongly disagree with your opinion that Danny Roderick is the one to fix the mess. You really haven’t researched your facts very well. The industry needs these projects to succeed, and the completion of both is in real jeopardy. This “deal” was negotiated by Mr. Roderick; I’m sorry, but he is not the one to fix it.
@Bubba Humphries
You are correct in stating that I have not done much research on the specific portion of my article mentioning the fate of Danny Roderick.
I do not have any contacts or sources that have shared any internal information and can only go on information that is publicly available. There isn’t any doubt that part of the problem here is a somewhat poorly negotiated deal, but the overall project has been battered by a number of forces that the deal makers could not have legitimately foreseen.
Do you have a suggestion or suggestions for someone who is more capable of pushing these projects through to the finish line?
In these times, when unions are probably at the weakest point they have been in the past 100 years, can they really be the cause of these projects’ dilemma? Isn’t construction what pipefitters and other crafts do for a living? Aren’t there a lot of other jobs where the project is completed on time and within budget?
Most of the people in the crafts that I have met take pride in their work. If properly motivated to show what they can do, they will excel.
Seems like there was some poor management to have highly skilled craftsmen sitting around collecting money. If I were one of those pipefitters, I’d probably be smiling right alongside of them.
To blame unions for a troubled project occurring in the South, a land resplendent with anti-union sentiment where “right to work” laws are rampant, seems somewhat ridiculous. Perhaps, lurking in the background of this tale is a story of greedy management. That may be a better fit for modern times.
If the components, designs or drawings are not available to the construction crews, not even Chuck Norris could build an AP1000.
I’m leaning to your viewpoint on the issue of craftsmen sitting around with no work to do being a management issue. Seems to me if you don’t have your workflows planned properly you’re going to have either a lot of idle time or too much work for too few people. If it was a contract let with a guarantee of on-site hours then that is also a management issue. Don’t negotiate those kinds of deals. The US auto industry almost self-destructed by caving in to the demand for the “Jobs Bank” concept. The unions agreed to do away with those and it was the right thing to do. Likewise, if you’ve got workers on-site with nothing to do, somebody screwed up on the planning side, and that means the on-site managers.
That said, the unions are not entirely blameless on the troubles facing the broader industry. I’ve experienced a lot of negative blowback doing consulting work at plants and having unionized I&C techs almost run me out of my work area because they feared what I was doing was intruding on their territory. Other incidents, admittedly anecdotal, also make me wonder if there isn’t some underlying attitude towards on-site plant work that lends itself to abuse (featherbedding).
Listening to SCANA’s earnings call with investment analysts, I think the only way SCANA will pull the plug is if the South Carolina PSC tells them to.
I don’t think the Japanese government/banks will let Toshiba go under. So, I expect the project to go to completion.
One wild card is how the Chinese AP1000s perform during startup and initial operation. Any serious problems could appear soon enough to make the incremental completion costs and schedule uncertainties high enough to consider replacing the project. The RCPs come to mind.
I would not be surprised if, after this experience, no more domestic AP1000s are ever ordered. It’s hard to imagine how any public utility regulator would allow it. That would likely be the end of large nuclear in the US for decades. Of course, if the construction pace picks up significantly such that the Vogtle/Summer units are completed close to or better than current expectations, validating the modular construction approach, there may be some grounds for optimism. The supply chain issue will also have to be fixed.
Perhaps the AP1000 experience will serve as a contrast for NuScale’s SMR which may have a prototype in an advanced state of completion by the time Vogtle/Summer are in service.
I have also been hearing some tales of people stringing out the work (and adding cost), because “there will be no more projects on the horizon”. There are stories of the exact same thing happening at the end of the first wave of nuclear construction.
Anyway, my impression is that the SMR approach would be less vulnerable to this. Much more of the overall construction is done at the SMR (assembly line) factory. In theory, at least, that (assembly line) factory will always have a steady stream of new reactors to make. Also, they will be doing the same thing, over and over. Perhaps some of it could even be mechanized (like the assembly lines for cars and other consumer products). This, as opposed to large amounts of on-site, craft labor.
Any truth to my impression?
@JamesEHopf
One of the economies of smaller units is the ability to maintain production by serving a more diverse set of customers. One of the huge challenges of supplying “utility scale” equipment is that the power generation business is led by people who all love to be the second one to order new technology. Once they believe something is proven and they hear about it from their buddies, they leap on the bandwagon. They will also leap off of the bandwagon in a similar manner.
That is a difficult pattern for any supplier.
Based on what I’ve read, I totally agree. The future is small modular reactors, closing the fuel cycle, molten salt, etc. These huge 3,000-acre builds of 1,200-megawatt units are over. There is too little support and too much risk to the traditional utility models to do them. As we are now seeing, it can quite literally bankrupt a utility/company. I remember the Entergy CNO stating in an all-hands meeting a few years ago that any utility building a new reactor is literally betting the entire company…well, guess what? We’re there.
I’m really, really hoping NuScale and Fluor pull this off, because I really think that’s the future.
Also hoping that NuScale will succeed, but worried that 50 MWe each is too small. I wonder if the idea of a small evacuated containment would not be better suited to a BWR design, which can achieve higher output with natural circulation and decay heat removal through isolation condensers.
@RRMeyer
What is the basis for your concern about the size being “too small?” Do you have any detailed costing information or are you using textbook scaling formulas?
Here is a thought experiment. If you calculate that you need 40,000 square feet of living space to accommodate a family, which would be cheaper: 10 identical homes on standard suburban lots, each with 4,000 square feet, or a 40,000-square-foot custom-built estate home on a block of property with the same footprint as the 10 homes?
Of those two choices, which do you imagine would provide the most flexibility, freedom and overall happiness for the occupants?
I do not think that economies of scale should be dismissed so easily. There is a reason why 4-engine aircraft are more expensive than twin-engine aircraft of the same size, and I cannot see a trend towards 12-engine aircraft.
While I appreciate that 1.2 to 1.6 GW is too big for many potential markets and utilities in terms of investment risk, 200-300 MW would be a nice size, especially when also providing district heating. I have the impression that the limiting factor for the NuScale design is the limitation of natural convection of the primary coolant at full power.
For a BWR type design natural recirculation and decay heat removal scales to 1.5 GW (ESBWR) so maybe 200 to 300 MW could fit into a NuScale sized module.
@RRMeyer
I’m sorry, but I’m still unconvinced by the level of detail you are offering. Can you point me to a source that provides cost-component-level details supporting your case that the total cost of ownership for a four-engine plane is higher than that of a two-engine plane with exactly the same passenger and load-carrying capacity?
I, for one, would feel much safer about flying very long distance trips in a four engine plane compared to a two engine plane. If I was a military planner, I would rather have 4 engine bombers over 2 engine bombers. Not only is there a better chance of passengers not having to swim sometime during the experience, but there is probably a higher chance of delivering to the desired destination as opposed to having to divert.
Investment risk of very large machines does not stop when they are finally constructed and running. It remains with them throughout their life and should be part of the cost analysis. How economical is a piece of production machinery if the entire investment can be lost with a single error or mechanical failure?
All I know is that the A340 was discontinued due to competition from more fuel-efficient twin jets. This was in spite of the obvious safety advantage and a fear-based advertising campaign to capitalise on it (“4 engines 4 long haul”). Engines have become very reliable, but it is likely that at some point a twin jet will run into trouble that could have been avoided with 4 engines.
The ability to swap a defunct module is indeed a good argument for NuScale.
@RRMeyer
Fuel efficiency for airplanes is far more important than nuclear fuel efficiency. It is possible to make a case that the overall operating cost for a twin engine plane is lower than that for a four engine model, but it is not because of a reduction in initial capital expense or even due to a reduction in the number of required operators or maintainers.
It’s all about fuel economy and the associated flexibility it gives to routes that might otherwise be unreachable in a single leg.
That’s a completely different argument compared to nuclear plants where the cost of fuel is a much lower portion of the overall operating costs.
One thing I am wondering about concerning SMRs is the O&M costs if a multi-unit plant is ever built. Will the NRC require the same number of operators per reactor? If so, the O&M costs could kill us. Same deal with license fees. There was some discussion about this some time ago but I can’t recall where it ended up.
@Wayne SW
Both issues have been addressed adequately to allow NuScale and its funding sources to proceed with their design certification application.
One of the big advantages of having modules that are fully functional power plants producing less than 100 MWe is that there is a break point in the Price-Anderson Act that treats them differently, so they already do not have to accept a per-reactor share of the fleet’s liability pool.
The NuScale design will have to go through the same Integrated System Validation process required of the AP1000. The ISV uses the simulator to evaluate the plant design, control room layout, procedures and operator training. NuScale’s proposed control room staffing would have to satisfy the NRC that it is safe.
Keep it simple and use automation effectively.
@FermiAged
Part of NuScale’s technical history is that it arose from an engineering school project to make use of the integrated system test facility that was used to support the DC for both AP600 and AP1000. The man in charge of the scaling effort in support of that test program and also responsible for running much of the testing program is none other than Professor Jose Reyes, the inventor of the NuScale Power Module and the CTO of NuScale Power.
http://www.nuscalepower.com/about-us/history
NuScale is going about things far more sensibly. Building a prototype at INEL, working with ONE customer at a time. Hopefully, both NuScale and the NRC have learned from the AP1000 experience. I also hope the AP1000 experience has not poisoned the well.
Oh, please. Look at the historical record: the 3-engine Boeing 727 at 184,800 lbs GVW vs. the 2-engine Boeing 787 at more than 500,000 lbs GVW. The fewer the engines, the less the expense of operation and maintenance. The 787 lifts off with more gross weight per engine than the heaviest variant of the 747. If big enough engines had been available at the time, the 747 would have been a triple or a twin itself.
EVERYTHING is about cost in the aircraft world. I’ve been watching this since flight training in the 80’s. There are few mysteries if you look at the salient factors.
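The per-engine arithmetic behind that claim can be sketched quickly. A minimal Python illustration follows; the maximum-takeoff-weight figures are approximate published values chosen for illustration, not authoritative data:

```python
# Rough per-engine gross weight comparison.
# MTOW figures (lbs) are approximate published values; treat them as
# illustrative assumptions rather than authoritative specifications.
aircraft = {
    "Boeing 727-200 (3 engines)": (209_500, 3),
    "Boeing 787-10 (2 engines)": (560_000, 2),
    "Boeing 747-8  (4 engines)": (987_000, 4),
}

for name, (mtow_lbs, engines) in aircraft.items():
    # Gross weight lifted per installed engine.
    print(f"{name}: {mtow_lbs / engines:,.0f} lbs per engine")
```

Run with these numbers, the twin lifts more weight per engine than the four-engine 747, which is the point of the comparison: bigger, fewer engines have won on operating cost.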
The analogy between aircraft engines and NPPs is totally misplaced. The biggest aircraft engines are small enough to be built on an assembly line and to be rigorously tested full scale before being certificated and sold to the market. NPPs should be as big as they can be and still meet those requirements.
From an engineering and ratepayer’s perspective, the AP1000 projects have been a disaster. These projects will not meet expectations in construction, testing, or commercial operation. They are poor flagships for nuclear power and the nuclear renaissance. Utilities should have waited for satisfactory completion of the China AP1000s before undertaking such a risky “first of a kind” and unproven project.
Westinghouse sold the AP1000 to Southern Company and Georgia Power based on performance claims. They will be held accountable to Southern Company per the Engineering, Procurement, and Construction contract. However, Southern Company and Georgia Power sold the AP1000 project to the Public Service Commission, legislators, and public (ratepayers). They must also be held accountable to the ratepayers for the project not meeting these expectations.
In my opinion, the smart thing to do at this point is to place the construction of domestic AP1000 projects (US) on hold and allow the limited Westinghouse resources that are currently available to focus on the growing list of issues plaguing the Chinese projects. Once the issues at Sanmen and Haiyang have been resolved sufficiently for the units to safely and efficiently complete an operating fuel cycle, including required maintenance, testing, and a refueling outage, then Westinghouse can incorporate the required changes and consider the AP1000 design final. When these changes have been satisfactorily incorporated into domestic construction documents, then US utilities can more reasonably assess the feasibility of completing the projects. The involved domestic utilities will amass significant financing charges while the projects are on hold, but it is better to lose a dime than a dollar.
Some of your criticisms of how the AP1000 was implemented are valid. We don’t know for sure how well they will operate, since none have entered operation. The AP1000 shares many similarities with the 3400 MWt CE design, so it is really evolutionary rather than revolutionary.
The major cause of the AP1000’s problems (IMHO) is that it has been an almost design-as-you-go process. There are still numerous design changes and License Amendment Requests in process. The ability to make field changes is very limited, which makes the design engineering output a frequent choke point. I’ve also heard that the AP1000 is a very tight configuration, making it difficult to get numerous people to work in the same area in parallel. Specific to the domestic problems has been the module fabrication at the Lake Charles facility. Fortunately, alternative fabricators have been found. But the other problems remain to some extent. Some of the component fabricators have had quality or production problems.
A major handicap for nuclear development is that licensing and financial constraints largely preclude the prototyping process that has benefited other engineering disciplines. The availability of temporary financial incentives encouraged a roll of the dice on the AP1000.
Your recommendation would probably delay Vogtle by another 2-3 years and would certainly kill the project.
There are obviously many in the anti-nuke pro-gas community who have tirelessly worked to kill the project from the start. I suspect our concerned GA ratepayer is one of them, no matter what this would cost the actual GA ratepayers.
@RRMeyer
My guess is different. I suspect our concerned GA ratepayer was a strong supporter of the project who became disillusioned as he learned more about the technical details compared to the sales presentations.
I’m curious what the “growing list of issues” affecting the Chinese plants is. Can someone elaborate?
The issues are Westinghouse proprietary and cannot legally be provided to individuals who have not signed the Engineering, Procurement, and Construction non-disclosure agreement. This prevents full disclosure and limits public knowledge of actual status and challenges. Also, think beyond construction and testing. Embedded operational issues will impact reliability, maintenance, and surveillance testing that could lead to early retirement, IF one of the AP1000s makes it to commercial operation.
Contrary to the perception of others, I am very pro-nuclear. I do believe in using gas as a bridging fuel until safe and efficient nuclear baseload can be brought on line.
I very much support SMRs, MSRs, and thorium. I prefer nuclear technologies such as TWR and LFTR that better utilize the energy potential of fuels and leave waste with significantly shorter radioactive half-lives.
I do not, however, support placing the burden of R&D or “design-as-you-go” on a specific utility’s ratepayers for technology that will benefit the country and the world. It should be shared costs. Our Nuclear Construction Cost Recovery tariff has GA ratepayers pick up the tab and provide Georgia Power healthy profits for undertaking the venture, no matter how long the project takes or how much it costs.
Speaking generalities, items of concern for the AP1000 are:
• Reactor Coolant Pumps and lower thrust bearings
• Mechanical Shim
• Cyber Security
• Inadvertent actuation of Passive Safety Features (Design Basis Accident LOCA)
• Testability
• Maintainability
• Reliability (number of forced outages)
• Outage performance (claimed to be 17 days based on poor engineering assumptions)
• Narrow margin between design Containment pressure and accident analysis (check UK articles for disclosure – and remember the Passive Safety Feature for core cooling is a LOCA)
The margin between design Containment pressure and accident analysis is an old Arnie Gundersen “concern.” That 0.3 psi margin is the difference between the minimum guaranteed containment design pressure and the sum of all the maximum uncertainties on peak containment pressure. He is engaging in fear-mongering, as he did at SONGS.
The mechanical shim issues were identified in the simulator and have been addressed through design changes.
We won’t know about the maintainability/reliability issues until an AP1000 is in operation. I know things are tight and working on the RCPs could be difficult and the RCPs have had a challenging developmental period.
Interesting. First stage of denial.
I think this will be my last post so you can enjoy the dream.
In addition to the nuclear instrument shadowing you believe has been properly addressed by a simulator that is not qualified as a Reference Plant Simulator and that has not been updated to reflect actual performance data from the Sanmen and Haiyang testing, I believe AP1000s will experience significant control rod spider failure and dropped rodlets. If I am wrong and the AP1000s are successful, I believe you will see Rod Control in manual with all rods out similar to legacy PWRs.
As far as Reactor Coolant Pumps go, they are inverted canned pumps. The motor, impeller, external heat exchanger, and all internal components are in contact with the reactor coolant. Once fuel is irradiated, these will be contaminated. They will not be repaired on site; instead, the faulted pump must be removed, replaced, and shipped to Waltz Mill. Since the pumps are inverted (seal-welded and bolted to the channel head of the Steam Generators), the weight of the pump/motor and the downward thrust from the impeller place significant force on the lower thrust bearing. Hence all the recent design changes to pass the vendor testing.
Considering the AP1000 Containment and all of the conservatism, does the Containment pressure accident analysis and maximum flood volume include the volume of a Steam Generator and potential additional feedwater until successful isolation? Things get tight when you put an AP1000 in an AP600 Containment.
Remember, “design-as-you-go”! I just wish it was not at our expense.
@Informed GA Ratepayer
The issues you mention with regard to mass distribution and forces were made even more difficult when accident analysis indicated that the system needed to coast down and provide coolant flow during the transition to natural circulation.
The chosen way to achieve that was to add a large flywheel to give the pump additional momentum.
Rod Adams says February 23, 2017 at 6:37 PM
You said: “The passive way to achieve that was to add a large flywheel to give the pump additional momentum.”
No… the passive way to do that is to take all the non-mechanistically-possible and mutually exclusive assumptions out of the safety analysis and run it again. IOW, analyze a real plant response transient. The only thing a safety analysis identifies is a computer-code plant transient result that won’t/can’t happen as analyzed. E.g., why is the post-reactor-trip decay heat assumed to be 120% of the ANS decay heat curve? In this day and age does nobody really know what the actual post-trip decay heat will be? It’s pretty hard for me to believe the actual error margin is 20%.
The problem is, in an historical context, ‘this is how we’ve always done it’ to ‘gain confidence’ that the computer analysis will bound an actual plant event. But in actual plants of this large scale, when every computer code input value is assumed to be at its extreme limit ALL AT THE SAME TIME (and some are mutually exclusive conditions), dictating a hardware over-design pushed to the limits of engineering and manufacturing knowledge, what has actually been added besides uncertainty and cost? So is the flywheel actually needed?
An easily understood example… In a PWR LOCA safety analysis computer run, the injection water (RWST) temperature is assumed to be at its maximum allowable limit (say 90 F) to minimize the effectiveness of the injection water for core cooling. This is defined as a ‘conservative’ assumption. But at the same time, when analyzing the containment back pressure from that same LOCA, the RWST temperature is assumed to be at its minimum allowable limit (say 40 F). This is also defined as a ‘conservative’ assumption, because the lower the containment back pressure from the Containment Spray System, the faster the core will empty on that LOCA. This mutually exclusive condition comes back to dictate hardware design requirements, because it ties directly to the analytically required response time to start and load the EDG to recover the empty core. All assuming a coincident Loss of Off-site Power… because ‘I said so.’ So can the RWST water temperature be at 90 F and 40 F at the same time? Yet it dictates the EDG response time!
I don’t have a problem doing Safety Analysis this way in the ’50s with FOAK reactor technology, small proposed plants, and no actual running plants for bench-marking assumptions… But it ain’t the ’50s anymore except for the way computer Safety Analysis is done.
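For a sense of scale on the 120%-of-ANS question above, here is a rough back-of-envelope sketch. It uses the simple Way–Wigner approximation, not the ANS-5.1 standard curve actually used in licensing analyses, and the one-year operating time is an assumed illustrative value, so treat the numbers as ballpark only.

```python
# Rough illustration of the magnitude of post-trip decay heat and the
# effect of the 120% licensing multiplier questioned above.
# Way-Wigner approximation, NOT the ANS-5.1 standard curve.

def decay_heat_fraction(t_s, t_op_s=3.15e7):
    """Fraction of pre-trip power appearing as decay heat t_s seconds
    after shutdown, for a core operated t_op_s seconds (default ~1 year)."""
    return 0.0622 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

for t in (1, 10, 60, 3600):
    nominal = decay_heat_fraction(t)
    licensing = 1.2 * nominal  # the "120% of ANS" assumption
    print(f"t = {t:>5d} s: nominal {nominal:.4f}, with 20% margin {licensing:.4f}")
```

The point the sketch makes is that decay heat is a few percent of full power within seconds of trip, so a blanket 20% multiplier on it is a substantial absolute amount of extra heat the hardware must be designed to remove.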
@mjd
As usual, you make an excellent point.
I’ve revised my comment to replace “passive” with “chosen.”
Rod, what’s your beef with the RCP flywheels? AFAIK, every PWR has them. It is a power-to-flow thing, during the first 5-10 seconds after loss of power to the pumps. The reactor trip takes a few seconds for the rods to drop in and reduce the core power.
@gmax137
I don’t have a beef with “RCP flywheels.”
I am pointing out that they add complications to an already complex engineering task of massively scaling up and adapting a canned coolant pump to a unique mounting configuration.
There are places where far smaller canned coolant pumps have operated for decades of extremely reliable service when mounted in a configuration where the motor is above the pump.
That fact should never have been used to support a design that requires pumps mounted upside down and weighing 91 tons.
“I believe you will see Rod Control in manual with all rods out similar to legacy PWRs.”
I agree. I think that feature was partially in response to a design philosophy intended to play nice with renewables. Once the climate change concerns are rightfully recognized as exaggerations, “dispatchable” renewables will go the way of disco shirts.
The RCPs are a question mark. Maintenance could be problematic.
Are you postulating both a LOCA and a Feedline or MSL break?
The PRS issue was largely a result of an ITAAC requirement that WEC had not closed out (ISV, discussed above). The simulator testing plan and acceptance criteria used to successfully get a Commission Approved Simulator were identical.
It is a wonder how Neil Armstrong was able to land the Lunar Module by practicing in a simulator that had no landing experience data to validate it.
@gmax137 says February 24, 2017 at 8:20 AM
“Rod, what’s your beef with the RCP flywheels? AFAIK, every PWR has them.”
The point is: are they needed? Also, please provide a plant-specific FSAR Accident Analysis reference showing: “It is a power-to-flow thing, during the first 5-10 seconds after loss of power to the pumps. The reactor trip takes a few seconds for the rods to drop in and reduce the core power.”
I doubt it, because the CRDMs are powered by non-essential power, just like the RCPs. So if power is lost to ALL RCPs, it is also lost to the CRDMs at exactly the same time, and all CRDs immediately unlatch, even before any auto RPS trip signal is generated.
The Acceptance Criteria for such an event is usually a DNBR ratio. But please provide to me the actual time when the rods hit the bottom so I can verify ‘a few seconds’…
And why the DH power will then be 120% of ANS value…
After even ‘assuming’ infinite irradiation (burn up) of the core to maximize fission product inventory… (magic core… never needs refuel, runs for infinity)…
And why the most reactive control rod ‘just happened to’ stick out (they are exercised frequently)…
And why the initial power was assumed to be 102% but nobody knew it (good grief in this day and age!)
And why the QPT and API were both at the maximum allowed Tech Spec limit at the same time, at event initiation but nobody did anything…
Etc., etc.
And on top of that physically impossible combination of assumptions, you still have to take a single Active or Passive equipment failure.
That’s the whole point… these Safety Analysis assumptions are dictating plant hardware design beyond what is even a remotely credible occurrence.
Because “We’ve always done it that way…”
I currently don’t care one way or another whether the current PWR fleet has flywheels in RCPs or not (besides, they provide an easy way to install reverse-rotation devices in idle RCPs, which is probably cheaper/easier than RCS loop check valves to prevent core bypass flow with an idle RCP). But the point is: are they really needed? Or are they needed strictly because of the way Safety Analysis is done?
And it is a fair question to ask if they present a ‘show stopper’ design and manufacturing challenge to the AP1000 RCP.
Hi mjd —
I don’t think the SARs are available in a cohesive, integrated location online anymore (since 9/11), but SAR updates and Tech Spec LARs are on the NRC ADAMS system. You have to poke around and hope to stumble on what you’re looking for.
Here’s one I found, see ML16257A130. The first set of figures is for the total loss of forced RCS flow. This shows core power, heat flux, flow, and DNBR. You can see how the reduction in power due to the reactor trip isn’t “complete” until around 4 seconds. The DNBR – which is in this context really a power to flow parameter – reaches its minimum at about 2.5 seconds. If the core flow reduction were more rapid, the minimum DNBR would occur sooner and would be lower. Unfortunately the figures don’t include the rod position vs. time.
ML15226A346 has a table showing the position vs. time, you can see the “original” curve (90% inserted at 3.0 seconds). This letter is all about extending the allowable time, because the measured times were too long.
The really interesting thing would be the reactivity vs. time but that isn’t shown here. It depends on the power shape, which can vary within bounds established in the Tech Specs. A bottom-peaked shape would delay the reactivity worth until the rods were in the bottom half of the core. Since the rods are driven in by gravity, they accelerate during the drop; so the time to be halfway in is much longer than the time to go from half to full in. In other words, most of the reactivity worth is done during the last part of the drop time. All of this goes to show that the reduction in core power on a scram is not instant, it really takes 3 to 4 seconds for the core power to come down. It takes even longer to approach the decay heat level.
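The gravity-drop timing argument above can be checked with a back-of-envelope calculation. The sketch below assumes ideal free fall over an assumed ~12 ft stroke (no buoyancy, friction, or hydraulic drag, which real CRDMs certainly experience, hence the much longer measured drop times cited above), so the split between the two halves of travel is the point, not the absolute times.

```python
# Under constant acceleration from rest, s = (1/2) * g * t^2, so the
# time to fall the first half of the stroke is 1/sqrt(2) ~ 71% of the
# total drop time; the second half takes only ~29% of it.  This is the
# kinematic basis for most of the rod worth arriving late in the drop.
import math

g = 9.81       # m/s^2
stroke = 3.66  # m; assumed ~12 ft active fuel height, illustrative only

t_full = math.sqrt(2 * stroke / g)        # time to fall the full stroke
t_half = math.sqrt(2 * (stroke / 2) / g)  # time to fall the first half

print(f"full drop:  {t_full:.2f} s")
print(f"first half: {t_half:.2f} s ({100 * t_half / t_full:.0f}% of total)")
```

The 71/29 split is independent of the stroke length and of g, which is why the qualitative conclusion survives even though drag stretches the real drop to the ~3 seconds in the ADAMS letter.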
Sorry it took me so long to get back to this, but I was boiling sap into maple syrup all weekend.
I agree with a lot of what you say, and with your overall point. But I think we need to be as factual as possible when discussing these things.
@gmax137 says February 27, 2017 at 7:09 AM
Thanks for the refs, but again it is easy to get fooled without the text narrative describing the assumed sequence of events. There is nothing ‘factual’ in the assumptions used in safety analysis; that’s the problem. The event can’t happen in the plant EXACTLY as analyzed, because there may be hundreds of assumed values all in the ‘worst’ direction ALL AT THE SAME TIME, and some are mutually exclusive. It’s the nature of Safety Analysis by computer code. I noticed the Rx power curve (in your ref) stayed at 100% for ~1.5 sec. Why? Is that the RPS trip delay time for Power/Flow? I’m an operator… so why are all my RCPs off? LOOP? If so, the rods unlatch and drop at the same time (non-safety-grade AC power is the latch/motive power for the CRDMs; only the RPS trip breaker control power is safety-grade DC. If you lose RCP motive power, you lose CRDM latch power at the same time: a ‘sneak Rx trip,’ with rods dropping into the core before the RPS even generates a Rx trip signal).
Your discussion on API peaking at the bottom, rod worth is good… I’ll give you an ‘A’ on Rx Theory. I’m an operator… I don’t sweat DNBR much at the core bottom (and neither do tech spec limits. That’s why original API Core Safety Limit curves would allow a -40% API at 50% power)… that’s where the water AKA Tc comes in and cools good-er than at the core top where +API limits get very restrictive (wink, wink).
If you want to see the AP1000 Safety Analysis for loss of RCPs it’s here ML11171A370. Not much on needing flywheels, eh?
Hi mjd –
This is at the bottom of a thread that’s getting older and further down the page every day. You and I are probably the only ones reading. I’d like to continue the discussion offline somehow. Maybe Rod can give you my email address.
No you’re not. I believe this blog is also searchable.
However, the thread gets auto-closed after a week or so regardless of activity, which puts something of a damper on duration and usefulness for hashing out issues.
@E-P
Actually, threads stay open three times as long as you guessed.
LOCA analyses are moving toward a statistical combination of uncertainties where inputs are sampled over their assumed ranges for each run.
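The sampled-inputs approach mentioned above can be sketched in a few lines. The "model" below is a made-up linear placeholder, not a real thermal-hydraulics code, and every coefficient and range in it is a hypothetical illustration (the RWST temperature range echoes the 40 F / 90 F example earlier in the thread).

```python
# Minimal sketch of statistically combining uncertainties: instead of
# setting every input to its worst-case bound simultaneously, sample
# each input over its assumed range and examine the distribution of the
# figure of merit.  All numbers here are invented for illustration.
import random

random.seed(0)

def toy_peak_clad_temp(rwst_temp_f, decay_heat_mult, power_frac):
    """Hypothetical stand-in for a safety-analysis code result (deg F)."""
    return (1500.0
            + 3.0 * (rwst_temp_f - 65.0)      # warmer injection water hurts
            + 800.0 * (decay_heat_mult - 1.0) # extra assumed decay heat hurts
            + 600.0 * (power_frac - 1.0))     # initial power uncertainty

results = sorted(
    toy_peak_clad_temp(
        random.uniform(40.0, 90.0),   # RWST temperature range
        random.uniform(1.0, 1.2),     # decay-heat multiplier range
        random.uniform(0.98, 1.02))   # initial power range
    for _ in range(1000))

worst_case = toy_peak_clad_temp(90.0, 1.2, 1.02)  # every bound at once

print(f"95th percentile: {results[int(0.95 * len(results))]:.0f} F")
print(f"all-worst-case:  {worst_case:.0f} F")
```

Because the toy model is monotone in each input, the all-worst-case stack-up bounds every sampled result, which is exactly the margin the statistical approach is meant to recover.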
I wonder if the practice of ridiculous conservatism in analysis inputs was started by the vendors as a way to sell back thermal margin. I worked at a utility that bought the reload analysis methodology from the fuel vendor. I remember using a reactivity coefficient from beginning of core life and a fuel gap conductivity from end of core life in the same analysis!
Picking “mutually exclusive” inputs (like mixing the BOC and EOC values) made a lot of sense when mainframe CPU time was $6000 per hour (that was about six months pay) and the entire set of licensing analyses had to be completed in a year or two. The NRC reviewers would have demanded “proof” that some consistent (burnup-wise) pair of values was “the worst” possible combination. What do you think your site VP would have said if we told him the operating license was being delayed while we ran another 10 cases to answer the RAI?
Conservatism was started by the ‘wise old men’ who originally developed the technology in the early years. Selling back margin is a normal business decision when you have a huge staff with nothing to do because TMI killed new growth. Pencil-whipping margin resulted in the equivalent of building several new plants. When even that work ran out, the US reactor vendors’ experience base died. That’s what started this discussion.