69 Comments

  1. Thank you for this article — it explains something I could never quite grasp. Background: I was Manager of nuclear safety at Ontario Hydro on that fateful day. The event sent us off on a multi-month search for “lessons learned”. We read the admonition about never letting the pressurizer go solid; our puzzlement grew into a question: “Why in hell not?”. You may not know it, but OH operated 4×500 MWe units at Pickering WITHOUT PRESSURIZERS, and subsequently built and operated four more. Those units (six still running) operate in solid mode 100% of the time. They incorporate a power-indexed pressure setpoint droop on the secondary side, of course. Oh, and they do incorporate liquid pressure relief valves, as per ASME Code.

    The most intriguing lesson came from a recently-retired USN engineer-officer, who asked essentially the same question as above: “Why in hell not?” The statement was made in the question period following a brief for Canadian safety engineers conducted in the NRC Head Office. The shocked silence that followed made me think that the USN retiree had done something messy in the middle of the conference table. He didn’t say one more word, that day.

    By the way, a shift of operators at Bruce NGS experienced much the same sequence of events later that same year. Thanks to the TMI lessons, they operated correctly and survived the event and lived to err in some other way on another day.

    Regards
    Dan

  2. Dan, “Why in hell not?” is an interesting observation. The new post-TMI Symptom-Based emergency procedure guidelines for B&W plants have a new section for total loss of all feed water (both main and emergency). There is only one way out of that box: initiate HPI and pump water through the Pressurizer PORV and code safeties to cool the core, until feed water can be regained. Hindsight is 20/20, but it also doesn’t hurt to actually analyze events outside the scope of plant licensing events. That approach was a huge benefit out of the lessons learned.
    BTW, when I was an instructor at one of Rickover’s Navy training plants in the ’60s, we actually pumped the plant solid at power because he wanted data to develop Navy procedures for operation with an isolated Pressurizer. After a week of nervous training the actual event turned into a yawn. mjd

  3. Thanks Rod and Mike,

    This is a very clear description of this most unfortunate event for the nation (not a “disaster,” as nobody was injured or received any harmful dose, despite public Salem-witch-hunt-like misperceptions). No, the unfortunate legacy of TMI is the alternate universe we live in, still dependent on fossil fuel for so much of our energy, when we could have headed for a world with several hundred more 1000 MW nuclear plants providing most of our electricity, be well on our way to mostly electric cars, and be somewhat on the way to electric home heating (at least south of I-70).

    Mike’s story shows in such clear terms the need for effective, precise operating experience programs that produce actionable information that when used will prevent another incident. This applies to any hazardous industry.

    I hope to spread the word on this post and Mike’s story. Thanks again.

  4. That was GREAT Mike. Thank you for that. As someone who has made an effort to study these events, the thing that always bugged me about TMI-2 was why didn’t someone refer to a steam table? Why didn’t they realize that the RCS was saturated? Was the training in place at the time such that boiling in the primary of a PWR was considered impossible?

    I also wonder what happened to the operators that were on duty? With the blame that was placed on them I imagine that they probably lost their careers?

  5. Sean, I can’t really answer why they didn’t figure out the saturation condition. If you look at my slide show, towards the end under the “Prolog” heading I speculate, but I can only speak for my own thoughts accurately. One thing is different: I discuss the “slow period”; at least I got a break, and I never moved forward of the RO desk, so I maintained oversight. My guys were handling the small stuff. And my boss (staff SRO) took over the Make-up Pump panel operation to give us extra hands. So I actually had time to really focus on what I was seeing. If you look close at their timeline, they about never got a break from one thing after another. Also, as near as I can tell, the actual unit supervisor didn’t get there immediately; the site supervisor did. So I can’t judge how much of a normal, practiced op “team” was working the event early, when it really counted. I also believe it actually helped me that this was my first ever Reactor trip in a commercial plant. I had no expectations (or bad habits) other than that the plant would respond according to my training. When it didn’t, my nature is to think “I smell a rat, what’s really going on here.”

    I just flat don’t remember any PWR training discussing boiling in the RCS, ever, including my Navy training, back during the ’70s time frame. In fact the common current operator term “subcooled margin” was never heard of in operator training (or procedures) pre-TMI. Probably hard to believe that, but it’s true. That is the whole point: the PWR industry did not understand that a PWR fell to saturation during a Pressurizer steam space leak, which pushed the Pressurizer full. mjd
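
    To make that concrete, here is a rough back-of-the-envelope sketch (approximate steam-table values, my own illustration, not a plant calculation) of why a Pressurizer steam space leak drives the RCS to saturation while the Pressurizer fills:

    ```python
    # Rough illustration of why a pressurizer steam-space leak takes a PWR to
    # saturation while level RISES. Approximate saturation pressures (psia) vs
    # temperature (degF) from steam tables; illustrative only.
    SAT = {550: 1045, 580: 1326, 600: 1543, 620: 1787, 640: 2060, 650: 2208}

    def psat(temp_f):
        """Linear interpolation of the small table above."""
        pts = sorted(SAT.items())
        for (t1, p1), (t2, p2) in zip(pts, pts[1:]):
            if t1 <= temp_f <= t2:
                return p1 + (p2 - p1) * (temp_f - t1) / (t2 - t1)
        raise ValueError("temperature outside table")

    t_hot = 604.0    # degF, roughly a B&W hot leg at full power (assumed)
    p_rcs = 2170.0   # psia, roughly normal operating pressure (assumed)

    # A steam-space leak bleeds off the pressurizer steam cushion and RCS
    # pressure falls -- but it hangs up near Psat(T_hot), where the hot legs
    # start to flash.
    print("Psat at T_hot  :", round(psat(t_hot)), "psia")          # ~1592 psia
    print("Initial margin :", round(p_rcs - psat(t_hot)), "psi")   # ~578 psi
    # Once the loops flash, steam voids displace water INTO the pressurizer,
    # so indicated level rises even though RCS inventory is being lost --
    # exactly backwards from the pre-TMI "level = inventory" mental model.
    ```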

    1. @mjd

      As someone whose nuclear training began in 1981, a little more than 2 years post TMI, I can testify that understanding Psat – Tsat was an integral part of Navy operational training. I cannot speak from personal experience for the commercial industry, but I have heard that they also learned that particular lesson. I don’t think it took too long before the simulator programs incorporated the correct plant response to a steam space loss of coolant, even if there was not much public discussion about why reprogramming efforts were needed.

      1. Rod, I totally agree with your time frame comment, but I qual’d on my first Navy plant in ’67. Agree with your simulator upgrade comment too. B&W had theirs fixed less than a month after TMI. They kludged it; it still couldn’t calculate two-phase flow, but the Pressurizer level showed the right direction for saturation.

    2. In the US Navy, there was an operating curve that Reactor Operators were required to comply with during critical operation.
      If you complied with the allowable areas of that particular curve – you avoided both low and high RCS pressure limitations associated with those plants.

      The fact that a Navy Nuclear Operator states that they never knew the basis of the operating curves is an eye opener for me.

      1. @Rob Brixey

        While I agree that the operating curve has long been a feature of the training provided for certain Navy nuclear watch stations, even before TMI, many of the people who moved from the Navy program into operator positions in the commercial world spent little or no time in maneuvering.

        They were not all EOOWs or ROs and did not have exactly the same training syllabus.

      2. Rob, I agree with your observation about an operating curve (with a P vs. T box for critical operation limits). We had one too, and it was well understood, including the basis for the box limits. Please review my slide show: we were inside that box until the Pressurizer level approached its limit for critical operation, when the RO manually tripped the Reactor. The crap hit the fan after that, post trip. RCS pressure fell to Psat, while the Pressurizer level swelled to off-scale high from the loop boiling. Totally a new situation within our understanding.

        The new EOPs, and the operating curve that goes with them, now have a computer-generated P vs. T display curve with a saturation line and a “20°F subcooled” line above the Sat line. When the plant P vs. T plot goes below the 20°F subcooled margin line, you must HPI, and don’t ever turn it off while below that line: all the way down to either DHR (decay heat removal) entry conditions, LPI injection flow, or an empty BWST, which requires Containment Sump recirc switchover. (A rough sketch of the rule follows below.)

        Simple, clear, a quantum leap forward for plant emergency ops. One of the benefits that came from the accident.
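
        For readers who haven’t seen the rule written out, here is a minimal sketch of that subcooled-margin logic (my own illustration with an approximate Tsat lookup, not the actual plant EOP curve):

        ```python
        # Minimal sketch of the post-TMI subcooled-margin rule described above.
        # Illustration only -- the real curve and setpoints are plant-specific.

        # Approximate saturation temperature (degF) vs pressure (psia) from steam tables.
        TSAT = {400: 445, 800: 518, 1200: 567, 1600: 605, 2000: 636, 2200: 650}

        def tsat(p_psia):
            """Linear interpolation of the small table above."""
            pts = sorted(TSAT.items())
            for (p1, t1), (p2, t2) in zip(pts, pts[1:]):
                if p1 <= p_psia <= p2:
                    return t1 + (t2 - t1) * (p_psia - p1) / (p2 - p1)
            raise ValueError("pressure outside table")

        def subcooled_margin(p_rcs_psia, t_rcs_f):
            """Degrees F between the hottest RCS temperature and saturation."""
            return tsat(p_rcs_psia) - t_rcs_f

        def hpi_required(p_rcs_psia, t_rcs_f, limit_f=20.0):
            """Below the margin line you initiate HPI -- and leave it on."""
            return subcooled_margin(p_rcs_psia, t_rcs_f) < limit_f

        # Example: hot legs still near 600 degF but pressure fallen to 1600 psia.
        print(subcooled_margin(1600, 600))   # ~5 degF of margin left
        print(hpi_required(1600, 600))       # True -> inject and stay injecting
        ```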

  6. Thanks Mike, I appreciate the lesson. As a reactor operator, I fully understand the Monday morning quarterbacking syndrome. To partially answer Sean’s question above, San Onofre had 3 guys who were at TMI during the event, including the individual whose actions started the whole affair, the Full Flow operator (no blame here, the check valves in the air system failed). One of the guys told me in about 1985 that the NRC had only recently stopped pestering him.
    Although a little off topic, I was curious as to whether your RCPs were damaged from operating without adequate NPSH.
    Also, it appears in your description that DBNPP did not have the ability to feed both S/Gs with a single AFW pump–is that true?
    Before San Onofre threw in the towel, we had become so much better in the simulator than when I first licensed in early ’93, and had learned so much and so many better ways of doing things, that it still amazes me why we didn’t know those things in 1993.
    Lastly, Mike, we in the industry who came after you have profited mightily from the experiences you had and my hat is off to you guys. I know by experience, thankfully only in the simulator, what it’s like to wonder “what the heck is going on?”

    1. Dave, no damage to RCPs, but about 2 days of testing to prove it. NPSH was just a short-term condition and the pumps could take it. The couple-minute run time with de-staged RCP seals was a bigger concern; much test data was taken and analyzed to come to that conclusion. There was even a “core lift” analysis done but not sure exactly why, as that problem is 4 running pumps below 525F (colder water, more core delta P, can lift fuel off lower grid & push up against top hold down springs), and we had 2 off early. Maybe an STA (wink, wink) can explain that… pumping bubbles causes higher core delta P?

      Yes, our AFPs can feed & get motive steam from/to either SG. My slide show sketch shows feed side cross ties, but not steam side (drawing too “busy” already). But the actuation logic has to do it (or operator override). When it sees <600 PSI on an SG (steam leak) it will stop AFW to that SG and line that side’s AFP up to the “good” SG, also getting steam from the good SG. So 2 pumps end up on the good SG. That is what would have auto happened if we ignored the “low steam P block permit” alarm we received. (A rough sketch of that lineup logic follows below.)

      What the system will not do is feed 2 SGs with 1 AFP. Pretty sure that is because each pump is sized for one design DH load; it’s safer & simpler to do that on one SG. AFPs can also get steam from the Aux Boiler, suction from service water, etc. We also had an electric Start-up Main Feed Pump, but did not need it in this event. mjd
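
      A rough reconstruction of that lineup logic, based only on the description above (not on actual SFRCS documentation), might look like this:

      ```python
      # Rough sketch of the AFW lineup logic described above; my own reconstruction
      # from mjd's description, not actual SFRCS logic diagrams.

      LOW_STEAM_P = 600.0   # psi; below this an SG is treated as faulted (steam leak)

      def afw_lineup(sg1_pressure, sg2_pressure):
          """Return which SG each of the two AFW pumps feeds (and draws steam from)."""
          sg1_good = sg1_pressure >= LOW_STEAM_P
          sg2_good = sg2_pressure >= LOW_STEAM_P
          if sg1_good and sg2_good:
              return {"AFP-1": "SG-1", "AFP-2": "SG-2"}   # normal: one pump per SG
          if sg1_good:
              return {"AFP-1": "SG-1", "AFP-2": "SG-1"}   # isolate SG-2, both pumps to SG-1
          if sg2_good:
              return {"AFP-1": "SG-2", "AFP-2": "SG-2"}   # isolate SG-1, both pumps to SG-2
          return {}   # both SGs below setpoint: operator action / override required

      print(afw_lineup(900, 450))   # {'AFP-1': 'SG-1', 'AFP-2': 'SG-1'}
      ```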

      1. ” There was even a “core lift” analysis done but not sure exactly why, as that problem is 4 running pumps below 525F (colder water, more core delta P, can lift fuel off lower grid & push up against top hold down springs), and we had 2 off early. ”

        I believe the concern here is that the higher momentum of the colder, more dense RCS fluid could possibly displace structure / components in the core. I know at SONGS and St. Lucie the fourth RCP was not started until RCS temperature was over 500F. The AP1000 has variable speed pumps with the lower speed used at lower temperatures.
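
        A back-of-the-envelope way to see the scale of that concern (my own simplification, with assumed densities, treating the RCPs as roughly constant-volumetric-flow machines):

        ```python
        # Illustrative only: for roughly constant volumetric flow Q, core pressure
        # drop scales as dP ~ K * rho * Q**2, so the upward hydraulic load on the
        # fuel grows with coolant density (colder water) and with flow (more pumps).
        rho_cold = 61.0   # lbm/ft3, water around 150 degF (assumed)
        rho_hot  = 46.0   # lbm/ft3, water around 550-580 degF at RCS pressure (assumed)
        print("density ratio, cold vs hot:", round(rho_cold / rho_hot, 2))   # ~1.33
        # Going from 2 RCPs to 4 roughly doubles core flow, and dP ~ Q**2 makes that
        # another factor of ~4 -- which is why the stated limit is on 4-pump operation
        # below ~525F rather than on cold water by itself.
        ```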

        1. After re-reading your post (and a cup of coffee), I understand your puzzlement. The only concern I could envision would be some kind of water-hammer under two-phase flow. Off the top of my head, I wouldn’t expect a significant void fraction, nor void / liquid separation between the RCPs and the core to lead to conditions causing water hammer.

    2. I worked on that SONGS simulator. I wonder if they are still keeping it for E-Drills or have they sold it off at auction. The night before they announced the plant closure, I was doing calculations to benchmark the transient response of the simulator. I got the news the next morning. I still have the draft of that report and the computer code output files, which I use to ballpark certain responses on the AP1000, as there are many similarities – 4 RCPs / 2 SGs, almost identical MWt.

      1. The simulator is NOT being used as there is nobody left who knows how to run it. I believe they intend to sell it.

  7. Thanks for sharing this Mike, and for presenting it Rod.

    I agree that the TMI operators were “set up” to some extent by inadequate training, procedures, and equipment. But as mentioned above, were they not at least trained at that time, either by the Navy or the utility, on use of a steam table?

    And unrelated to the HPI issue, do you have any special insight as to why the Emerg FW (“12”) MOVs were left closed? And why they were not designed to receive an auto-open signal on EFW initiation?

    Also, any insight regarding the initiating event? I have heard both that the instrument air and service water systems were (inadvertently?) crosstied, but also that the water intrusion was simply the result of a stuck IA check valve.

    Thanks in advance for your response.

    1. @Atomikrabbit

      Have you read the following three posts and the related discussion threads?

      https://atomicinsights.com/three-mile-island-initiating-event-may-sabotage/

      https://atomicinsights.com/sabotage-started-tmi-part-2/

      https://atomicinsights.com/sabotage-tmi-part-3/

      Even though there were many specific differences, it is interesting to note that the “reactor accident” depicted in the first half hour of The China Syndrome also featured an operating crew that stopped the high pressure injection system when they had an indication that the pressurizer level was off scale high. The often pictured scene of Jack Lemmon with the worried face and sweaty brow was from a time when he was trying to figure out where the water was coming from.

      Weird, huh?

      1. “it is interesting to note that the “reactor accident” depicted in the first half hour of The China Syndrome also featured an operating crew that stopped the high pressure injection system when they had an indication that the pressurizer level was off scale high.”

        If I recall correctly, the dialog was a mish mash of PWR and BWR terms. I think they refer to Reactor Feed Pumps etc.

        I have heard that the control room scenes were shot in the Trojan simulator. Seems unlikely that a utility would help the production of a blatant anti-nuclear film. But stranger things have happened.

        1. @FermiAged

          Sure, there was a mishmash of PWR and BWR terms. Poetic license was certainly a part of the storytelling. The technical advisors to the movie were GE BWR engineers – the infamous GE Three – and there was a lot of involvement from the activists at the Union of Concerned Scientists.

          In certain scenes, it appeared to me that the real facility being depicted was Rancho Seco, which, like Davis Besse and TMI, was a B&W 177.

        2. It felt entirely BWR. The language/jargon was BWR jargon. The scenario lined up with a very likely BWR post trip response (sudden level drop, post trip, which wasn’t detected due to a stuck indication). The actions in response line up with what was expected at the time. Even the “the book says you can’t do that” makes sense, because in The China Syndrome they violated their 100 degree F/hr cooldown rate in order to use their LPCI (low pressure coolant injection) pump.

          1. @Michael Antonelli

            What about the part when they stopped injection because the stuck level indication showed that it was “out of sight high” and Lemmon was trying to figure out where the water was coming from? What about the part when the accident was halted by shutting isolation valves for what appeared to be a stuck open relief valve?

            1. The China Syndrome fictional event that has some echoes of the Three Mile Island accident starts at 12:40 and is declared to be over by 21:33.

              https://archive.org/details/ChinaSyndrome

              Though most of the terms used are BWR terms, the crux of the matter is a reactor water level indication that is so high that it causes the operators to take actions that are not in accordance with “the book” in order to address that particular indication. Once the operators realize that the indicator they were focused on was wrong because it was stuck and that water level is actually too low, they take other actions to reduce plant pressure and allow low pressure injection pumps to put water into the core, thus avoiding core damage – that time.

              As we all know, The China Syndrome filming was completed well before TMI and the movie was in the theater when the accident happened. I am pretty sure that the movie was not completed when the Davis Besse event happened.

          2. BWRs, as they operate at saturation conditions, have many causes of false high level indications, in addition to true high level conditions like I described above. Some examples of causes of false high level indications include rapid pressure reductions/steam throttling changes, ‘notching’ of instrument reference legs, and “Rosemount Syndrome” (a failure mode of Rosemount transmitters in the ’70s/’80s). The worst false high level condition is caused by drywell temperature increases. Operators need to be very aware of these things, as you can falsely believe your reactor inventory is adequate (see the Fukushima Unit 1 response: their drywell temp was very high, and their instrument reference legs were all in saturation and indicating false high. This affected accident response/decision-making).
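
            For readers unfamiliar with how a hot or flashing reference leg produces a falsely high reading, here is a simplified dP-level sketch (made-up geometry and densities, my own illustration, not any plant’s actual instrument):

            ```python
            # Simplified BWR dP level-instrument sketch (made-up numbers; real instruments
            # have different taps, calibration and compensation). The transmitter infers
            # level from dP = (reference-leg head) - (vessel water/steam head).
            H       = 200.0   # inches between upper and lower taps (assumed)
            RHO_CAL = 62.0    # lbm/ft3, density the reference leg is ASSUMED to have
            RHO_W   = 46.0    # lbm/ft3, saturated water in the vessel (approx ~1000 psia)
            RHO_S   = 2.3     # lbm/ft3, saturated steam in the vessel (approx ~1000 psia)

            def measured_dp(actual_level, rho_ref):
                """Head difference the transmitter really sees (consistent relative units)."""
                return rho_ref * H - (RHO_W * actual_level + RHO_S * (H - actual_level))

            def indicated_level(dp):
                """Level the instrument reports, back-calculated with calibration densities."""
                return (RHO_CAL * H - RHO_S * H - dp) / (RHO_W - RHO_S)

            actual = 100.0   # inches; the real level never changes in this example
            for rho_ref in (62.0, 55.0, 46.0):   # cool leg vs heated vs flashing leg
                print(rho_ref, "->", round(indicated_level(measured_dp(actual, rho_ref)), 1))
            # 62.0 -> 100.0, 55.0 -> 132.0, 46.0 -> 173.2 : the hotter (less dense) the
            # reference leg, the higher the FALSE indicated level, with no real change.
            ```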

            In the film, they halted the event by reducing pressure with SRVs to inject with LPCI. The LPCI (low pressure coolant injection) system has a ~300# shutoff head, while BWR post-scram NOP is around 900#. In a BWR design, a low-low level signal automatically shuts the main steam lines, so SRVs would have to be used to reduce pressure and get LPCI’s injection permissive met. That’s when Lemmon opened the 4 relief valves. Going from 900# to 300# busts the 100 degF/hr cooldown limit (post scram I’m not allowed to reduce pressure to < 500# for the first hour unless an EOP override is in effect. These overrides and EOPs did not exist in the ’70s). Rough numbers are sketched below.
            I did not see them close a block valve or isolate a leaking system to end the event in the China Syndrome. (Added notes: I ran a loss of all high pressure feed in my plant's simulator. After the initial level drop and stabilization, it takes about 1 hour for inventory to reach top of active fuel, so the movie obviously sensationalized the timelines). I'm a BWR operator.
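
            For a rough sense of scale on the cooldown-rate point (illustrative steam-table values, assuming the saturated vessel follows Tsat as it depressurizes):

            ```python
            # Rough scale of the depressurization the movie implies; illustrative values.
            tsat_900 = 532.0   # degF, approx saturation temperature at ~900 psia
            tsat_300 = 417.0   # degF, approx saturation temperature near the LPCI permissive
            minutes  = 10.0    # assume the blowdown takes ~10 minutes
            rate = (tsat_900 - tsat_300) / (minutes / 60.0)
            print(round(rate), "degF/hr")   # ~690 degF/hr against a 100 degF/hr limit
            ```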

            Post scram in a BWR, your level drops about 50-60 inches in 3-5 seconds due to shrink/void collapse. At the same time, steam flow throttling losses go away and the pressure regulator response will cause a rapid 100# change in vessel pressure. This makes the feedwater system see a large level deviation, which winds up the controllers and over-feeds the vessel to the point of flooding the steam lines. The steam lines are well above the normal level indications (narrow/wide), and are only visible on the upset and shutdown level indications (which are non-safety, powered by non-vital busses, and typically not even near the main feed controls). GE trained operators against flooding the steam lines in a BWR almost as hard as PWRs trained against going water solid. Water in the steam lines is an ASME emergency service condition. In the ’70s, BWRs did not have vessel overfill protection, and several overfills occurred. Today there are high level trips on your feedwater and high pressure injection systems to prevent it. But back in the ’70s, and even today in many plants that don’t have a digital feedwater system, the response an operator is required to take on a scram is to first immediately trip 1 feed pump and run the second feed pump to minimum flow in manual to prevent vessel overfeed.

            I think I may have done a poor job in my reply above getting at what I was trying to get at. The fact of the matter is that in both plant designs there are conditions that operators must avoid (PWR = solid / BWR = steam line flood), in both designs there are many things that can cause actual and false high level indications, and in both designs, operators were over-trained to respond by shutting down high pressure injection as a result.

            Sorry if I'm a little all over the place with this response. Also, I really liked this article, great job to you and mjd.

            1. @Michael Antonelli

              Thank you for the detailed schooling on BWR water level control. Your comment did a pretty good job of poking a big hole in one of my favorite pet theories regarding the initiation of TMI. The incredible timing coincidences still nag me; I generally don’t accept the idea that a string of low probability events will happen in just the right sequence especially when I see that there are identifiable interests that benefit from the string happening.

              However, your comment helps me recognize that this particular coincidence was most likely not caused by someone connected to a big budget Hollywood movie that was not doing so well in the box office and who decided to increase its audience figures by forcing life to imitate art.

          3. I just watched the movie for the first time since it came out when I was a junior nuclear engineering major. Some things that struck me:

            No hearing protection. Even in 1979 this was a no-no.

            They had HPCI valved out for maintenance. Pretty lenient Tech Specs.

            NRC asked why alternative instruments weren’t looked at to confirm reactor water level. Godell has a blank look. However, another level indicator WAS checked – that is what clued them in to the stuck pen.

            Ventana drawing shows a B&W OTSG. Dialog appropriate for a BWR, however.

            Operators and Kimberly Wells nonchalantly sit on control boards and place bags, jackets over controls.

            At the Foster and Sullivan work site, radiography of pipes is taking place with people casually walking about.

            Does it really take an hour to figure out how to trip the plant?

            The RCP looks more like a pump for a PWR than a BWR. Nevertheless, how could a plant be so seismically flimsy (particularly in California) that a RCP vibration could show up in the control room?

            How does the plant Sequence of Events computer know when an “event” begins and ends?

            I don’t remember armed guards ever being stationed in control rooms. The control room was on their rounds but not as a station. At least not at any plant I have been to.

            Don’t they have to pull the Shutdown Banks before the Reg Banks to go critical?

            These operators were totally non-professional, even by 1970’s standards.

            Notice that whenever a pro-nuclear person spoke at the hearings, the dialog was muted and the scene quickly changed?

    2. AR, I don’t have any special specific insights, other than “been there, done that, walked in their shoes.” And I know how I process multiple problems, just like you do: one rapid conversation at a time inside my head, using my language. But I can only have one conversation with myself at a time. This is not a wisecrack answer at all; it is a real limit on one’s ability to focus on several concerns at once. I think it is the direct cause of my delay between wondering why the P stopped dropping and putting together the P vs. level relationship. My priorities got constantly changed by other concerns going on, so I shifted the conversation to a different subject. I think this is one of the hardest things for a non-operator to understand in these events. There is a human limit to how much info you can process in these stinky ones, and operators will always inherently seek the answer in their training. One lesson reinforced by my event was: maintain the team. When the “big flick” guy gets info overload, you are on the slippery slope.

      I’m going to publish an extensive e-book on this whole subject, in which I discuss how these events get autopsied. Teams divide up into piece parts, each with a narrow single-piece focus, and make a conclusion. The conclusion is correct in that single-focus environment. That ain’t the Control Room in real time. That really is something you must experience to understand. We had ~800 annunciation alarms in our control room. In five minutes we had ~300 of them flashing. That’s an operator’s reality, not a table-top environment.

      A good write-up about initiating event (hose) specifics is here: http://www.insidetmi.com/

  8. I read them at the time.

    This seemed like a rare opportunity to ask an actual licensed B&W operator from the TMI time period some questions that I have had. Not accusations, nor fault finding, but genuine questions.

    If I am reinventing the wheel by asking something that is already apparent to all, then apologies.

    1. @Atomikrabbit

      I’m not trying to inhibit questions or discussions, just trying to make sure that you were aware of the background provided in those posts. I try really hard, but I cannot keep track of everyone who has participated in each discussion here. 🙂

      1. Right. And although I read the original blogs and the comments at the time, I understand it’s quite possible my questions were completely addressed by subsequent commenters.

        Besides, I’m trying to work on my “humility” (whatever that is), and I think I’m getting really really good at it! 😉

  9. Atomikrabbit, I’m fine with any tech question. I think my post speaks to my opinion about judgmental conclusions. I’ve lived with that for 35+ years. The Rogovin Report accused me of cognitive dissonance and operator error, and used my name in the document. I live with that; but even they didn’t ask me what a steam table was (I do know, I worked in a cafeteria). So I don’t resent any questions. Besides, I hung it out there with this post. But it really is nearly impossible to understand an operator’s job unless you have been there. So I generally don’t even try to explain it to non-operators. One thing everyone should keep in mind: there are operators reading this blog. When they don’t pipe up and defend themselves anymore because they have “circled the wagons” in response to constant second guessing of their actions, you have a real problem. And I see a general trend on nuke blogs of very few operator comments.

    1. FWIW, I’ve been licensed on W plants since 1983, sim instructor since 1996.

      So the question for now is, did the 12 valves get an auto open on EFW initiation? Was there no alarm indicating they were in off-normal (closed) position? Shiftly log readings on their position? Since they were MOVs, I assume they had handwheels?

      I realize D-B is not TMI, and the devil is in the details, but I think it’s safe to assume the plants were very similar.

      Looking forward to your e-book, please advise us when it is available.

      1. Atomikrabbit, I think you would be amazed today at the differences in the ’70s-design AFW/EFW systems among the B&W plants. Almost every one of them was different, except maybe Oconee, because Duke built their own plants. Some of that was driven by regulations of that era. In the earlier plants they were not even Safety Grade, but a category called Important to Safety. If your plant was finishing construction towards the end of the ’70s it was about a “coin toss” what you’d end up with. At DBNPP we got held hostage (our license) over High Energy Line Break rules, but I think TMI 2 (slightly later) didn’t. Six months before we were scheduled to take initial operator license exams they installed our Safety Grade Steam/Feed Rupture Control System, coming complete with new Tech Specs, Surveillances, a wall and whip restraints in an already crowded room, etc. I think the only common design requirement was the ability of one train to remove post-trip decay heat.

        Even its trip functionality was foreign to us operators; it wasn’t 2-out-of-4, it was 1-out-of-2 taken twice. Then, since it was being back-fit on a completed MFW system and had to be single-failure proof, we ended up with some half-trip valves. The one that started this event was such a valve: a single spurious trip of one SFRCS channel tripped it closed, causing a loss of MFW to SG 2. All of its original instrument inputs were “bouncy”, especially at low power. It takes operating time to sort that stuff out, but you know how that goes when people smell those generator breakers about to close.

        The system we trained on at the Simulator was Rancho Seco’s, which used the normal plant control system to control SG level on AFW/EFW. The one we started Power Ops testing with used the new SFRCS. So no, the commonalities between the plants are not there.

        The TMI folks of that era I’ve talked with about the 12s say their Plant Process Computer did not even log most valve position changes at that time, which in theory could have shown the closing/reopening of the 12s during the test run 2 days before the accident. The 12s are a dead end. Whoever knows ain’t talking.

    2. mjd, the ‘constant second guessing’ never goes away. We need to be self-critical and be willing to honestly assess our performance, but the pressure to be perfect, to be corrected constantly over the minutiae, is, I believe, corrosive over time and distracts from the big picture. I am probably wrong, but I blame INPO for a lot of this and feel they do a better job at ensuring they’re always needed than they do at actually making us better.
      Operators do not get involved much. Few of us go to company functions/information meetings, etc. I am currently battling the anti-nukes in my local newspaper, the San Clemente Times, and it is difficult to motivate others to contribute to the battle. Who wants to go to a community meeting and listen to 30 speeches attacking what you do? In this industry we are used to dealing with people with integrity; at these meetings one is confronted with those who have none.

      1. When SONGS was under fire, I distinctly remember being told in no uncertain terms to leave the public response to corporate communications.

        The anti-SONGS people were pushing a scenario in which a MSLB would cause such a high pressure drop across the SG tubes that the ones found to have failed their in situ tests would have failed in service. The NRC asked this question and I did a series of calculations / simulations to show it was not possible: the primary pressure drop from the reactor trip was enough to avoid this situation.

        I couldn’t put my results out publicly to dispel this, so I anonymously responded by pointing to a public NRC publication of a MSLB analysis in a CE NSSS that showed primary-to-secondary pressure differences getting smaller after the trip, never approaching the failure threshold. Lotta good THAT did!

        I sensed that SCE corporate management thought SONGS was a burden. There was a corporate publication about the history of SCE and there was only one or two pictures of SONGS and dozens of windmills, solar panels and other fantasies.

        For all the public outreach that SONGS did (beach cleanups, science fairs, and simulator tours), when the chips were down, what good did it do? Did ANYONE from the public raise a voice in support? I only recall that AFTER it was announced, some local businesses lamented the loss of business.

        1. FermiAged, you must know me. I would be VERY interested in your calculations as well as the public NRC publication of a MSLB. Your description of the anti-nukes is dead on because that is exactly what they said–most of them are clueless as to what it all means.

          1. I no longer have the calculations in my possession as we had to turn over anything having to do with the steam generators that might be proprietary or have some kind of relevance to a potential legal action against MHI.

            The public calculation I referenced when addressing the anti-nukes is here:

            http://www.osti.gov/scitech/servlets/purl/5025979

            It is a 1980’s study of a CE MSLB concurrent with a SG tube rupture by Brookhaven National Lab. Although dated, the results using current codes look about the same (at least thermal-hydraulically; the tricky part is the asymmetric core power distribution).

            Primary and secondary pressures are given in Figs. 10 and 13, respectively. The anti-nukes believe that the secondary pressure drop is so rapid that there must necessarily be a large primary to secondary differential pressure. They ignore the fact that primary pressure also drops rapidly due to the reactor trip and subsequent cooldown. The maximum pressure differential remained well below that which resulted in failure during in situ testing.

            I know you by name but I don’t believe we were ever introduced. We would probably recognize each other.

            Are you still at SONGS?

      2. David, I couldn’t agree more. And you have hit one of my concern triggers. It’s beyond the scope of this thread so I’ll be brief. INPO did a lot of good in the ’80s, mainly convincing the plants to do Preventative and Corrective Maintenance, and Engineering Root Cause analysis, the Navy way. But once everyone really believed that message, why do you need INPO? That effort is directly responsible for raising capacity factors from the mid-60% range to 90%-plus. The downside is the plants run so well the operators don’t get challenged, except on the Simulator, so that training better be exceptional. When I see an old-time operator commenting (on a blog) that he spent 2 hours in a critique for a 1-hour Sim session on an INPO SOER (as he said, “Where’s the beef?”), and when I see an NRC inspection report with a 40% failure rate on an operator-written requal exam, I have to wonder… was this a surprise? Do they have a QA organization? Is anybody listening to their operators? When I see a 361-page report in a Corrective Action System for a minor problem, I have to wonder what the overhead is to keep that process in place, including every 5th page having 7 signatures!

        INPO’s slogan “Striving for Excellence” was a great goal in the ’80s when the bar was pretty low. We’re past that; now it equates to “LNT”: it not only costs overhead, it demoralizes the operating crew. But it does provide jobs for clueless nit-pickers. I’ll stop.

  10. I am really looking forward to this e-book. I hope that my question was not taken the wrong way. I was just trying to delve further into the training that was given at the time, and frankly I think that any operator could have been in that position at that time; unfortunately it was Frederick, Zewe, and the other operators at TMI-2. I am glad they were able to move on in the industry. What happened was not fair to them.

    It is amazing to me that the DBNPP precursor event was your first commercial plant trip. Wow.

    Again, thank you for sharing and answering our questions.

    1. Sean, ” I was just trying to delve further into the training that was given at the time…”

      Here’s a good start, from the Essex Corporation, the Human Factors experts hired to do the training portion for the Rogovin Report. You really should read the whole section.

      A quote from the e-book:
      • Summary conclusions in the Rogovin Report by the Essex Corporation, the Human Factors Engineering experts hired by NRC during the Rogovin Investigation to look at the TMI Operator training.
      Operators were exposed to training material but they certainly were not trained.
      They were exposed to simulators for the purpose of developing plant operation skills, but they were not skilled in the important skill areas of diagnosing, hypothesis formation, and control technique.
      They were deluged with detail yet they did not understand what was happening.
      The accident at TMI-2 on the 28th of March 1979 reflects a training disaster.
      The overall problem with the TMI training is the same problem with information display in the TMI-2 control room: application of an approach which inundates the operator with information and requires him to expend the effort to determine what is meaningful.

      Well… at least somebody “got it.” Too bad they were never asked to identify just who had made an “error.”

  11. Poor people operating the plant. That must have been a difficult time for them, and then to be left out to hang as this suggests.

    I’m not trying to diminish the incident by what I say next. It was bad in an expensive-mistake way, caused by technological and training issues it looks like. But not really that bad as industrial accidents go. It was probably more on the spectacularly wonderful possible outcomes side of that class of incidents. That’s how I am beginning to think about it now.

    It’s weird that I know far more about TMI, and hear about it far more today, than the Bhopal chemical disaster or the Banqiao Dam failure; both, on an industrial tragedy scale, make TMI truly insignificant. And what’s really sad is that even judging by energy accidents in the US since then, in terms of people harmed, after all that feverish reporting and media frenzy then and the references now, it’s still insignificant.

    Hats off to the technology and the operators, even if it wasn’t perfect in hindsight. If that’s as bad as it got, early on, with older technology, we clearly were on the right track. I think events at Fukushima even strengthen that conclusion.

    1. Well, I am obviously not as technically proficient as you guys in areas of N-plant operation, but after some consideration here is some thinking out loud about the general implications of an accident:

      Obviously buoyancy, mass, and potential dilution play a role in all these types of industrial incidents. I am wondering now if NP is unnecessarily singled out as the most threatening by very bad reasoning. Obviously, first off, it doesn’t work anything like an atomic weapon. Also, a plus with reactor-type incidents is that any release will likely be delayed from the initial incident, may even be foreseeable, and will also be quite hot and therefore lighter than the surrounding air.

      Anyway here is the weather map for that week.
      ( http://docs.lib.noaa.gov/rescue/dwm/1979/19790326-19790401.djvu )

      I feel like the radius maps that are so commonly displayed with respect to nuclear reactors are misleading at best, and evacuation zones based on them and on misconceptions of low-dose radiation are themselves potential safety hazards that are arguably, in some cases, much more threatening than potential radioisotope releases.

        1. @John T tucker

          That is an interesting account of the events. It was sadly amusing to me to note that Roger Mattson’s name is used when he gets the last word, but it was conveniently replaced with a non-specific “scientists” in the following passage:

          Friday also brought a new, more terrifying revelation: a hydrogen bubble had formed above the reactor core. Over the weekend, scientists from the Nuclear Regulatory Commission argued about whether the bubble might explode at any minute.

          Mattson was not really a scientist, but a frightened little regulator specializing in modeling the behavior of emergency core cooling systems. He had insufficient plant operating or engineering experience and was the source of the controversy because he could not grasp the fact that hydrogen will not explode without plenty of available oxygen.

          Light water nuclear reactors always have some excess hydrogen; we use it to scavenge oxygen that is released when water gets dissociated in a neutron flux. There was never any danger that a hydrogen build-up inside of an intact reactor coolant pressure boundary would explode. It just needed to be gradually vented out using a slightly modified “degassing” procedure.

          http://www.pbs.org/wgbh/amex/three/peopleevents/pandeAMEX88.html

          In an action that still irks some operators today, the NRC saw fit to give Mattson a cash award for his “performance” during the event. I may have the numbers wrong, but I think it was something like $15,000, which was a considerable sum of money for a bureaucrat in 1979.

          1. A hydrogen explosion, and I thought it was possibly some complex reaction I didn’t know about. On the public side of things, including communications with civilians, local governments, and especially the press, the whole thing seems rather poorly handled and explained, and even less successfully communicated, at best.

            1. @John T Tucker – You’ve got that right. There were poor communications all around from the government, the utility, and the vendor.

          2. I just found this:

            Three Mile Island anniversary: the lesson the nuclear industry refuses to learn ( http://www.csmonitor.com/Environment/Energy-Voices/2014/0328/Three-Mile-Island-anniversary-the-lesson-the-nuclear-industry-refuses-to-learn )

            Areas as far as 300 miles away from Harrisburg were advised they might need to evacuate….

            ….the nuclear industry worldwide has not learned the most basic lesson of Three Mile Island – to get accurate information to the public in a timely manner….

            He talks about the hydrogen thing too. I think he makes a good point, despite the first impressions from the sensational title. “Accurate” being the key word here; “technically relevant and in correct perspective” he probably should have included too, considering his assessment of San Onofre.

            Despite the mistakes, the most common thread that runs through it all, up to today, is probably media sensationalism. That’s the ubiquitous lowest common denominator in seemingly all but technical nuclear power reporting.

  12. @david davison April 11, 2014 at 10:10 PM… ” I am probably wrong, but I blame INPO for a lot of this”
    From April 11 NRC event reports.
    OFFSITE NOTIFICATION DUE TO A DEAD DUCK FOUND ON SITE

    “Monticello Nuclear Generating Plant personnel discovered the remains of what appeared to be a deceased duck on plant property. The cause of death was not immediately apparent, no work was ongoing within the vicinity at the time. Notifications to the Minnesota Department of Natural Resources and the Division of Fish and Wildlife will be made for this discovery. This event is reported per 10CFR50.72(b)(2)(xi).
    “The licensee has notified the NRC Senior Resident Inspector.”
    Plant personnel could not determine if the duck was an endangered species.

    I can’t help but wonder if there is a back story here, was an Operator overheard saying “If INPO finds out about that, he’ll be a dead duck.”

    All kidding aside, this is an example of just part of the regulatory burden all nuke plants have to add to their overhead. It is pathetic. Do dirt burners report this stuff? And folks really wonder why nukes throw in the towel. They are being financially strangled by crap. And as Rod frequently points out, most of it is part of the “plan.” mjd.

    1. Turd burners, NG, etc. probably are supposed to report on stuff like this. Who is to know if they don’t? There is no NRC on site day after day so any dead duck is either buried or eaten; problem solved.

    2. The research reactor at Battelle used to have to file security reports all the time because they had their perimeter monitoring motion sensors tuned up so high that rabbits and squirrels were setting off the alarm every time they darted through the exclusion path around the fenceline. More reports and “incidents” to file.

      Mike, I used to work with you on some of the DBNPS Tech Spec CBE systems we developed for the training department back in the ’80s. I think all that got deleted when they had the one “purge” of training dept. personnel.

      1. Wayne SW, sorry I can’t place you. That was a long, long time ago, on a planet far, far away. And “CBE” doesn’t ring a bell either. I left the training department ’81-ish. I went to the department that was responsible for development of the new Vendor Guidelines, through the Owner’s Group, that was to be the Technical Basis for the new Symptom Based Emergency Procedure. Also worked on the SPDS development.

        I so believed in the new EOP concept that once I finished writing the procedure, I V&V’d it at the B&W Simulator, did the classroom training sessions, 50.59’d it for Station approval, and we used it for the upcoming requal training cycle at the Simulator. Then the real work started: fixing my bugs, from the Operator comments from the Simulator Requal sessions. Rewrite, retrain, etc.

        I believe we were the first plant to make the shift, even before NRC approval of the Vendor Guidelines, which was actually unnecessary under 50.59. If I remember correctly we made the shift over during a refuel outage.

        1. CBE=computer-based education. It was the old CDC “Plato” system that used on-line instructional touch-screen terminals. One of the first uses of that technology in that application. The development contract went to Ohio State and I was part of that team. I met with you a couple of times to review materials. The screen name is just a pseudonym. I’ve had my share of stalkers on the internet so I don’t like to put my real name out there if I can avoid it.

          We also put in a bid to do training on the ATOG system which was new at the time, but didn’t get that contract.

          1. Wayne SW, yes Plato, now I remember. You probably worked with “wto” below also; we were the total Operator Training Dept in those days! WTO single-handedly set up our non-licensed operator training program: zone quals, qual cards, training goals, objectives, lesson plans, etc., all pre-INPO. He also did all the NRC pre-license training at the same time, and got an Eng Degree at night. He’s the guy I sent into containment to look after the event; I sure wasn’t going to go… I had caused it. But I did sign the RWP!

            I remember how resistant everyone was to CBE back then (too novel); now it’s probably mainstream. There was also resistance to small part-task trainers (sims) that are now taken for granted. You guys were too cutting edge for a stodgy utility. Tell B.H. I said hi; I see he’s a consultant to ACRS these days too. Go Buckeyes!

  13. Rod et al,
    I thought you might like to read a CNS paper I wrote in 1995 about a very interesting loss-of-coolant incident in Pickering Unit 2. It was handled very well because the operator understood what was happening and did not intervene in the automatic activation of the ECCS. Very important lessons were learned that were immediately shared with all the CANDU reactor operators. The paper is in dropbox.com and available at: https://db.tt/ujp6msro
    Fracture of the rubber diaphragm in a liquid relief valve initiated a loss of coolant in Pickering Unit 2, on December 10, 1994. The valve failed open, filling the bleed condenser. The reactor shut itself down. When pressure recovered, two spring-loaded relief valves opened and one of them chattered. The shock and pulsations cracked the inlet pipe to the chattering valve, and the subsequent loss of coolant triggered the emergency core cooling system. The incident was terminated by operator action. No abnormal radioactivity was released. The four units of Pickering A remained shut down until the corrective actions were completed in April/May 1995.

    1. Jerry, very interesting event. Do you know if Simulators are used in operator training for these units? And if so, was something similar to this event actually trained for, before this event occurred? I know Sims can not always mimic the exact failure mode or location, but something where the principles of the plant response are accurate. You said the Operator understood what was happening, curious if it was from classroom or Sim training?

  14. “Are you still at SONGS?”
    FermiAged, still hanging on by my fingernails. 23 licensed ROs (they call us certified operators) and about a dozen or so SROs.

  15. Mike,
    I find your description of what really happened at TMI very compelling, and it does provide vindication for the operators at TMI. I grew up as a Navy Nuke, serving from 1968-1974. I had the opportunity to be both a prototype staff instructor and serve aboard a fast attack submarine. I then entered the civilian nuclear industry and held virtually every position up the operations chain from AO to Ops Manager at a B&W plant.

    I was RO licensed in 1978 and SRO in 1979, as you see, both before TMI. I did hold my RO license for two years and SRO for eleven. I went on to ultimately become VP nuclear at a BWR. After retiring, I served on a number of utility Nuclear Safety Review Boards so I do consider myself capable to make comments on your article.

    I received the same simulator training as you, and yes, we were emphatically “trained” at B&W to shut down HPI if pressurizer level exceeded 290″. You are also correct in stating that you would fail your comprehensive exam if that was not done during a SBLOCA. I must admit that our training overall back then was clearly not as substantial as operators receive today with the INPO-accredited training.

    Our procedures back then left a lot to be desired compared to the information and formatting of the current operating, abnormal, and emergency procedures. However, in some respects, “older generation” operators did have to prove their integrated system knowledge better than today’s operators.

    For example, in my days, both in the submarine Navy and in civilian commercial power, you were required to draw systems from memory. Checkouts today allow operators to use a drawing to demonstrate flow paths etc. I am not saying this is necessarily wrong; I just personally feel I really did understand my system interrelationships better than what I observe in some operators today.

    OK, I have digressed some. The generic simulators back then also were not capable of really demonstrating system response. Back to TMI: as you know, we were not really trained to respond to a SBLOCA in the steam space of the pressurizer, and yes, once we shut down HPI, we were outside of the assumptions in the design basis for a B&W, and for that matter, any PWR, including the Navy plants.

    In your case, you and your crew were successful, not because of your training, but because you knew something was not right, your procedures were not helping you, and you now were on your own left to the collective knowledge of you and your control room staff. It worked!

    It is a shame that our industry was not provided information about some events that had occurred in the international nuclear world, and also that B&W decided to ignore your experience at Davis Besse. TMI would have been prevented. However, I do believe good came from the TMI event. We now have much better procedures, simulators, and training to assist the operators in today’s control rooms.

    1. @wto

      Thank you for your perceptive comment and for sharing your experience in this discussion.

      I hope you don’t mind, but I added a few paragraph breaks to your comment to make it a little easier to read in the relatively narrow columns provided on the Atomic Insights web site. I did not make any other changes.

  16. I want to make a general overview comment. First off I need to maybe apologize for not being totally effective in my communication skills. Second, please don’t anybody take this personally. The whole intent of my story was to provide some background insight on some history leading up to the TMI accident. I understand I have a unique view because of my position in the history, but I also understand I may have some bias because of my position in the history. I also realize, just like everybody else, I can be thin skinned about criticism.

    The point I wanted to make in the lead-up to TMI was this was not a specific Davis Besse problem, or a specific TMI problem, or a specific B&W problem. It was a PWR problem. All the referenced TMI precursor events prove that. The initial ten minute sequence would have likely been identical in any PWR. At five to ten minutes or so, you have a SBLOCA in the top of the Pressurizer and no HPI on.

    Please read the Rogovin Report about the Beznau Incident at the two-loop Westinghouse NOK 1 plant in Beznau, Switzerland, on August 20, 1974. And this incident had been predicted by the H. Dopchie letter to the AEC on April 27, 1971. The crux is the misunderstanding of the Pressurizer level response to a SBLOCA in the Pressurizer steam space. So now we add Westinghouse to the misunderstanding.

    The gory details are that the Westinghouse Emergency Core Cooling System (ECCS) actuation was dependent on low RCS pressure coincident with low Pressurizer level. This was because they had the Pressurizer level response backwards. So at Beznau they have an event, the PORV fails open, Pressurizer level goes up, and at Time=X they have a SBLOCA with no HPI.
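
    Spelled out as logic (my own sketch of the actuation scheme as described above, not taken from Westinghouse design documents), the trap looks like this:

    ```python
    # Sketch of the coincidence logic described above, as characterized in the text.
    def eccs_actuates(rcs_pressure_low, pressurizer_level_low):
        """Pre-lessons-learned logic: BOTH conditions had to be present."""
        return rcs_pressure_low and pressurizer_level_low

    # Steam-space leak (stuck-open PORV): pressure IS low, but loop flashing swells
    # pressurizer level HIGH, so the level-low condition never comes in.
    print(eccs_actuates(rcs_pressure_low=True, pressurizer_level_low=False))   # False -> no HPI
    ```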

    Does that Reactor core really care why HPI is not on? Does it really make a difference why it is not on? An error has been made, who made it?

    Now let’s go to the “steam table” question asked. Since you already knew the answer, you’ve asked it for a hidden reason. But not so hidden. So my message to you has failed. That question is not even relevant to my discussion. I will ask “why should they have even needed one?” And further, why did I have to even face this event eighteen months before TMI? That’s the point of my discussion. The design basis was not understood by the designers and trainers, despite the warnings before TMI.

    Pre-TMI there was absolutely no accurate understanding, by the whole PWR industry, of the correct response of the RCS to this particular SBLOCA. And the training was wrong and the procedures were confusing. I guess I’m clueless; just what does this have to do with a steam table and “operator error”? Sorry, I just cannot make that connection. The way I connect the dots, the TMI Operators were set up to fail by the Institutional Arrogance of the Whole System at that time. And I think that fact should be acknowledged.

    1. @mjd

      With all due respect here, I think you might be a little thin skinned and somewhat harsh on the designers.

      I’ll postulate that there is a reason, though perhaps not such a good reason, why the designers and trainers did not have a good understanding of the response in a real live system to a postulated event of a leak in the steam space of a PWR.

      Based on my continuing investigation of the history of our still quite young technology, the problem was that the trainers and designers had little, if any, experience in operating the systems they were designing and teaching people how to operate. There is little evidence — outside of a few testing programs and inside the somewhat translucent Navy program — of people learning to design nuclear power plants based on the experience of testing physical plants through the full range of events that could happen.

      It’s hard for some people to comprehend, but many nuclear plant designers have never even been inside an operational power plant, except perhaps for brief tours. That was especially true in 1979. At that time, I think that the oldest B&W plant had been operating for less than 10 years.

      Many of the designers depended on data provided by the AEC from test reactors, but that organization made a conscious, politically expedient decision that light water reactor technology was sufficiently mature in 1963 that they could keep on licensing without ever completing the planned testing program on scale model analogs.

      Many of the tests, originally planned to be completed by the mid-1960s using the planned Loss of Fluid Test (LOFT) facility, were, in fact, not completed until the 1980s since the LOFT facility was subjected to repeated delays in funding and management incompetence. (The contracted program manager, by the way, was the Phillips Petroleum Company, and the man who kept deferring or reprogramming the funds that were supposed to be used was Milton Shaw. He was focusing all available resources on the Liquid Metal Fast Breeder reactor program.)

      In other words, please be just a little forgiving of the designers – they might have been able to predict system response through exceptionally accurate computational models, but their computational resources were pretty limited. They might have been able to realize that their assumptions were wrong if they had actually had data from a real system operating at real temperatures and pressures with real fluids, but they didn’t.

      They might have even been able to make a correct prediction if the analysis of the 1974 Beznau event or the 1971 H. Dopchie letter had been properly shared and discussed, but apparently neither of those discussions took place in an accessible venue.

      The people who programmed the B&W simulator apparently used a common assumption about system performance that happened to be quite wrong.

      They should have been able to correct their faulty assumption after your event, but “communications failure” reared its ugly head.

      1. I agree with your assessment. And I really don't blame the designers singularly. My (current) bottom comment explains my position. It was a collective failure by everybody who touched it. If I emphasized criticism of my experience with a particular designer, it was only to set the stage for how I arrived at the mindset I had prior to my event.

    2. MJD, I do not think that any of your responses have shown any bias or "thin skin." A lot of people would not even attempt the essay or discussion you have here, and I for one think it takes some courage to put it out there and try to communicate with those of us who have not been through the training and hard work it takes to serve in the armed forces or become an operator (licensed or not). But I digress.

      It is hard to look at a story objectively when you already know the outcome, and it is hard not to ask "loaded" questions. While what I asked was "why didn't anyone think to look at a steam table," what I really wanted to know was what the training and procedures in place at the time said about subcooling, and whether it was a failure of training or of "human factors" in the way the information was presented to the operators in the control rooms of the day. (Was there a convenient way to trend and monitor Tsat and Psat in the midst of a high-pressure situation, and did the training or procedures give guidance on this? It was in no way meant as a dig against the operators.)

      1. You, specifically, have not asked a loaded question, and I didn't take it that way. We've interacted before, several months back on a TMI sabotage post, where I piped in about the DBNPP event. At that time you were trying to get a handle on the way we were trained. It was so different, and so bad, that even today's operators probably can't grasp that it was actually done that way. But why should they even care, since they are learning under the improved post-TMI system? It is only important in the historical context of understanding a historical event. Questions will get asked, by newer folks, totally out of disbelief that things could actually have been that way. But they were. So things that are obvious today were not obvious 35 years ago.

        Rod @ April 13, 2014, 11:32 gives a good thumbnail sketch of the growing pains of the industry. There was virtually no trading of Operating Experience, and actually not a lot to be had anyway. There were only 60-ish plants even operating. So at that time the transient response understanding (and thus the training) was all based on the "Licensing" Safety Analysis Transients. Those were done for a specific purpose, and under specific constrained rules, such as assuming only a single failure. They had no connection with the real world, where simple multiple failures can pile up to create the stinky events.

        Then the Emergency Procedures of that time were written around that Safety Analysis transient response, with one EOP per transient. And each EOP was only 100% accurate for that event happening alone. At both DB and TMI we found ourselves in several events at once, all in the first five minutes (Loss of MFW, Reactor Trip, Turbine Trip, Safety System Actuation, etc.), and no single EOP was 100% accurate for the combination. That system was just flat unworkable when failures started to combine.

        Now if you overlay the training for events on top of that, it was done the same way. Since the only transient info available was the Safety Analysis transients, that is how it was taught: a single event, using plots for that event and Immediate Operator Action steps from the EOP for that single event.

        So to address the steam table and system saturation questions, the simple answer is no, it was not taught for any transient response, because that transient was not specifically analyzed. Since the Pressurizer steam space leak transient was not understood, with the system falling to saturation and hanging there, the whole thing fell apart when it happened. We had not been trained for that possibility.

        But at least in my case, my whole training package, going back to and including Navy nuke basic principles training about steam plants and steam/water properties, did in fact work. It's what clued me in; I said this can't be possible (we didn't pump that water into the pressurizer). My basic principles training is what helped me figure it out. So in fact, my training and understanding of the basic stuff was there at the foundation. It allowed me to figure out that we were saturated. But it did not overcome my (and several other folks') conditioning to never pump more water into a full Pressurizer. And believe it or not, I never heard the term "subcooled margin" used at all until after TMI. (For anyone who has never worked with a steam table, a small numeric sketch of what that check amounts to follows at the end of this comment.)

        That's the historical context, and it can be hard to believe in today's world, I know. Once upon a time it was hard to believe the world was round, also. To understand the causes of TMI, the whole historical context of how we got there has to be understood. I am really grateful Atomic Insights gave me the chance to introduce some of it. And I can tell by the comments that a lot of folks do really want to understand that history. Thanks. mjd
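
        The check itself is nothing more than: look up the saturation temperature for the current RCS pressure and subtract the measured hot-leg temperature. Here is a minimal sketch of that arithmetic. The handful of tabulated points and the example readings are approximate, rounded values chosen only for illustration; they are assumptions for this sketch, not numbers from any plant procedure or display.

        ```python
        # Minimal sketch (illustration only, not plant software): estimate
        # subcooled margin from a few approximate steam-table points.
        # Pressures in psia, temperatures in degrees F.

        SAT_TABLE = [        # (saturation pressure, saturation temperature), approximate
            (600.0,  486.0),
            (1000.0, 544.6),
            (1500.0, 596.2),
            (2000.0, 635.8),
            (2250.0, 652.9),
        ]

        def t_sat(p_psia):
            """Linearly interpolate saturation temperature at the given pressure."""
            if p_psia <= SAT_TABLE[0][0]:
                return SAT_TABLE[0][1]
            if p_psia >= SAT_TABLE[-1][0]:
                return SAT_TABLE[-1][1]
            for (p1, t1), (p2, t2) in zip(SAT_TABLE, SAT_TABLE[1:]):
                if p1 <= p_psia <= p2:
                    return t1 + (t2 - t1) * (p_psia - p1) / (p2 - p1)

        def subcooled_margin(p_psia, t_hot_f):
            """Degrees F between saturation temperature and hot-leg temperature.
            Near zero or negative means the coolant is at saturation (boiling)."""
            return t_sat(p_psia) - t_hot_f

        # Roughly normal full-power PWR conditions: comfortably subcooled.
        print(round(subcooled_margin(2155.0, 605.0)))   # about +41 degF

        # Depressurized to around 1000 psia with hot legs near 545 degF:
        # margin is essentially zero, i.e. the RCS is sitting at saturation.
        print(round(subcooled_margin(1000.0, 545.0)))   # about 0 degF
        ```

        The arithmetic is trivial; the point of the story above is that in 1979 neither the control room displays nor the training prompted anyone to perform it.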

        1. Thank you, MJD, you have helped to really put this into context. It never really made sense to me before. I've read and reread Rogovin and Kemeny; I actually bought physical copies of the published reports. For some reason I have been fascinated with these types of events and how so many little (and big) things that wouldn't seem related come together in a "perfect storm." I mean, what are the chances that, if the PORV drain line hadn't been known to leak, they would have recognized that the temperature there was significant? It's really a small detail, but it could have made a difference. I read the chapter you cited by the communications experts and it was really eye-opening. Like I said, I had this gut feeling that the operators were set up to fail in more significant ways than the reports stated. After reading your piece and your replies here, I understand that the training was not just deficient, it was dangerous.

          The mainstream texts on the subject only mention the going-solid issue; that is why I could not figure out why they didn't realize they had violated their subcooling margin, since today that is (AFAIK) an important plant parameter that is monitored at a PWR. To realize now that at the time it was not, and that the training did not teach it, blows my mind. It was the missing piece.

          To me it is like sending someone to driving school without teaching them what a speed limit is, because the vehicle can only accelerate to 20 mph under normal circumstances. Now take that driver, put the speedometer in the trunk, tell them not to use the brakes too much or they might warp the rotors, and you have TMI II.

          I have really enjoyed communicating with you here, and I was concerned that I had offended you, which I didn't mean to do. I think that you would make a great guest on an episode of the ATOMIC SHOW podcast :HINT HINT: and I for one would really enjoy hearing more about your experiences as an RO and SRO.

          1. Sean McKinnom April 15, 2014 at 3:23 PM
            Sean, a final thought; you said: "After reading your piece and your replies here, I understand that the training was not just deficient, it was dangerous."

            I wouldn't really characterize the training as "dangerous" at all. To do so is to ignore everything it got right and focus on one single data point for a conclusion, which is exactly what I am saying should not be done. After all, it was my whole training package, all put together, that led us to a successful outcome. The problem was the lack of a correct understanding of that event by all concerned, which fell out of the way events were analyzed to license the plants.

            My whole training package, especially my overall B&W Simulator training, prepared me well enough to get me through an unanalyzed and unanticipated event. And that is real success for any training program; after all, everything can't be "cook-booked."

            Unfortunately the ball got dropped in one very specific event where the training was deficient, because the event understanding was deficient. My beef has always been that "they" threw the TMI Operators under the bus, and by association me also. I ain't going there; I'm trying to pull them back out.

  17. I totally agree that the whole industry benefited greatly from TMI. My open point of contention is that the simple "operator error" conclusion is not that simple. All complicated technologies eventually face a crisis. How they deal with that crisis is a matter of their maturity and their integrity.

    The Apollo 1 fire, which killed the crew during a launch-pad rehearsal, and the Navy's loss of the USS Thresher are examples. In both of those cases the organizations said, in effect, "we collectively are doing something wrong and we need to change." They did not blame the people at the bottom of the technology.

    In studying the loss of the Space Shuttle Challenger, the sociologist Diane Vaughan coined the term "normalization of deviance": simple deviations are ignored and accepted until they are looked at as normal; more deviations occur, more acceptance, and thus risk piles up.

    This is what I see as leading up to TMI: normalization of deviance with the precursor warnings. Yet the majority opinion is that TMI was caused by operator error. I don't accept that, and I think it needs to be acknowledged, to provide closure of that event, especially for the TMI Operators who have had to unfairly live with this. That is my message. Mike Derivan.

  18. SRO at Davis-Besse, licensed in 1977. I was bothered a lot by the TMI reports. I had an article published in Public Utilities Fortnightly (Nov 19, 1981). I just re-read it and stand by it still: "The root cause of the accident was a failure to analyze plant behavior during small reactor coolant system breaks. Thus, proper operating emergency procedures were not developed to cope with these breaks, and operators were not trained to cope with this type of accident." No amount of training would have helped if it remained based on defective lines of defense. Good write-up, Mike.

