To briefly recap, we previously found that nuclear power construction costs in the US and around the world steadily increased from the late 1960s through the 1980s. In the US, this cost increase seems largely driven by rising labor costs, especially for expensive professionals. This labor cost increase, in turn, was at least partly due to frequently changing regulations during the period, which caused extensive design changes, delays, rework, and general coordination issues on in-progress plants. For instance, an analysis of cost overruns at the Davis-Besse power plant, which was budgeted at $136 million at the start of construction in 1967 and ultimately cost $650 million when completed in 1977, found that “NRC modifications and their chain effects” were responsible for over 60% of the total cost. By contrast, the lower construction costs of French plants during the same period have been attributed to a “lack of changes during construction.”
We can get a sense of the frequency of NRC regulation changes by looking at the dates of regulatory guide issues and revisions (though this doesn’t capture many of the changes instituted post-Three Mile Island, only some of which resulted in regulatory guide updates). We see a huge spike in the early 1970s (and 1980s, once we include TMI), a decline in the 1990s, and something of a rise since then (though with a lot of year-to-year variance).
One issue with frequently changing regulations is that it’s often unclear to both builders and regulators how regulations should be interpreted. This creates expensive and time-consuming coordination failures, as builders and regulators gradually work their way to a mutual understanding. (For a more detailed look at this issue in building construction, see my article on automated code checking.) This sort of issue is described in the previously mentioned study on nuclear power craft productivity:
Varying interpretations of the plans, specifications and building code stipulations among quality control inspectors was another frequent occurrence, according to the craftsmen. Each of these predicaments was said to cause continual postponements and removal and reinstallation of work that was deemed to be non-conforming.
And similar issues seem to be behind the cost overruns at the Flamanville plant in France:
In 2005 the regulation introduced new requirements concerning qualification, resistance to hazards, material properties and welding procedures, as well as quality control, quality assurance, and surveillance by third parties. It then took 4 years for the licensee and the safety authority to reach a common understanding on interpreting these new technical requirements. The requirements were further updated in 2012, and the regulation itself was further revised in 2015.
These new requirements were imposed while large components were being manufactured for the Flamanville EPR, requiring the industry and certification bodies to interpret and agree upon the evolving requirements.
The issues involved in qualification of the Flamanville pressure-vessel head illustrate how this lack of predictability – and to some extent stability – of safety regulations, combined with challenges in supply chain capabilities, can lead to delays and therefore contribute to costs overruns:
-Although the reactor’s pressure head was forged in 2006, prior to the new regulation on pressure equipment, Framatome had agreed to comply with the imminent regulation.
-However, as one of the largest forged nuclear components, it proved especially challenging for the industry and the safety authority to reach a common understanding for the qualification process. Initial discussions were completed only in 2011.
-Further tests were then requested, which resulted in identification of non-compliance with the new specifications in 2014.
-This led to a large-scale programme that lasted until 2017 to further demonstrate the safety case, despite the identified non-compliance.
-Eventually, in October 2017, the French safety authority qualified this component for a reduced period until 2024.
This is another reason why regulatory stability is so important. Building a shared consensus on how regulations are interpreted takes time, and the process is disrupted when there are frequent, substantial changes.
QA/QC requirements
As the Flamanville plant issues suggest, regulations also influence nuclear plant construction costs via QA/QC requirements. Since many nuclear plant components and systems need to continue functioning after an accident that might create extreme environmental conditions, these components (particularly anything safety-related) require extensive testing and verification to ensure they’ll perform correctly. This often takes the form of carefully recording what happens to every component at each step of the manufacturing and construction process, to ensure the correct part with precise performance characteristics ends up in the right place.
This sort of documentation can be extremely burdensome to create. For example, here’s a description of some of the QA requirements during the construction of the Diablo Canyon nuclear plant (via Komanoff’s “Power Plant Cost Escalation”):
For instance, simple field changes to avoid physical interference between components (which would be made in a conventional plant in the normal course of work) had to be documented as an interference, referred to the engineer for evaluation, prepared on a drawing, approved, and then released to the field before the change could be made. Furthermore, the conflict had to be tagged, identified and records maintained during the change process. These change processes took time (days or weeks) and there were thousands of them. In the interim the construction crew must move off of this piece of work, set up on another and then move back and set up on the original piece of work again when the nonconformance was resolved. Installation of wire must be done according to written procedure and must be documented. Every foot of nuclear safety-related wire purchase is accounted for and its exact location in the plant is recorded. For each circuit we can tell you what kind of wire was used, the names of the installing crew, the reel from which it came, the manufacturing test, and production history.
Similar documentation requirements apply to the manufacture of nuclear-grade components. Here’s a description from a former engineer who worked in a facility that made nuclear-grade components:
Many moons ago I did [design and manufacturing] for a company that made both (section VIII and section III vessels) and my memory is that it was essentially the same design work with much more documentation and paperwork required for the "N" stamp vessel.
When the paper weighed about what the vessel did, it was ready to ship is how we put it. It is all about material accountability and documentation of the traceability of the materials and calculations.
Dawson 2017 estimates that quality control requirements make up 23% of the cost of concrete and 41% of the cost of structural steel on nuclear plants [0].
And an analysis by EPRI found that nuclear-grade components were in some cases 50x more expensive than off-the-shelf industrial-grade ones.
Nuclear-grade components don’t necessarily have higher performance requirements than conventional components - the cost instead mostly comes from the documentation and testing to ensure they’ll meet their performance requirements. Because these QA/QC requirements are difficult for manufacturers to implement (since they require significant process changes), many manufacturers simply don’t make nuclear-grade components. This (on top of the fact that the US spent a long period not building new nuclear plants [1]) limits the pool of potential component suppliers, which often makes components hard to obtain and further increases their price. Some experts think these QA/QC requirements and their downstream effects are the prime reason for high nuclear construction costs:
...the main factor leading to high plant construction costs is not the design of the reactors, or various safety features that they employ, but the uniquely strict QA requirements that apply (only) for the fabrication of safety-related nuclear plant components (i.e., "nuclear grade" components). Conversely, I believe that in terms of safety, fundamental reactor design, employed safety features, intelligent operation/training, and maintenance are much more significant (effective) than the application of extremely stringent fabrication quality control requirements.
Outside of nuclear plants, these sorts of testing, documentation and traceability requirements are typically only seen in situations where a single component failure is considered catastrophic - for instance, on manned spaceflight. Here’s a description of similar documentation requirements used for the Apollo Program (From “Angle of Attack”):
The only way you could infallibly predict the behavior of a given part was to be certain of everything that went into it - its manufacturing heritage, its precise chemical makeup - and that meant tracking the metal all the way back to the mine. The system they worked out was called “traceability,” and it was rigorously applied to every piece of Apollo and the huge Saturn booster as well. As each part moved through the manufacturing process, it was accompanied by a packet of documents that established its genealogy and the pedigree of every switch and resistor and screw and zoot fastener that went into it.
…A whistle-blower at North American wrote a letter to his congressman charging that the company was mismanaging Apollo and price-gouging the government. One piece of evidence he pointed to was a particular half-inch steel bolt used in the command module. The man said he could go to any hardware store in town and pick up a bolt like that for about fifty-nine cents, and North American was paying $8 or $9 for the things…Charlie Feltz had to drop everything and fly to Washington with a bunch of flip charts and try to calm the lawmakers. He explained to the committee…that there were eleven steps in the manufacture of these bolts and they had to be certified at every step. Not only had the bolt itself been subjected to rigorous testing, but the steel rod it was milled from had been tested, as had the billet from which the rod was extruded and the ingot from which the billet was forged. Indeed, they knew where the iron ore had come from - the Mesabi Range north of Duluth - and they knew which mine and what shaft. And when you factored in all that extra rigmarole, said Charlie, it turned out the actual cost of the damn bolts was not $8 or $9 but more like $32.
…this concept of accountability pursued each piece of hardware through the manufacturing process and followed it out the door and onto the launch pad. By some estimates, nearly half the effort that went into building Apollo went into testing.
“Building to the requirements of manned spacecraft” is perhaps a good one-sentence summary of why US nuclear plants are so expensive to build.
Technical ability
Because the requirements for constructing nuclear plants are so strict, in practice it’s often very difficult for builders and manufacturers to meet them.
For instance, consider concrete. Nuclear plants require a significant amount of concrete for the foundations, as well as the containment building. Because it performs a shielding function, nuclear plant concrete must meet the stringent QA/QC and documentation requirements that other safety-related components meet. At the VC Summer plant in South Carolina, a concrete work package took three volumes of documents: “one volume [has] safety bulletins, quality control signoff sheets, and general information associated with the work, one has drawings and specifications, and one has design changes. In some packages, the design change volume is twice as thick as the drawing volume.”
Meeting these requirements for a site-produced material is difficult. Nuclear concrete typically has multiple closely spaced reinforcing bars that can be difficult to arrange in the proper position, or to pour concrete around (the Royal Academy of Engineering’s 30-page Guide to Nuclear Concrete mentions “congestion” 13 times). Concrete placement issues seem to have plagued every recent nuclear project, and are frequently the source of delays and cost overruns. Incorrectly placed rebar on Vogtle 3 and 4 in Georgia caused a 6-month project delay. A similar issue caused a 4-month delay on the VC Summer plants, as well as delays at Flamanville in France. At Olkiluoto 3 in Finland, poor concrete composition (along with other issues) caused a 9-month delay.
These sorts of issues seem common historically. In the study of craft productivity on nuclear plants, 35% of tradesmen blamed “foreman incompetence” as a major cause of productivity problems and delays. More generally, nuclear construction requirements are sufficiently different from other types of construction that building nuclear plants is its own field of expertise. For instance, the contractor for VC Summer and Vogtle 3 and 4 came from the oil and gas industry, and had no previous nuclear construction experience [2]. Though oil and gas facilities might seem to overlap significantly with nuclear plants in their requirements (both require large amounts of piping and process equipment, both must be designed for severe environmental conditions triggered by catastrophic events), the contractor was apparently unprepared for the difficulty of meeting nuclear construction requirements. Similar issues seem to be responsible for delays and cost overruns on Flamanville and Olkiluoto.
The origins of increased regulation
Since so many issues of nuclear plant construction seem to stem from regulatory requirements, it’s useful to have some context for how they initially evolved.
Nuclear power stations first began to be built in the 1950s (the first full-scale commercial nuclear power station in the US, at Shippingport, was commissioned in 1958), and by 1962 there were 6 commercial reactors in operation in the US. These early reactors were small, with none exceeding 200MWe, and were followed in 1963 by a series of larger reactors up to 800MW in size. These “turnkey” plants were built by Westinghouse and GE under fixed-price contracts (almost certainly at a significant loss) in the hopes of jumpstarting the nuclear plant market. The jumpstart worked - 20 new nuclear power stations were ordered in 1966, 31 in 1967, and 17 in 1968, many of which were over 1000MW in size (this surge became known as the “Great Nuclear Bandwagon Market”).
As the size of nuclear plants increased, concern grew about their safety mechanisms. Whereas a core meltdown in a 100MW reactor could likely be contained within the reactor itself, a 1000MW reactor produced enough heat that, in a sufficiently severe disaster, a meltdown might burn through both the reactor vessel and the containment building. The AEC began to focus more on the problems of core meltdowns and loss-of-coolant accidents.
And as the number of plants grew, anti-nuclear efforts began to coalesce. Proposed plants in New York City and Bodega Bay, California were canceled in 1963 and 1964 partially due to citizen protest. The Union of Concerned Scientists, a science advocacy group formed in 1969, began to voice concern about reactor safety and the design of emergency systems. New plant licenses were increasingly contested, and by 1971 there were local efforts to place a moratorium on new nuclear plants in Oregon, New York, Minnesota, and California.
In the face of increasing concern about nuclear accidents, and the increasing number and size of nuclear plants, the stance of the AEC (and later the NRC) became one of ever-increasing regulatory stringency, in order to keep the probability of a nuclear accident from rising. As early as 1965, the Advisory Committee on Reactor Safeguards (ACRS) recommended that safety standards be tightened as the industry grew in size:
The orderly growth of the industry, with concomitant increase in number, size, power level and proximity of nuclear power reactors to large population centers will, in the future, make desirable, even prudent, incorporating stricter design standards in many reactors.
It later stated “large increases in the number of reactors lead to the desire to make still smaller the already small probability per reactor that an accident of any significance will occur”. Similar concern about keeping the overall probability of accidents from rising seems to have driven increasing regulatory efforts through the 1970s.
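The arithmetic behind this concern is worth making explicit: even if the per-reactor accident probability stays fixed, the fleet-wide probability grows with the number of reactors. Here's a minimal sketch, with hypothetical numbers (not actual AEC/NRC estimates):

```python
# Illustrative sketch of the ACRS's concern (all numbers hypothetical):
# if each reactor independently has an annual accident probability p,
# the chance of at least one accident per year somewhere in a fleet
# of n reactors grows with n.

def fleet_accident_probability(p_per_reactor: float, n_reactors: int) -> float:
    """Probability of at least one accident per year across the fleet."""
    return 1 - (1 - p_per_reactor) ** n_reactors

def required_per_reactor_p(p_fleet_target: float, n_reactors: int) -> float:
    """Per-reactor probability needed to hold fleet-wide risk at a target."""
    return 1 - (1 - p_fleet_target) ** (1 / n_reactors)

# With a hypothetical 1-in-10,000 per-reactor annual probability,
# growing the fleet from 10 to 100 reactors raises the fleet-wide
# annual probability roughly tenfold - so holding fleet risk constant
# requires shrinking per-reactor risk roughly tenfold.
```

This is the "regulatory ratchet" in miniature: a tenfold increase in reactors demands roughly a tenfold reduction in per-reactor accident probability just to stand still.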
This desire to continually reduce the probability of an accident was complicated by an increasing realization that potential reactor failure modes weren’t well understood, and had in many cases been underestimated. Early small-scale tests, for instance, revealed that emergency cooling systems might not function properly in an accident. New plant applications often revealed that some types of accidents were more likely than had been thought. Prior to the proposal for a power plant at Bodega Bay (located near the San Andreas fault), for instance, seismic design had not been considered in nuclear plants; subsequent analysis revealed that the potential for severe seismic events was much more widespread than previously thought. Similarly, tornado design requirements were created following an application for a plant in a high-tornado area, which revealed that tornado risk was more widespread than had been assumed.
As more reactors were brought online, and more experience was gained operating them, more was learned about potential things that might go wrong. Plants experienced, among other things: loss of normal and emergency power, safety systems not properly connected, failure of control rods to operate properly, large pipe failures, fuel leakage, malfunctioning valves and cables, and structural failures. In some cases, such as the fires at San Onofre (1967), Indian Point (1971) and Browns Ferry (1975), the accidents were quite serious. The AEC/NRC began to learn that their previous attempts to model reactor failure were inadequate.
One reflection of this is the accident at Three Mile Island (TMI), where a partial core meltdown occurred. A reactor meltdown was thought to be an astonishingly unlikely event, and yet TMI experienced one after comparatively few reactor-years of operation.
Another way this is reflected is in reactor capacity factors, the fraction of time they’re online and producing electricity. Initially regulators assumed that nuclear plants would achieve an average capacity factor of 0.8 (online 80% of the time.) But as of 1976, the average capacity factor of nuclear plants was just 0.57, with newer, larger plants often being below 0.5. This low capacity factor was largely due to a large number of unplanned outages caused by equipment failures. (The industry has since improved its capacity factors significantly.)
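Capacity factor is simply the energy a plant actually produced divided by what it would have produced running continuously at full power. A quick sketch with illustrative numbers:

```python
def capacity_factor(energy_mwh: float, rated_mw: float, hours: float) -> float:
    """Energy actually produced divided by energy at continuous full power."""
    return energy_mwh / (rated_mw * hours)

# Illustrative: a 1,000 MW plant producing 5,000,000 MWh over a year
# (8,760 hours) has a capacity factor of about 0.57 - the 1976 fleet
# average cited above.
```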
These sorts of issues aren’t necessarily surprising in the rollout of a new power-generating technology. But they take on a different flavor in a world where accidents are (for better or for worse) considered unacceptable. The NRC’s regulatory stance became a deterministic, defense-in-depth approach - the NRC imagined specific failure modes and specific ways of preventing them, then layered several redundant systems on top of each other to compensate for uncertainty. (After Three Mile Island, this philosophy gradually began to be replaced with one more focused on Probabilistic Risk Assessment.)
A brief note on ALARA
Nuclear regulation stringency is sometimes blamed on the policy of ALARA adopted by the NRC - that radiation exposure to workers and the public should be “As Low As Reasonably Achievable”. Critics point out that strict interpretation of this philosophy would result in ever-stricter regulations that would prevent nuclear from ever being cost-competitive. ALARA, in turn, is based on a “linear no threshold” model of radiation safety that many believe is incorrect.
In practice, the AEC/NRC do seem to have had a deliberate policy of creating increasingly strict regulations to minimize potential radiation exposure. But I think it’s easy to over-index on ALARA as a specific driver of high US nuclear construction costs.
For one, every country in the world has adopted the ALARA standard, as has the US Navy, so on its own the ALARA philosophy is not an especially good explanation for US nuclear plant construction costs. And blaming ALARA suggests an overly simple causal chain of nuclear regulatory increase. In particular, it omits the role of public concern and controversy, which historically has been a major factor - the word “controversy” appears 26 times in the NRC’s 116-page “Short History of Nuclear Regulation.”
For instance, negative public response was a major factor in the NRC abandoning attempts to reduce regulatory requirements for materials with low levels of radiation:
In June 1990, the NRC published a policy statement outlining its plans to establish rules and procedures by which small quantities of low-level radioactive materials could be exempt from regulatory controls. The agency proposed that if radioactive materials did not expose individuals to more than 1 millirem per year or a population group to more than 1,000 person rem per year, they could be eligible for the exemption. However, the NRC would not grant this exemption automatically; it would consider requests for exemptions for sites that met the dose criteria through its rulemaking or licensing processes. It intended that the BRC policy would apply to consumer products, landfills, and other sources of very low levels of radiation. The NRC explained that the BRC policy would enable it to devote more time and resources to major regulatory issues and thereby better protect public health and safety.
The NRC’s announcement of its intentions on the BRC policy was greeted with a firestorm of protest from the public, Congress, the news media, and antinuclear activists. Some critics suggested that the agency was defaulting on its responsibility for public health and safety and that BRC policy would allow the nuclear industry to discard dangerously radioactive wastes in public trash dumps. One antinuclear group alleged that it was “a trade-off of people’s lives in favor of the financial interests of the nuclear industry.” In public meetings that the NRC held to explain BRC, aroused citizens called repeatedly for the resignation of the Commissioners or their indictment on criminal charges.
Five states banned the disposal of nuclear waste with low levels of radioactivity in their landfills (which would have been allowed under this policy), and dozens of environmental groups filed lawsuits against the NRC. In response, the NRC declared a moratorium on the Below Regulatory Concern policy.
Concern over even small amounts of radiation does seem to be a driver of the nuclear “regulatory ratchet” (though just as often regulation seems to be driven by concern over small risks of large accidents [3]), but it often operates in a way that the NRC doesn’t necessarily have much influence over.
Fixes that haven’t worked (yet)
There have been several attempts to improve the US nuclear plant construction process, most of which don’t seem to have worked.
One major change was in the plant licensing process, which originally involved two steps. Applicants would first apply for a construction license, which would allow them to start construction on the plant. The construction license didn’t require a fully specified plant design, only a basic safety analysis. Once the plant was complete, the operator would then apply for an operating license, which allowed the plant to start producing power. Building a nuclear plant without knowing whether the design was acceptable was obviously a source of difficulty - it was this licensing structure, for instance, that was partly responsible for forcing in-progress plants to meet changing regulatory requirements.
In the 1990s this was replaced with a one-step licensing process, in which applicants would provide a completely specified design as part of the application. However, rather than simplifying the process, it appears to have added another layer of complexity. Because the design was already approved, making changes on-site (already difficult) became even harder, and any deviation was required to go through several levels of approval.
Part of this difficulty seems to have been caused by the plants permitted under the one-step process (VC Summer and Vogtle 3 and 4) using a new reactor design, the AP1000 from Westinghouse. This reactor was much simpler than previous reactors, with “60% fewer valves, 75% less piping, 80% less control cable, 35% fewer pumps and 50% less seismic building volume than usual reactor design”, and an emergency cooling system that works passively via gravity (and is thus theoretically less susceptible to loss-of-coolant accidents). The reactor was also designed to be prefabricated and installed on-site in large modules, reducing the requirements for site labor.
However, the AP1000 seems to have had significant constructability issues. The reduced footprint seems to have made everything much closer together and difficult to install - nuclear plants already tend to have constructability issues due to the amount of piping, wiring, and other services - requiring frequent design changes. There also seem to have been issues with prefabrication, with modules frequently behind schedule and out of spec, often requiring significant rework (one downside of prefabrication is that problems are more difficult to fix). And despite attempts to minimize regulatory changes, the NRC required changes to the plants’ containment structure to ensure it could survive a plane strike (a requirement instituted post-September 11th), adding more cost and delay.
It’s also been typically assumed that first-of-a-kind (FOAK) plants will be more expensive, and that re-using the same design on future projects (nth of a kind, or NOAK, plants) will reduce costs. But the Eash-Gates study showed this hasn’t occurred in the US, likely due in part to the frequently changing regulations - it doesn’t matter how standardized your design is if you end up needing to change it on every project to meet new requirements.
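The FOAK/NOAK expectation comes from the textbook experience-curve model, in which each doubling of cumulative units built multiplies unit cost by a fixed "learning rate." A sketch with a hypothetical 10% learning rate (this is the generic model, not the Eash-Gates study's own method):

```python
import math

# Textbook experience-curve model (illustrative; not taken from the
# Eash-Gates study itself): each doubling of cumulative units built
# multiplies unit cost by a fixed "learning rate" - 0.9 means each
# doubling cuts unit cost by 10%.

def noak_cost(foak_cost: float, n: int, learning_rate: float = 0.9) -> float:
    """Expected cost of the nth unit under the experience-curve model."""
    return foak_cost * n ** math.log(learning_rate, 2)

# Under a hypothetical 10% learning rate, the 8th plant would be
# expected to cost about 73% of the first. The Eash-Gates finding is
# that US nuclear showed no such decline - costs rose instead.
```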
This series will conclude next week with Part III.
These posts will always remain free, but if you find this work valuable, I encourage you to become a paid subscriber. As a paid subscriber, you’ll help support this work and also gain access to a members-only slack channel.
Construction Physics is produced in partnership with the Institute for Progress, a Washington, DC-based think tank. You can learn more about their work by visiting their website.
You can also contact me on Twitter, LinkedIn, or by email: briancpotter@gmail.com
[0] This number is somewhat confusing, as it’s calculated based on nuclear grade commodity prices and installation rates, both of which imply a substantially higher nuclear premium. A few paragraphs up Dawson quotes: “The commodity price of structural steel is 120% more than the price of non-nuclear grade steel. The installation time of nuclear grade concrete is 33% to 105% more than the installation time of non-nuclear grade concrete. The installation time of structural steel is 345% more than the installation time of non-nuclear structural steel.”
[1] From “Lessons Learned in Nuclear Construction Projects:”
“[In the US and Europe], the long intervals between nuclear power plant projects meant that the supply chain lost the knowledge and experience it had acquired and needed to re-gain proficiency in the rigorous quality management that the nuclear industry demands. Suppliers also had to reinforce their safety culture in order to meet new regulatory and utility requirements and expectations.”
[2] Confusingly, Stone and Webster had previously built a significant number of nuclear plants, and had “designed or built” 20% of the US’s electricity capacity. But the firm collapsed in 2000 after a scandal, and its assets were bought by Shaw, a Louisiana-based petrochemical company.
[3] For instance, here’s former NRC Commissioner Gregory Jaczko describing his motivations for strengthening regulations:
Before the Fukushima accident, there were many nuclear professionals in the United States and Japan who believed there would never be another significant nuclear accident. Unfortunately, they were wrong, and for a very simple reason: no one can design a safety system that will work perfectly. Reactor design is inherently unsafe because a nuclear plant’s power—if left unchecked—is sufficient to cause a massive release of radiation. So nuclear power plant accidents will happen. Not every day. Not every decade. Not predictably. But they will happen nonetheless.
The designers of nuclear facilities would not agree that accidents are inevitable. When building their safety backups, they essentially say, “Whatever you need, double or triple it.” If it takes one pump to move water during an accident, for example, then put in another pump somewhere in the plant. However, this fail-safe setup only reduces the chance of an accident; it does not eliminate it. What if a failure disables both pumps simultaneously? And what about the problems that no engineer, scientist, or safety regulator can foresee? No amount of planning can prepare a plant for every situation. Every disaster makes its own rules—and humans cannot learn them in advance. Who would have thought a tsunami would cause a nuclear disaster in Japan?
Uncertainty about when an accident will happen is exactly why the industry makes the argument for doing nothing. “Why spend billions of dollars to prevent something that might not happen for thousands of years, if at all?”
…When I realized how flawed the safety technology was—not just in Japan but at U.S. nuclear facilities—I decided I would do everything I could to fix it.