Managing the Risks of Deepwater Drilling
The Deepwater Horizon incident last April brought the possibility of catastrophic oil spills to the public's attention. The nature, magnitude, and duration of the spill clearly demonstrate the need for a systemwide change in regulatory approach to effectively address the low-probability risk of catastrophic spills. These spills matter because they cause most of the damage: spills greater than 1,000 barrels account for only 0.05 percent of all spills but nearly 80 percent of the total volume spilled.
Federal regulatory agencies, like the Federal Aviation Administration and the Nuclear Regulatory Commission, have dealt effectively with low-probability, high-consequence events through risk-based approaches. Their experiences can inform a shift away from prescriptive regulation and toward a risk-informed approach at the new Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE), the successor to the Minerals Management Service (MMS), which was reorganized following the spill.
Are these steps likely to be enough to ensure future readiness and effectiveness to contain the next deepwater spill? Our research indicates that while developments so far are positive, much more needs to be done, in terms of both government policy and industry commitments.
Estimating the Risk of Catastrophic Oil Spills
To assess the risks posed by oil spills, MMS used a model developed by the U.S. Geological Survey (USGS) for its regulatory analyses. The model involves three steps: estimating the probability of an oil spill, simulating the trajectories of spills toward critical environmental resources, and combining the results of the first two to estimate the risk from potential oil activity. Modeling deepwater spills is more difficult because of the higher pressures and colder temperatures at depth, but these complications have been addressed in the modeling and validated with field experiments.
Modeling done for the Deepwater Horizon project estimated that the most likely size of a spill greater than 1,000 barrels was only 4,600 barrels of oil, and that the maximum spill would be 26,000 barrels over the 40-year life of exploration, development, and production activity on six leases, including the one involved in the recent spill, the Macondo well. However, the low probability of a catastrophic spill was not taken into consideration; the Deepwater Horizon spill spewed almost 5 million barrels of oil.
Because the USGS model is used in many aspects of the regulatory process, any concerns about it propagate through almost all oil spill analyses, such as those done for compliance with the National Environmental Policy Act (NEPA). In many cases, it generated such low estimates that MMS could conclude there would be no significant impact of drilling on the environment and so was able to rely on broad, planning stage NEPA analyses rather than additional site-specific analyses of environmental and other impacts.
MMS maintained and used historical data on oil spill occurrences to estimate the probability of a spill, and this partly accounts for its underestimation of catastrophic risk. Using historical data for low-probability, high-consequence events can be misleading. When the risk analyses for the Macondo well were performed, no spill of the size ultimately experienced had ever been observed in the United States. But though a spill of that magnitude had not previously occurred in the Gulf, the probability of such an occurrence was not zero.
Risk modelers have developed methods to assess low-probability risks when there are not enough historical data to do so. One approach is called accident sequence precursor (ASP) analysis. Essentially, operators keep detailed records on "accident precursors," or incidents in which one or more safety technologies or behavioral processes did not work as intended. With those data, engineers can compute the conditional probability that a precursor sequence of events could have gone on to catastrophic failure. Tracking and analyzing past performance to see where the opening steps toward system failure are actually occurring is central to improving safety performance.
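The logic of ASP analysis can be sketched in a few lines of code. In this hypothetical example, all barrier names and counts are invented for illustration: each safety barrier's per-demand failure probability is estimated from precursor records, and the conditional probability that a given precursor would have escalated to catastrophe is the probability that the barriers which held would also have failed.

```python
# Hypothetical accident sequence precursor (ASP) sketch.
# Barrier names, demand counts, and failure counts are invented.

# Observed demands and failures for each safety barrier,
# compiled from precursor incident records.
barrier_records = {
    "well_control":      {"demands": 400, "failures": 8},
    "blowout_preventer": {"demands": 120, "failures": 3},
    "emergency_shutoff": {"demands": 60,  "failures": 2},
}

def failure_probability(rec):
    """Estimate a barrier's per-demand failure probability."""
    return rec["failures"] / rec["demands"]

def conditional_catastrophe_probability(barriers_already_failed, records):
    """For a precursor in which some barriers already failed, estimate
    the probability that the remaining barriers would also have failed,
    letting the sequence proceed to catastrophic failure."""
    p = 1.0
    for name, rec in records.items():
        if name not in barriers_already_failed:
            p *= failure_probability(rec)
    return p

# A precursor incident in which well control was lost but the
# blowout preventer and emergency shutoff both held:
p = conditional_catastrophe_probability({"well_control"}, barrier_records)
print(f"{p:.2e}")
```

Aggregated across many precursors, such estimates reveal which barriers the system is actually leaning on, even before any catastrophic event has occurred.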
Lessons from Elsewhere: Tolerable Risk
Many agencies within the United States and abroad use a tolerable risk framework to guide regulatory actions. Under this approach, first developed in the United Kingdom by the British Health and Safety Executive for regulating nuclear power plants, risks are divided into three categories, as shown in the figure on page 38.
Unacceptable risks are those that are not allowed and, when identified, must be reduced. Acceptable risks are those that are sufficiently small that further risk-reducing actions are not necessary. Tolerable risks occupy the middle ground: for these risks, actions should be undertaken to reduce the risk to levels that are as low as reasonably practicable. The tolerable risk approach requires that quantitative thresholds be set to demarcate acceptable, tolerable, and unacceptable risks.
Different agencies have different methods for making this determination. As might be expected, thresholds will differ depending on whether the risk is for individuals, society, or project failure. They will also vary depending on the consequences. For example, thresholds for a risk of environmental harm will likely be different from those associated with human health.
Once thresholds are set, regulators also need to decide what methods will be used to judge whether a tolerable risk is as low as reasonably practicable. One approach is cost–benefit analysis, in which risk-reduction benefits are monetized and weighed against costs. Others include minimizing the worst possible outcome, maximizing risk mitigation within a preset budget constraint, using multicriteria decision-analysis tools, or relying on assessments of the best available technology for reducing risks. Whatever approach is chosen, the analysis should be redone periodically to account for changes in technology or in the risk estimates.
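The cost–benefit variant of this test can be sketched as follows. All figures and the disproportion factor here are invented for illustration: a mitigation is treated as reasonably practicable unless its cost exceeds the monetized risk-reduction benefit by more than that factor.

```python
# Hypothetical ALARP cost-benefit sketch; all figures invented.

def worth_implementing(risk_reduction, value_per_unit_risk, cost,
                       disproportion_factor=3.0):
    """Return True if a mitigation should be implemented under a
    cost-benefit test: its cost must not exceed the monetized
    risk-reduction benefit by more than the disproportion factor."""
    benefit = risk_reduction * value_per_unit_risk
    return cost <= disproportion_factor * benefit

# Example: a mitigation expected to avert 10 units of risk,
# valued at $1,000 per unit, at a cost of $25,000:
print(worth_implementing(10, 1000, 25_000))  # True
```

Whatever method is chosen, the same thresholds and factors would need to be published in advance so that operators and regulators apply a consistent test.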
A tolerable risk approach with quantitative risk targets is quite different from the traditional, prescriptive approach taken by MMS, which specified technologies and practices. A prescriptive approach to risk management can be problematic:
- Regulations may lag behind the newest and safest equipment and practices.
- Regulations may not cover all behaviors that influence safety.
- Regulators bear a large burden for inspecting facilities to affirm safety.
A tolerable risk framework would avoid these challenges and more closely mirror what other countries have adopted to regulate their oil drilling. Norway, for example, places responsibility on the operator to identify risks and then develop controls, mitigation strategies, and systems to reduce risks to predefined thresholds. Sensibly, equipment and systems determined to be highly important to safety are regulated more closely than others. In the United Kingdom, each operator must develop a safety case that first identifies risks on a systemwide basis, including both technical and procedural or human-behavioral risks, and then recommends a strategy to reduce them to specified thresholds.
Catastrophic spills, while of very low probability, are responsible for the vast majority of damage from oil spills. The Deepwater Horizon spill demonstrated that oil spill regulation in the United States had failed to fully consider such events. Without analysis of their likelihood and the damage they would cause, there was no way to ascertain whether enough was being done to reduce this threat. BOEMRE and the industry as a whole could benefit from improved risk management in several ways.