Updated: 21/12/2024
Speaking at a press conference soon after the accident began, the UK government's former chief scientific adviser, Sir David King, reassured journalists that the natural disaster that precipitated the failure had been "an extremely unlikely event".
In doing so, he exemplified the many early accounts of Fukushima that emphasised the improbable nature of the earthquake and tsunami that caused it.
A range of professional bodies made analogous claims around this time, with journalists following their lead. This lamentation, by a consultant writing in the New American, is illustrative of the general tone:
” … the Fukushima ‘disaster’ will become the rallying cry against nuclear power. Few will remember that the plant stayed generally intact despite being hit by an earthquake with more than six times the energy the plant was designed to withstand, plus a tsunami estimated at 49 feet that swept away backup generators 33 feet above sea level.”
The explicit or implicit argument in all such accounts is that Fukushima's proximate causes are so rare as to be almost irrelevant to nuclear plants in the future. Nuclear power is safe, they suggest, except against the specific kind of natural disaster that struck Japan, which is both a specifically Japanese problem and one that is unlikely to re-occur, anywhere, in any realistic timeframe.
An appealing but tenuous logic
The logic of this is tenuous on various levels. The ‘improbability’ of the natural disaster is disputable, for one, as there were good reasons to believe that neither the earthquake nor the tsunami should have been surprising. The area was well known to be seismically active after all, and the quake, when it came, was only the fourth largest of the last century.
The Japanese nuclear industry had even confronted its seismic under-preparedness four years earlier, on 16 July 2007, when an earthquake of unanticipated magnitude damaged the Kashiwazaki-Kariwa nuclear plant.
This had led several analysts to highlight the vulnerability of Japan's nuclear plants to earthquakes, but officials had offered much the same reassurances then as they now offered in relation to Fukushima. The tsunami was not without precedent either.
Geologists had long known that a similar event had occurred in the same area in July 869. This was a long time ago, certainly, but the data indicated a thousand-year return cycle.
Several reports, meanwhile, have suggested that the earthquake alone might have precipitated the meltdown, even without the tsunami – a view supported by a range of evidence, from worker testimony, to radiation alarms that sounded before the tsunami. Haruki Madarame, the head of Japan’s Nuclear Safety Commission, has criticised Fukushima’s operator, TEPCO, for denying that it could have anticipated the flood.
The claim that Japan is ‘uniquely vulnerable’ to such hazards is similarly disputable. In July 2011, for instance, the Wall Street Journal reported on private NRC emails showing that the industry and its regulators had evidence that many US reactors were at risk from earthquakes that had not been anticipated in their design.
It noted that the regulator had taken little or no action to accommodate this new understanding. As if to underline the concern, on 23 August 2011, less than six months after Fukushima, the North Anna nuclear plant in Mineral, Virginia, was rocked by an earthquake that exceeded its design-basis predictions.
Every accident is ‘unique’ – just like the next one
There is, moreover, a larger and more fundamental reason to doubt the ‘unique events or vulnerabilities’ narrative, which lies in recognising its implicit assertion that nuclear plants are safe against everything except the events that struck Japan.
It is important to understand that those who assert that nuclear power is safe because the 2011 earthquake and tsunami will not re-occur are, essentially, saying that although the industry failed to anticipate those events, it has anticipated all the others.
Yet even a moment's reflection reveals that this is highly unlikely. It supposes that experts can be sure they have comprehensively predicted all the challenges that a nuclear plant will face over its lifetime (or, in engineering parlance: that the 'design basis' of every nuclear plant is correct) – even though a significant number of technological disasters, including Fukushima, have resulted, at least in part, from conditions that engineers failed even to consider.
As Sagan points out: “things that have never happened before, happen all the time”. The terrorist attacks of 9/11 are perhaps the most iconic illustration of this dilemma but there are many others.
Perrow (2007) painstakingly explores a landscape of potential disaster scenarios that authorities do not formally recognise, but it is highly unlikely that he has considered them all.
More are hypothesised all the time. For instance, researchers have recently speculated about the effects of massive solar storms of the kind that, in pre-nuclear times, caused electrical systems across North America and Europe to fail for weeks at a time.
Human failings that are unrepresentative and/or correctable
A second rationale that accounts of Fukushima invoke to establish that accidents will not re-occur focuses on the people who operated or regulated the plant, and the institutional culture in which they worked. Observers who opt to view the accident through this lens invariably construe it as the result of human failings – either error, malfeasance or both.
The majority of such narratives relate the failings they identify directly to Fukushima’s specific regulatory or operational context, thereby portraying it as a ‘Japanese’ rather than a ‘nuclear’ accident.
Many, for instance, stress distinctions between US and Japanese regulators; often pointing out that the Japanese nuclear regulator (NISA) was subordinate to the Ministry of Economy, Trade and Industry, and arguing that this created a conflict of interest between NISA's responsibility for safety and the Ministry's responsibility to promote nuclear energy.
They point, for instance, to the fact that NISA had recently been criticised by the International Atomic Energy Agency (IAEA) for a lack of independence, in a report occasioned by earthquake damage at another plant. Or to evidence that NISA declined to implement new IAEA standards out of fear that they would undermine public trust in the nuclear industry.
Other accounts point to TEPCO, the operator of the plant, and find it to be distinctively "negligent". A common assertion in this vein, for instance, is that it concealed a series of regulatory breaches over the years, including data about cracks in critical circulation pipes that were implicated in the catastrophe.
There are two subtexts to these accounts. Firstly, that such an accident will not happen here (wherever 'here' may be) because 'our' regulators and operators 'follow the rules'. And secondly, that these failings can be remedied so that similar accidents will not re-occur, even in Japan.
Where accounts of the human failings around Fukushima do portray those failings as being characteristic of the industry beyond Japan, the majority still construe those failings as eradicable.
In March 2012, for instance, the Carnegie Endowment for International Peace issued a report that highlighted a series of organisational failings associated with Fukushima, not all of which they considered to be meaningfully Japanese.
Nevertheless, the report – entitled ‘Why Fukushima was preventable’ – argued that such failings could be resolved. “In the final analysis”, it concluded, “the Fukushima accident does not reveal a previously unknown fatal flaw associated with nuclear power.”
The same message echoes in the many post-Fukushima actions and pronouncements of nuclear authorities around the world promising managerial reviews and reforms, such as the IAEA’s hastily announced ‘five-point plan’ to strengthen reactor oversight.
Myths of exceptionality
As with the previous narratives about exogenous hazards, however, the logic of these 'human failure' arguments is also tenuous. Despite the editorial consternation that revelations about Japanese malfeasance and mistakes have inspired, for instance, there are good reasons to believe that neither was exceptional.
It would be difficult to deny that Japan had a first-class reputation for managing complex engineering infrastructures, for instance. As the title of one op-ed in the Washington Post puts it: “If the competent and technologically brilliant Japanese can’t build a completely safe reactor, who can?”
Reports of Japanese management failings must be considered in relation to the fact that reports of regulatory shortcomings, operator error, and corporate malfeasance abound in every state with nuclear power and a free press.
There also exists a long tradition of accident investigations attributing blame to distinctive national safety practices – attributions that are later rejected on further scrutiny.
When Western experts blamed Chernobyl on the practices of the Soviet nuclear industry, for example, they unconsciously echoed Soviet narratives that had highlighted the inferiority of Western safety cultures to argue that an accident like Three Mile Island could never happen in the USSR.
Arguments suggesting that ‘human’ problems are potentially solvable are similarly difficult to sustain, for there are compelling reasons to believe that operational errors are an inherent property of all complex socio-technical systems.
Close accounts of even routine technological work, for instance, routinely find it to be necessarily and unavoidably ‘messier’ in practice than it appears on paper.
These studies undermine the notion of 'perfect rule compliance' by showing that even the most expansive stipulations sometimes require interpretation and do not relieve workers of having to make decisions in uncertain conditions.
Both human error and non-compliance are thus ambiguous concepts. As Wynne (1988: 154) observes: "… the illegitimate extension of technological rules and practices into the unsafe or irresponsible is never clearly definable, though there is ex-post pressure to do so." The culturally satisfying nature of 'malfeasance explanations' should, by itself, be cause for circumspection.
In this context we should further recognise that accounts that show Fukushima, specifically, was preventable are not evidence that nuclear accidents, in general, are preventable.
To argue from analogy: it is true to say that any specific crime might have been avoided (otherwise it wouldn’t be a crime), but we would never deduce from this that crime, the phenomenon, is eradicable. Human failure will always be present in the nuclear sphere at some level, as it is in all complex socio-technical systems.
And, relative to the reliability demanded of nuclear plants, it is safe to assume that this level will always be too high or, at least, that our certainty regarding it will be too low. While human failures and malfeasance are undoubtedly worth exploring, understanding and combating, therefore, we should avoid the conclusion that they can be ‘solved’.
Plant design is unrepresentative and/or correctable
Parallel to narratives about Fukushima’s circumstances and operation, outlined above, are narratives that emphasise the plant itself.
These limit the relevance of the accident to the wider nuclear industry by arguing that the design of its reactor (a GE Mark-1) was unrepresentative of most other reactors, while simultaneously promising that any reactors that were similar enough to be dangerous could be rendered safe by 'correcting' their design.
Accounts in this vein frequently highlight the plant’s age, pointing out that reactor designs have changed over time, presumably becoming safer. A UK civil servant exemplified this narrative, and the strategic decision to foreground it, in an internal email (later printed in the Guardian [2011]), in which he asserted that
“We [The Department of Business, Innovation and Skills] need to … show that events in Japan, whilst looking dramatic, are all part of the safety processes of this 1960’s reactor.”
Stressing the age of the reactor in this way became a mainstay of Fukushima discourse in the disaster’s immediate aftermath. Guardian columnist George Monbiot (2011b), for instance, described Fukushima as “a crappy old plant with inadequate safety features”.
He concluded that its failure should not speak to the integrity of later designs, like that of the neighbouring plant, Fukushima 'Daini', which did not fail in the tsunami. "Using a plant built 40 years ago to argue against 21st-century power stations", he wrote, "is like using the Hindenburg disaster to contend that modern air travel is unsafe."
Other accounts highlighted the reactor’s design but focused on more generalisable failings, such as the “insufficient defense-in-depth provisions for tsunami hazards” (IAEA 2011a: 13), which could not be construed as indigenous only to the Mark-1 reactors or their generation.
The implication – we can and will fix all these problems
These failings could be corrected, however, or such was the implication. The American Nuclear Society set the tone, soon after the accident, when it reassured the world that: “the nuclear power industry will learn from this event, and redesign our facilities as needed to make them safer in the future.”
Almost every official body with responsibility for nuclear power followed in its wake. The IAEA, for instance, orchestrated a series of rolling investigations, which eventually culminated in the announcement of its 'Action Plan on Nuclear Safety' and a succession of subsequent meetings where representatives of different technical groups could pool their analyses and make technical recommendations.
The groups invariably concluded that "many lessons remain to be learned" and recommended further study and future meetings. Again, however, there is ample cause for scepticism.
Firstly, there are many reasons to doubt that Fukushima’s specific design or generation made it exceptionally vulnerable. As noted above, for instance, many of the specific design failings identified after the disaster – such as the inadequate water protection around reserve power supplies – were broadly applicable across reactor designs.
And even if the reactor design or its generation were exceptional in some ways, that exceptionalism is decidedly limited. There are currently 32 Mark-1 reactors in operation around the world, and many others of a similar age and generation, especially in the US, where every reactor currently in operation was commissioned before the Three Mile Island accident in 1979.
Secondly, there is little reason to believe that most existing plants could be retrofitted to meet all Fukushima’s lessons. Significantly raising the seismic resilience of a nuclear plant, for instance, implies such extensive design changes that it might be more practical to decommission the entire structure and rebuild from scratch.
This perhaps explains why progress on the technical recommendations has been halting. It might be true that different, or more modern, reactors are safer, therefore, but these are not the reactors we have.
In March 2012, the NRC did announce some new standards pertaining to power outages and fuel pools – issuing three ‘immediately effective’ orders requiring operators to implement some of the more urgent recommendations. The required modifications were relatively modest, however, and ‘immediately’ in this instance meant ‘by December 31st 2016’.
Meanwhile, the approvals that the NRC granted for four new reactors around this time contained no binding commitment to implement the wider lessons it derived from Fukushima. In each case, the increasingly marginalised NRC chairman, Gregory Jaczko, cast a lone dissenting vote. He was also the only commissioner to object to the 2016 timeline.
Complex systems’ ability to keep on surprising
Finally, and most fundamentally, there are many a priori reasons to doubt that any reactor design could be as safe as risk analyses suggest. Observers of complex systems have outlined strong arguments for why critical technologies are inevitably prone to some degree of failure, whatever their design.
The most prominent such argument is Perrow’s Normal Accident Theory (NAT), with its simple but profound probabilistic insight that accidents caused by very improbable confluences of events (that no risk calculation could ever anticipate) are ‘normal’ in systems where there are many opportunities for them to occur.
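A toy calculation helps convey the force of this insight. The numbers below are purely illustrative assumptions, not figures drawn from any risk assessment: the point is only that a vast number of individually negligible scenarios can add up to a non-negligible chance of disaster.

```python
# Illustrative sketch of the Normal Accident intuition: even if each specific
# 'fateful coincidence' is vanishingly unlikely, a system exposed to enough of
# them for long enough faces a substantial chance that at least one occurs.

p_single = 1e-7           # assumed yearly probability of any one specific confluence of events
n_combinations = 100_000  # assumed number of distinct possible confluences
years = 40                # assumed operating lifetime of the plant

# Probability that not a single one of these combinations occurs over the lifetime
p_none = (1 - p_single) ** (n_combinations * years)

print(f"Chance of at least one such accident over the lifetime: {1 - p_none:.0%}")
# With these made-up inputs the answer is roughly 33% -- the improbability of
# each individual scenario offers little comfort in aggregate.
```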
From this perspective, the ‘we-found-the-flaw-and-fixed-it’ argument is implausible because it offers no way of knowing how many ‘fateful coincidences’ the future might hold.
‘Lesson 1’ of the IAEA’s preliminary report on Fukushima is that the ” … design of nuclear plants should include sufficient protection against infrequent and complex combinations of external events.”
NAT explains why an irreducible number of these ‘complex combinations’ must be forever beyond the reach of formal analysis and managerial control.
A different way of demonstrating much the same conclusion is to point to the fundamental epistemological ambiguity of technological knowledge, and to how the significance of this ambiguity is magnified in complex, safety-critical systems due to the very high levels of certainty these systems require.
Judgements become more significant in this context because they have to be absolutely correct. There is no room for error bars in such calculations. It makes little sense to say that we are 99% certain a reactor will not explode, but only 50% sure that this number is correct.
Perfect safety can never be guaranteed
Viewed from this perspective, it becomes apparent that complex systems are likely to be prone to failures arising from erroneous beliefs that are impossible to predict in advance, which I have elsewhere called ‘Epistemic Accidents’.
This is essentially to say that the ‘we-found-the-flaw-and-fixed-it’ argument cannot guarantee perfect safety because it offers no way of knowing how many new ‘lessons’ the future might hold.
Just as it is impossible for engineers and regulators to know for certain that they have anticipated every external event a nuclear plant might face, so it is impossible for them to know that their understanding of the system itself is completely accurate.
Increased safety margins, redundancy and defence-in-depth might well improve reactor safety, but no amount of engineering wizardry can offer perfect safety, or even safety that is 'knowably' of the level that nuclear plants require. As Gusterson (2011) puts it: "… the perfectly safe reactor is always just around the corner."
Nuclear authorities sometimes concede this. After one of the IAEA's 2012 meetings to pool insights from the disaster, for instance, the meeting's chairman, Richard Meserve, summarised: "In the nuclear business you can never say, 'the task is done'."
Instead they promise improvement. “The Three Mile Island and Chernobyl accidents brought about an overall strengthening of the safety system”, Meserve continued. “It is already apparent that the Fukushima accident will have a similar effect.”
The real question, however, is when the safety will be strong enough. If it was not strong enough after Three Mile Island or Chernobyl, why should Fukushima be any different?
The reliability myth
This is all to say, in essence, that it is misleading to assert that an accident of Fukushima’s scale will not re-occur. For there are credible reasons to believe that the reliability required of reactors is not calculable, and there are credible reasons to believe that the actual reliability of reactors is much lower than is officially calculated.
These limitations are clearly evinced by the actual historical failure rate of nuclear reactors. Even the most rudimentary calculations show that civil nuclear accidents have occurred far more frequently than official reliability assessments have predicted.
The exact numbers vary, depending on how one classifies ‘an accident’ (whether Fukushima counts as one meltdown or three, for example), but Ramana (2011) puts the historical rate of serious meltdowns at 1 in every 3,000 reactor years, while Taebi et al. (2012: 203fn) put it at somewhere between 1 in every 1,300 to 3,600 reactor years.
Either way, the implied reliability is orders of magnitude lower than assessments claim.
In a recent declaration to a UK regulator, for instance, Areva, a prominent French nuclear manufacturer, invoked probabilistic calculations to assert that the likelihood of a "core damage incident" in its new 'EPR' reactor was of the order of one incident, per reactor, every 1.6 million years (Ramana 2011).
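A back-of-the-envelope comparison of the figures quoted above illustrates the size of this gap (a minimal sketch; the classification caveats discussed above still apply, and the Areva figure concerns a newer design than most of the reactors in the historical record):

```python
import math

# Historical rate of serious meltdowns vs a representative claimed reliability.
observed_rate = 1 / 3_000      # Ramana (2011): ~1 serious meltdown per 3,000 reactor-years
claimed_rate = 1 / 1_600_000   # Areva's stated EPR core-damage frequency, per reactor-year

ratio = observed_rate / claimed_rate   # ~533
gap = math.log10(ratio)                # ~2.7

print(f"The historical rate exceeds the claimed rate by a factor of ~{ratio:.0f},")
print(f"a gap of roughly {gap:.1f} orders of magnitude.")
```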
Two: the accident was tolerable
The second basic narrative through which accounts of Fukushima have kept the accident from undermining the wider nuclear industry rests on the claim that its effects were tolerable – that even though the costs of nuclear accidents might look high, when amortised over time they are acceptable relative to the alternatives.
The ‘accidents are tolerable’ argument is invariably framed in relation to the health effects of nuclear accidents. “As far as we know, not one person has died from radiation”, Sir David King told a press conference in relation to Fukushima, neatly expressing a sentiment that would be echoed in editorials around the world in the aftermath of the accident.
“Atomic energy has just been subjected to one of the harshest of possible tests, and the impact on people and the planet has been small”, concluded Monbiot in one characteristic column.
"History suggests that nuclear power rarely kills and causes little illness", the Washington Post reassured its readers (Brown 2011); see also, for example, McCulloch (2011) and Harvey (2011). "Fukushima's Refugees Are Victims Of Irrational Fear, Not Radiation", declared the title of an article in Forbes (Conca 2012).
In its more sophisticated forms, this argument draws on comparisons with other energy alternatives. A 2004 study by the American Lung Association argues that coal-fired power plants shorten the lives of 24,000 people every year.
Chernobyl, widely considered to be the most poisonous nuclear disaster to date, is routinely thought to be responsible for around 4,000 past or future deaths.
Even if the effects of Fukushima are comparable (which the majority of experts insist they are not), then by these statistics the human costs of nuclear energy seem almost negligible, even when accounting for its periodic failures.
Such numbers are highly contestable, however: partly because there are many more coal than nuclear plants (a fairer comparison might consider deaths per kilowatt-hour generated), but mostly because calculations of the health effects of nuclear accidents are fundamentally ambiguous.
Chronic radiological harm can manifest in a wide range of maladies, none of which are clearly distinguishable as being radiologically induced – they have to be distinguished statistically – and all of which have a long latency, sometimes of decades or even generations.
How many died? It all depends …
So it is that mortality estimates about nuclear accidents inevitably depend on an array of complex assumptions and judgments that allow for radically divergent – but equally ‘scientific’ – interpretations of the same data. Some claims are more compelling than others, of course, but ‘truth’ in this realm does not ‘shine by its own lights’ as we invariably suppose it ought.
Take, for example, the various studies of Chernobyl’s mortality, from which estimates of Fukushima’s are derived. The models underlying these studies are themselves derived from data from Hiroshima and Nagasaki survivors, the accuracy and relevance of which have been widely criticised, and they require the modeller to make a range of choices with no obviously correct answer.
Modellers must select between competing theories of how radiation affects the human body, for instance; between widely varying judgments about the amount of radioactive material the accident released; and much more. Such choices are closely interlinked and mutually dependent.
Estimates of the composition and quantities of the isotopes released in the accident, for example, will affect models of their distribution, which, in conjunction with theories of how radiation affects the human body, will affect conclusions about the specific populations at risk.
This, in turn, will affect whether a broad spike in mortality should be interpreted as evidence of radiological harm or as evidence that many seemingly radiation-related deaths are actually symptomatic of something else. And so on ad infinitum: a dynamic tapestry of theory and justification, where subtle judgements reverberate throughout the system.
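A deliberately crude sketch shows how such chained judgements compound. All of the inputs below are hypothetical placeholders rather than estimates for Chernobyl or Fukushima, and the linear 'dose times risk coefficient' structure is used only for illustration:

```python
# Toy mortality model: excess deaths = release x collective dose per unit release
# x risk per unit dose. Each factor is a contested expert judgement; modest
# disagreements over each one multiply into a large disagreement overall.

def excess_deaths(release_pbq, person_sv_per_pbq, deaths_per_person_sv):
    collective_dose = release_pbq * person_sv_per_pbq   # person-sieverts
    return collective_dose * deaths_per_person_sv

# A 'low' set of judgements (hypothetical inputs)
low = excess_deaths(release_pbq=50, person_sv_per_pbq=100, deaths_per_person_sv=0.03)

# A 'high' set of judgements (hypothetical inputs, each only 3-5x larger)
high = excess_deaths(release_pbq=150, person_sv_per_pbq=500, deaths_per_person_sv=0.10)

print(f"Low estimate:  {low:,.0f} excess deaths")    # 150
print(f"High estimate: {high:,.0f} excess deaths")   # 7,500
# Three defensible-looking choices, each differing by a small factor,
# yield bottom-line figures that differ by a factor of 50.
```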
The net result is that quiet judgements concerning the underlying assumptions of an assessment – usually made in the very earliest stages of a study and all but invisible to most observers – have dramatic effects on its findings. These effects are visible in the widely divergent assertions made about Chernobyl's death toll.
The 'orthodox' mortality figure cited above – no more than 4,000 deaths – comes from the 2005 IAEA-led 'Chernobyl Forum' report. Or rather, from the heavily bowdlerised press release from the IAEA that accompanied its executive summary. The actual health section of the report alludes to much higher numbers.
Yet the ‘4,000 deaths’ number is endorsed and cited by most international nuclear authorities, although it stands in stark contrast to the findings of similar investigations.
Two reports published the following year, for example, offer much higher figures: one estimating 30,000 to 60,000 cancer deaths (Fairlie & Sumner 2006); the other 200,000 or more (Greenpeace 2006: 10).
In 2009, meanwhile, the New York Academy of Sciences published an extremely substantial Russian report by Yablokov that raised the toll even further, concluding that, in the years up to 2004, Chernobyl had caused around 985,000 premature deaths worldwide.
Between these two figures – 4,000 and 985,000 – lie a host of other expert estimations of Chernobyl’s mortality, many of them seemingly rigorous and authoritative. The Greenpeace report tabulates some of the varying estimates and correlates them to differing methodologies.
Science? Or propaganda?
Different sides in this contest of numbers routinely assume their rivals are actively attempting to mislead – a wide range of critics argue that most official accounts are authored by industry apologists who 'launder' nuclear catastrophes by dicing evidence of their human fallout into an anodyne mêlée of claims and counter-claims.
When John Gofman, a former University of California Berkeley Professor of Medical Physics, wrote that the Department of Energy was "conducting a Josef Goebbels propaganda war" by advocating a conservative model of radiation damage, for instance, his charge was more remarkable for its candour than its substance.
And there is certainly some evidence for this. There can be little doubt that in the past the US government has intentionally clouded the science of radiation hazards to assuage public concerns. The 1995 US Advisory Committee on Human Radiation Experiments, for instance, concluded that Cold War radiation research was heavily sanitised for political ends.
A former commissioner of the AEC (the NRC's predecessor) testified in the early 1990s that: "One result of the regulators' professional identification with the owners and operators of the plants in the battles over nuclear energy was a tendency to try to control information to disadvantage the anti-nuclear side." Rather than assuming deliberate deception, however, it is perhaps more useful to say that each side is discriminating about the realities to which it adheres.
In this realm there are no entirely objective facts, and with so many judgements it is easy to imagine how even small, almost invisible biases might shape the findings of seemingly objective hazard calculations.
Indeed, many of the judgements that separate divergent nuclear hazard calculations are inherently political, with the result that there can be no such thing as an entirely neutral account of nuclear harm.
Researchers must decide whether a 'stillbirth' counts as a 'fatality', for instance. They must decide whether an assessment should emphasise deaths exclusively, or if it should encompass all the injuries, illnesses, deformities and disabilities that have been linked to radiation. They must decide whether a life 'shortened' constitutes a life 'lost'.
There are no correct answers to such questions. More data will not resolve them. Researchers simply have to make choices. The net effect is that the hazards of any nuclear disaster can only be glimpsed obliquely through a distorted lens.
So much ambiguity and judgement is buried in even the most rigorous calculations of Fukushima’s health impacts that no study can be definitive. All that remains are impressions and, for the critical observer, a vertiginous sense of possibility.
Estimating the costs – how many $100s of billions?
The only thing to be said for sure is that declarative assurances of Fukushima's low death toll are misleading in their certainty. Given the intense fact-figure crossfire around radiological mortality, it is unhelpful to view Fukushima purely through the lens of health.
In fact, the emphasis on mortality might itself be considered a way of minimising Fukushima, considering that there are other – far less ambiguous – lenses through which to view the disaster’s consequences.
Fukushima’s health effects are contested enough that they can be interpreted in ways that make the accident look tolerable, but it is much more challenging to make a case that it was tolerable in other terms.
Take, for example, the disaster’s economic impact. The intense focus on the health and safety effects of Fukushima has all but eclipsed its financial consequences, yet the latter are arguably more significant and are certainly less ambiguous.
Nuclear accidents incur a vast spectrum of costs. There are direct costs relating to the need to seal off the reactor; study, monitor and mitigate its environmental fallout; resettle, compensate and treat the people in danger; and so forth.
Over a quarter of a century after Chernobyl, the accident still haunts Western Europe, where government scientists in several countries continue to monitor certain meats, and keep some from entering the food chain.
Then there is an array of indirect costs that arise from externalities, such as the loss of assets like farmland and industrial facilities; the loss of energy from the plant and those around it; the impact of the accident on tourism; and so forth. The exact economic impact of a nuclear accident is almost as difficult to estimate as its mortality, and projections differ for the same fundamental reasons.
They do not differ to the same degree as mortality estimates, however, and in contrast to Fukushima's mortality there is little contention that its financial costs will be enormous. The evacuation zone around the plant – an area of around 966 sq km, much of which will be uninhabitable for generations – covers 3% of Japan, a densely populated and mountainous country where only 20% of the land is habitable in the first place. By November 2013, the Japanese government had already allocated over 8 trillion yen (roughly $80 billion or £47 billion) to Fukushima's clean-up alone – a figure that excluded the cost of decommissioning the six reactors, a process expected to take decades and cost tens of billions of dollars.
Independent experts have estimated the clean-up cost to be in the region of $500 billion (Gunderson & Caldicott 2012). These estimates, moreover, exclude most of the indirect costs outlined above, such as the disaster's costs to the food and agriculture industries, which the Japanese Ministry of Agriculture, Forestry and Fisheries (MAFF) has estimated at 2,384.1 billion yen (roughly $24 billion).
Of these competing estimates, the higher numbers seem more plausible. The notoriously conservative report of the Chernobyl Forum estimated that the cost of that accident had already mounted to “hundreds of billions of dollars” after just 20 years, and it seems unlikely that Fukushima’s three meltdowns could cost less.
Even if we assume that Chernobyl was more hazardous than Fukushima (a common conviction that is incrementally becoming more tenuous), it remains true that the same report projected the 30-year cost to Belarus alone to be US$235 billion, and that Belarus's lost opportunities, compensation payments and clean-up expenditures are unlikely to rival Japan's, considering, for instance, Japan's much higher cost of living, its indisputable loss of six reactors, its decision to at least shutter the remainder of its nuclear plants, and many other factors. The Chernobyl reactor did not even belong to Belarus – it is in what is now Ukraine.
The nuclear disaster liability swindle
To put these figures into perspective, consider that nuclear utilities in the US are required to create an industry-wide insurance pool of only about $12 billion for accident relief, and are protected against further losses by the Price-Anderson Act, by which the US Congress has socialised the costs of any nuclear disaster.
The nuclear industry needs such extraordinary government protection in the US, as it does in all countries, because – for all the authoritative, blue-ribbon risk assessments demonstrating its safety – the reactor business, almost uniquely, is unable to secure private insurance.
The industry's unique dependence on limited liability reflects the fact that no economic justification for atomic power could concede the possibility of major accidents like Fukushima and remain viable or competitive.
As Mark Cooper, the author of a 2012 report on the economics of nuclear disaster, has put it:
“If the owners and operators of nuclear reactors had to face the full liability of a Fukushima-style nuclear accident or go head-to-head with alternatives in a truly competitive marketplace, unfettered by subsidies, no one would have built a nuclear reactor in the past, no one would build one today, and anyone who owns a reactor would exit the nuclear business as quickly as possible.”
John Downer works at the Global Insecurities Centre, School of Sociology, Politics and International Studies, Bristol University.
This article is an extract from 'In the shadow of Tomioka – on the institutional invisibility of nuclear disaster', published by the Centre for Analysis of Risk and Regulation at the London School of Economics and Political Science.
This version has been edited to include important footnote material in the main text, and to exclude most references. Scholars, scientists and researchers should please refer to the original publication.