Monthly Archives: December 2014

Death by landfill – cutting ‘green tape’ costs lives





I’ve been a professional ‘environmental investigator’ for over 22 years now. Over that time I’ve seen some awful offences against the environment. I’ve also witnessed some inspiring action from the individuals and communities affected.

After seeing so many outrageous cases it’s easy to become desensitised to the more everyday environmental offences – even if they are, of themselves, dire to those involved.

But every now and again you come across something that jerks you back to stark reality – something that touches a raw nerve.

I spent the 1990s working as an ‘eco-troubleshooter for hire’ across Britain. For the last decade or so, tired of seeing the same problems coming around again and again, I’ve become more strategic – trying, proactively, to deal with issues before they become an offence to human health and environment. For example, I was apparently the first person touring the UK talking about fracking in 2009 / 2010.

I’ve seen all sorts of ‘nastiness’ – from the dodgy waste reclamation plants of the Black Country, to the chemical plants of Teesside, to the landfills of South Wales.

The point at which I decided to stop chasing tipper lorries, and instead proactively identify ‘the next big issue’, was after fighting Newcastle City Council in 1999/2000.

They had, as a method of ‘recycling’, dumped highly toxic incinerator ash on public parks and allotments across Newcastle – only for the council to get a slap on the wrist in court and, politically, for the matter to be brushed under the carpet.

A case from the ‘book of horrors’

1991–2003 was a period of my career that I look back upon with both fond and troubling memories. And a few weeks ago it came back to haunt me with a vengeance. After speaking about fracking in Guildford I met a couple whose case was right out of my old ‘book of horrors’ from the 1990s.

During February of this year the news was dominated by the flooding along the Thames Valley. Amidst the general mayhem there was one tragedy which has received little public attention.

In the early hours of 8th February 2014, Kye Gbangbola, his seven-year-old son Zane, and Zane’s mother Nicole were all taken ill at their home in Thameside, Surrey. An ambulance was called and they were taken to hospital. Both Kye and Zane had suffered cardiac arrest. Zane died later in hospital. Kye remains paralysed from the waist down.

Kye and Nicole came to my talk in Guildford and told me of their campaign to find the truth of what happened that day. Surrey Fire and Rescue Service attended and found hydrogen cyanide. Medical tests also showed the presence of cyanide in the family’s blood.

Ten months later the case has not been resolved: no date for an inquest; no death certificate; no resolution to the family’s plight.

What is the possible source of cyanide from flooding?

Just to the north of their home was a former gravel pit which, some years ago, had been used as a waste dump. Before waste licensing came in at the end of the 1970s, waste dumping was pretty much uncontrolled. Former gravel and brick pits around the periphery of London were used extensively to get rid of the capital’s waste.

And the source of the waste? No one knows. If, for example, the site had been filled with what was innocently described as ‘construction waste’, and if that material had come from a former gasworks in London or elsewhere, it could contain high levels of cyanide.

These things happened in the 1970s and 1980s. For example, in 1992 I discovered that the UK Atomic Energy Authority’s Harwell Laboratory, Britain’s premier nuclear research agency, had for years been secretly dumping waste chemical flasks and radioactive waste transport containers in a gravel pit on the edge of an Oxfordshire village.

In April 2014 Surrey County Council, the waste disposal authority for the area, denied that the site had been landfilled. If you go to the Environment Agency’s web site, you can see that the site is classed as an ‘historic landfill’ – that is, pre-dating the controls brought in during the late 1970s.

As flood-waters rose in February, the landfilled material is presumed to have become saturated. As the result of either chemical reactions, or the displacement of toxic gases, or both, the groundwater which filled the cellar is presumed to have carried the toxic gas into the house, overcoming the family.

A trip down eco-memory lane

What I’ve found so troubling about this case is that, for twenty years, this has been a tragedy waiting to happen. To explain why, I need you to take a trip down eco-memory lane.

When I started work professionally in 1992, the first thing I did was to write a series of reports on the issue of contaminated land. During my ‘voluntary period’ (1984-1991) I’d come across the issue a number of times.

From closed landfill sites, to old gasworks, it was a serious problem – and one which I believed could form the basis of a viable business as a full-time ‘environmental investigator’.

The Department of the Environment’s (DoE) Interdepartmental Committee on the Redevelopment of Contaminated Land (ICRCL) had produced a number of documents in the early 1980s setting out best practice for the redevelopment of contaminated land.

In 1985 the Royal Commission on Environmental Pollution’s Eleventh Report highlighted the problems too. This led to research being commissioned, and eventually the issuing of a DoE / Welsh Office circular explaining the procedures and best practice in the redevelopment of contaminated land. The ICRCL also revised some of their previous notes to reflect this.

In 1989 the Government decided to put all this new research and best practice into a formal, legally enforceable regulatory framework, which was inserted into the new Environmental Protection Bill – eventually becoming Part II of the Environmental Protection Act 1990.

Then the development industry went absolutely berserk!

Shortly after the new Act was approved, developers and landowners, fearful that their assets would be effectively worthless if they had to clean up historic contamination, brought brickbats to bear on politicians and regulators. What particularly stirred their wrath were two specific sections of the new Act:

Section 61, which required local waste regulation authorities to map all the former landfill sites in their area. The rationale was simple: what is the value of bringing in new regulations to control the hazards from current and new landfill sites if you don’t also police the condition of the old ones? Specifically, paragraph 1 stated,

“it shall be the duty of every waste regulation authority to cause its area to be inspected from time to time to detect whether any land is in such a condition, by reason of the relevant matters affecting the land, that it may cause pollution of the environment or harm to human health.”

Section 143 was similar, but it extended the need to survey and evaluate land to all potentially “contaminative uses of land”, and created a,

“duty of a local authority, as respects land in its area subject to contamination, to maintain, in accordance with the regulations, a register in the prescribed form and containing the prescribed particulars.”

The fear expressed by many property developers was that large areas of land would be ‘blighted’. The land would be worthless because of the perceived risk in the mind of the public, and because of the large costs of decontamination before any new developments could be built.

That view ignores the potential hazards of development – and arguably the greater cost to public health and the NHS. I’d carried out some research on behalf of Friends of the Earth in Oxfordshire, Kent and Lancashire, and there were a large number of sites which could cause problems to the environment and human health if badly redeveloped.

The risk was not from the land as it was – it was the impact on workers and the public if the substances locked-up in the ground were disturbed, dug up or moved.

In May 1991, following a public consultation, regulations were drafted to implement the new system – to be commenced in April 1992. These were abandoned shortly before this date due to pressure from property developers.

To address the developers’ concerns, following a second consultation period, the regulations were then redrafted – the new guidelines covering only 15% of the land area that the original regulations would have. Despite this, the Department of the Environment still received objections from major developers and landowners.

The Government caves in to pressure

On 24th March 1993 the Government abandoned plans to implement sections 143 and 61 of the Act, and announced that it would begin a review of the powers of regulatory bodies to control the pollution of land.

The exact nature of the pressure brought to bear on the Conservative government, causing it to cave in to the development lobby rather than protect public health, was not clear at the time. All we can do today is ask the minister responsible for that decision, Michael Howard – now a member of the House of Lords.

By 1995 the Government was planning to merge various environmental regulators to form a new ‘super-regulator’, the Environment Agency.

That was brought about by The Environment Act 1995. Section 57 of that Act repealed section 143 of the 1990 Act, inserting in its place a new ‘Part IIA’ of the Environmental Protection Act which instituted a new legal framework for dealing with contaminated land.

Section 120 and Schedule 22 of the 1995 Act repealed section 61 of the 1990 Act, taking away the obligation to monitor former landfill sites – meaning that they would be dealt with just like any other type of ‘potentially contaminated land’ even though, arguably, landfill sites are a more hazardous land use.

How can we summarise this new process? That’s best summed up in DEFRA’s 2008 legal definition of land contamination, drawn up by the then ‘New Labour’ government (my emphasis):

“Part 2A of the Environmental Protection Act 1990 came into force in England in 2000. The Government sees a central aim of the Part 2A regime as being to encourage voluntary remediation of land affected by contamination.”

What does ‘voluntary remediation’ mean in practice?

Around 1997/8 I investigated the redevelopment of the former Royal Small Arms Factory in Enfield Lock. Redevelopment was causing nausea and skin rashes amongst nearby residents.

What that ‘remediation’ meant for the developer of the new Enfield Island Village was this: at the least contaminated end of the island, where the ‘expensive’ houses were to be built, a metre or two of soil – the disturbance of which was causing the problems experienced by the neighbours – was dug up and replaced with fresh material before the houses were erected.

At the other, most contaminated end of the island, very little soil was removed. Instead a metre of clay was rolled down on the ground surface before the ‘low cost’ social housing was erected.

This is the problem with the framework for contaminated land instituted in 1995. It proceeds on a ‘don’t ask, don’t tell’ basis. If the local council doesn’t press the issue, the developer need only undertake works which render the site fit for its intended purpose.

Worse still, if a local authority decides that a site presents an imminent risk to the public, it might have to bear the cost of remedial action and try to bill the landowner for the work. Consequently it isn’t in the interests of local authorities to look, just in case they find something – Surrey’s immediate denial in this case being an exemplar of the principle.

If Section 61 had not been repealed in 1995, Surrey County Council would have had to investigate every former landfill in the area and assess its risk to the public (around the periphery of London, that’s quite a lot of sites).

If Section 143 had not been repealed, Spelthorne Borough Council’s Environmental Health Department would have had to keep a detailed register of potentially contaminated sites, and that register would have been available to anyone to view.

There is a repeating pattern of administrative action at work here

Just as in the early 1990s, today the Government and regulators are coming under pressure to water down environmental regulations, and ‘cut the green tape’, to allow business to develop more easily.

For example, on the back of a more right-wing economic bandwagon, instituting policies such as ‘fracking’ for shale gas, David Cameron has instructed his aides to get rid of the ‘green crap’ from policy.

That’s also why this case touched a raw nerve with me. I’ve come across some nasty cases in the past – such as the Rocket Pool estate in Bradley, Wolverhampton, where people living on the edge of a former landfill were becoming seriously ill (a few years later, a number of local people who I worked with on that case had died).

What really annoys me is that, 20 years ago, the consequences of the decisions made then were entirely foreseeable – and those decisions were made purely for the sake of money over the value of people’s health.

Today, that same blinkered agenda is still driving decision-making. That’s what hit me as I talked to Kye and Nicole in Guildford. My past was catching up with me and it had so much to say about the present.

We can’t be certain that, if sections 61 and 143 of the Environmental Protection Act 1990 had not been withdrawn and the legislation had been enacted as originally anticipated, Zane and his family would not have succumbed to the tragedy which befell them in February.

What we can say, especially given the requirements of section 61 on Surrey County Council, is that it would have been less likely to happen if these sites had been properly investigated 20 years ago.

And today, though their individual case is a sad reminder of Britain’s legacy of past mistakes, it should serve as a red flag over decisions taken today to ‘cut green tape’ – which, with what we know from our recent past, could plague present and future generations.


Paul Mobbs is an independent environmental consultant, investigator, author and lecturer.

A fully referenced version of this article is posted on the Free Range Activism website.







Fukushima and the institutional invisibility of nuclear disaster





Speaking at a press conference soon after the accident began, the UK government’s former chief science advisor, Sir David King, reassured journalists that the natural disaster that precipitated the failure had been “an extremely unlikely event”.

In doing so, he exemplified the many early accounts of Fukushima that emphasised the improbable nature of the earthquake and tsunami that precipitated it.

A range of professional bodies made analogous claims around this time, with journalists following their lead. This lamentation, by a consultant writing in the New American, is illustrative of the general tone:

” … the Fukushima ‘disaster’ will become the rallying cry against nuclear power. Few will remember that the plant stayed generally intact despite being hit by an earthquake with more than six times the energy the plant was designed to withstand, plus a tsunami estimated at 49 feet that swept away backup generators 33 feet above sea level.”

The explicit or implicit argument in all such accounts is that Fukushima’s proximate causes are so rare as to be almost irrelevant to nuclear plants in the future. Nuclear power is safe, they suggest, except against the specific kind of natural disaster that struck Japan, which is both a specifically Japanese problem, and one that is unlikely to re-occur, anywhere, in any realistic timeframe.

An appealing but tenuous logic

The logic of this is tenuous on various levels. The ‘improbability’ of the natural disaster is disputable, for one, as there were good reasons to believe that neither the earthquake nor the tsunami should have been surprising. The area was well known to be seismically active after all, and the quake, when it came, was only the fourth largest of the last century.

The Japanese nuclear industry had even confronted its seismic under-preparedness four years earlier, on 16 July 2007, when an earthquake of unanticipated magnitude damaged the Kashiwazaki-Kariwa nuclear plant.

This had led several analysts to highlight Fukushima’s vulnerability to earthquakes, but officials had said much the same then as they now said in relation to Fukushima. The tsunami was not without precedent either.

Geologists had long known that a similar event had occurred in the same area in July 869. This was a long time ago, certainly, but the data indicated a thousand-year return cycle.

Several reports, meanwhile, have suggested that the earthquake alone might have precipitated the meltdown, even without the tsunami – a view supported by a range of evidence, from worker testimony, to radiation alarms that sounded before the tsunami. Haruki Madarame, the head of Japan’s Nuclear Safety Commission, has criticised Fukushima’s operator, TEPCO, for denying that it could have anticipated the flood.

The claim that Japan is ‘uniquely vulnerable’ to such hazards is similarly disputable. In July 2011, for instance, the Wall Street Journal reported on private NRC emails showing that the industry and its regulators had evidence that many US reactors were at risk from earthquakes that had not been anticipated in their design.

It noted that the regulator had taken very little or no action to accommodate this new understanding. As if to illustrate their concern, on 23 August 2011, less than six months after Fukushima, North Anna nuclear plant in Mineral, Virginia, was rocked by an earthquake that exceeded its design-basis predictions.

Every accident is ‘unique’ – just like the next one

There is, moreover, a larger and more fundamental reason to doubt the ‘unique events or vulnerabilities’ narrative, which lies in recognising its implicit assertion that nuclear plants are safe against everything except the events that struck Japan.

It is important to understand that those who assert that nuclear power is safe because the 2011 earthquake and tsunami will not re-occur are, essentially, saying that although the industry failed to anticipate those events, it has anticipated all the others.

Yet even a moment’s reflection reveals that this is highly unlikely. It supposes that experts can be sure they have comprehensively predicted all the challenges that a nuclear plant will face in its lifetime (or, in engineering parlance: that the ‘design basis’ of every nuclear plant is correct) – even though a significant number of technological disasters, including Fukushima, have resulted, at least in part, from conditions that engineers failed to even consider.

As Sagan points out: “things that have never happened before, happen all the time”. The terrorist attacks of 9/11 are perhaps the most iconic illustration of this dilemma but there are many others.

Perrow (2007) painstakingly explores a landscape of potential disaster scenarios that authorities do not formally recognise, but it is highly unlikely that he has considered them all.

More are hypothesised all the time. For instance, researchers have recently speculated about the effects of massive solar storms, which, in pre-nuclear times, caused electrical systems across North America and Europe to fail for weeks at a time.

Human failings that are unrepresentative and/or correctable

A second rationale that accounts of Fukushima invoke to establish that accidents will not re-occur focuses on the people who operated or regulated the plant, and the institutional culture in which they worked. Observers who opt to view the accident through this lens invariably construe it as the result of human failings – either error, malfeasance or both.

The majority of such narratives relate the failings they identify directly to Fukushima’s specific regulatory or operational context, thereby portraying it as a ‘Japanese’ rather than a ‘nuclear’ accident.

Many, for instance, stress distinctions between US and Japanese regulators; often pointing out that the Japanese nuclear regulator (NISA) was subordinate to the Ministry of Trade and Industry, and arguing that this created a conflict of interest between NISA’s responsibilities for safety and the Ministry’s responsibility to promote nuclear energy.

They point, for instance, to the fact that NISA had recently been criticised by the International Atomic Energy Agency (IAEA) for a lack of independence, in a report occasioned by earthquake damage at another plant. Or to evidence that NISA declined to implement new IAEA standards out of fear that they would undermine public trust in the nuclear industry.

Other accounts point to TEPCO, the operator of the plant, and find it to be distinctively “negligent”. A common assertion in this vein, for instance, is that it concealed a series of regulatory breaches over the years, including data about cracks in critical circulation pipes that were implicated in the catastrophe.

There are two subtexts to these accounts. Firstly, that such an accident will not happen here (wherever ‘here’ may be) because ‘our’ regulators and operators ‘follow the rules’. And secondly, that these failings can be amended so that similar accidents will not re-occur, even in Japan.

Where accounts of the human failings around Fukushima do portray those failings as being characteristic of the industry beyond Japan, the majority still construe those failings as eradicable.

In March 2012, for instance, the Carnegie Endowment for International Peace issued a report that highlighted a series of organisational failings associated with Fukushima, not all of which they considered to be meaningfully Japanese.

Nevertheless, the report – entitled ‘Why Fukushima was preventable’ – argued that such failings could be resolved. “In the final analysis”, it concluded, “the Fukushima accident does not reveal a previously unknown fatal flaw associated with nuclear power.”

The same message echoes in the many post-Fukushima actions and pronouncements of nuclear authorities around the world promising managerial reviews and reforms, such as the IAEA’s hastily announced ‘five-point plan’ to strengthen reactor oversight.

Myths of exceptionality

As with the previous narratives about exogenous hazards, however, the logic of these ‘human failure’ arguments is also tenuous. Despite the editorial consternation that revelations about Japanese malfeasance and mistakes have inspired, for instance, there are good reasons to believe that neither were exceptional.

It would be difficult to deny that Japan had a first-class reputation for managing complex engineering infrastructures, for instance. As the title of one op-ed in the Washington Post puts it: “If the competent and technologically brilliant Japanese can’t build a completely safe reactor, who can?”

Reports of Japanese management failings must be considered in relation to the fact that reports of regulatory shortcomings, operator error, and corporate malfeasance abound in every state with nuclear power and a free press.

There also exists a long tradition of accident investigations finding variations in national safety practices that are later rejected on further scrutiny.

When Western experts blamed Chernobyl on the practices of Soviet nuclear industry, for example, they unconsciously echoed Soviet narratives highlighting the inferiority of Western safety cultures to argue that an accident like Three Mile Island could never happen in the USSR.

Arguments suggesting that ‘human’ problems are potentially solvable are similarly difficult to sustain, for there are compelling reasons to believe that operational errors are an inherent property of all complex socio-technical systems.

Close accounts of even routine technological work, for instance, routinely find it to be necessarily and unavoidably ‘messier’ in practice than it appears on paper.

Thus both human error and non-compliance are ambiguous concepts. As Wynne (1988: 154) observes: “… the illegitimate extension of technological rules and practices into the unsafe or irresponsible is never clearly definable, though there is ex-post pressure to do so.” The culturally satisfying nature of ‘malfeasance explanations’ should, by itself, be cause for circumspection.

These studies undermine the notion of ‘perfect rule compliance’ by showing that even the most expansive stipulations sometimes require interpretation and do not relieve workers of having to make decisions in uncertain conditions.

In this context we should further recognise that accounts that show Fukushima, specifically, was preventable are not evidence that nuclear accidents, in general, are preventable.

To argue from analogy: it is true to say that any specific crime might have been avoided (otherwise it wouldn’t be a crime), but we would never deduce from this that crime, the phenomenon, is eradicable. Human failure will always be present in the nuclear sphere at some level, as it is in all complex socio-technical systems.

And, relative to the reliability demanded of nuclear plants, it is safe to assume that this level will always be too high or, at least, that our certainty regarding it will be too low. While human failures and malfeasance are undoubtedly worth exploring, understanding and combating, therefore, we should avoid the conclusion that they can be ‘solved’.

Plant design is unrepresentative and/or correctable

Parallel to narratives about Fukushima’s circumstances and operation, outlined above, are narratives that emphasise the plant itself.

These limit the relevance of the accident to the wider nuclear industry by arguing that the design of its reactor (a GE Mark-1) was unrepresentative of most other reactors, while simultaneously promising that any reactors that were similar enough to be dangerous could be rendered safe by ‘correcting’ their design.

Accounts in this vein frequently highlight the plant’s age, pointing out that reactor designs have changed over time, presumably becoming safer. A UK civil servant exemplified this narrative, and the strategic decision to foreground it, in an internal email (later printed in the Guardian [2011]), in which he asserted that

“We [The Department of Business, Innovation and Skills] need to … show that events in Japan, whilst looking dramatic, are all part of the safety processes of this 1960’s reactor.”

Stressing the age of the reactor in this way became a mainstay of Fukushima discourse in the disaster’s immediate aftermath. Guardian columnist George Monbiot (2011b), for instance, described Fukushima as “a crappy old plant with inadequate safety features”.

He concluded that its failure should not speak to the integrity of later designs, like that of the neighboring plant, Fukushima ‘Daini’, which did not fail in the tsunami. “Using a plant built 40 years ago to argue against 21st-century power stations”, he wrote, “is like using the Hindenburg disaster to contend that modern air travel is unsafe.”

Other accounts highlighted the reactor’s design but focused on more generalisable failings, such as the “insufficient defense-in-depth provisions for tsunami hazards” (IAEA 2011a: 13), which could not be construed as indigenous only to the Mark-1 reactors or their generation.

The implication – we can and will fix all these problems

These failings could be corrected, however, or such was the implication. The American Nuclear Society set the tone, soon after the accident, when it reassured the world that: “the nuclear power industry will learn from this event, and redesign our facilities as needed to make them safer in the future.”

Almost every official body with responsibility for nuclear power followed in their wake. The IAEA, for instance, orchestrated a series of rolling investigations, which eventually culminated in the announcement of its ‘Action Plan on Nuclear Safety’ and a succession of subsequent meetings where representatives of different technical groups could pool their analyses and make technical recommendations.

The groups invariably conclude that “many lessons remain to be learned” and recommend further study and future meetings. Again, however, there is ample cause for scepticism.

Firstly, there are many reasons to doubt that Fukushima’s specific design or generation made it exceptionally vulnerable. As noted above, for instance, many of the specific design failings identified after the disaster – such as the inadequate water protection around reserve power supplies – were broadly applicable across reactor designs.

And even if the reactor design or its generation were exceptional in some ways, that exceptionalism is decidedly limited. There are currently 32 Mark-1 reactors in operation around the world, and many others of a similar age and generation, especially in the US, where every reactor currently in operation was commissioned before the Three Mile Island accident in 1979.

Secondly, there is little reason to believe that most existing plants could be retrofitted to meet all Fukushima’s lessons. Significantly raising the seismic resilience of a nuclear plant, for instance, implies such extensive design changes that it might be more practical to decommission the entire structure and rebuild from scratch.

This perhaps explains why progress on the technical recommendations has been halting. It might be true that different, or more modern, reactors are safer, therefore, but these are not the reactors we have.

In March 2012, the NRC did announce some new standards pertaining to power outages and fuel pools – issuing three ‘immediately effective’ orders requiring operators to implement some of the more urgent recommendations. The required modifications were relatively modest, however, and ‘immediately’ in this instance meant ‘by December 31st 2016’.

Meanwhile, the approvals for four new reactors the NRC granted around this time contained no binding commitment to implement the wider lessons it derived from Fukushima. In each case, the increasingly marginalised NRC chairman, Gregory Jaczko, cast a lone dissenting vote. He was also the only committee member to object to the 2016 timeline.

Complex systems’ ability to keep on surprising

Finally, and most fundamentally, there are many a priori reasons to doubt that any reactor design could be as safe as risk analyses suggest. Observers of complex systems have outlined strong arguments for why critical technologies are inevitably prone to some degree of failure, whatever their design.

The most prominent such argument is Perrow’s Normal Accident Theory (NAT), with its simple but profound probabilistic insight that accidents caused by very improbable confluences of events (that no risk calculation could ever anticipate) are ‘normal’ in systems where there are many opportunities for them to occur.

From this perspective, the ‘we-found-the-flaw-and-fixed-it’ argument is implausible because it offers no way of knowing how many ‘fateful coincidences’ the future might hold.

‘Lesson 1’ of the IAEA’s preliminary report on Fukushima is that the ” … design of nuclear plants should include sufficient protection against infrequent and complex combinations of external events.”

NAT explains why an irreducible number of these ‘complex combinations’ must be forever beyond the reach of formal analysis and managerial control.

A different way of demonstrating much the same conclusion is to point to the fundamental epistemological ambiguity of technological knowledge, and to how the significance of this ambiguity is magnified in complex, safety-critical systems due to the very high levels of certainty these systems require.

Judgements become more significant in this context because they have to be absolutely correct. There is no room for error bars in such calculations. It makes little sense to say that we are 99% certain a reactor will not explode, but only 50% sure that this number is correct.

Perfect safety can never be guaranteed

Viewed from this perspective, it becomes apparent that complex systems are likely to be prone to failures arising from erroneous beliefs that are impossible to predict in advance, which I have elsewhere called ‘Epistemic Accidents’.

This is essentially to say that the ‘we-found-the-flaw-and-fixed-it’ argument cannot guarantee perfect safety because it offers no way of knowing how many new ‘lessons’ the future might hold.

Just as it is impossible for engineers and regulators to know for certain that they have anticipated every external event a nuclear plant might face, so it is impossible for them to know that their understanding of the system itself is completely accurate.

Increased safety margins, redundancy, and defense-in-depth undoubtedly might improve reactor safety, but no amount of engineering wizardry can offer perfect safety, or even safety that is ‘knowably’ of the level that nuclear plants require. As Gusterson (2011) puts it: “ … the perfectly safe reactor is always just around the corner.”

Nuclear authorities sometimes concede this. After the IAEA’s 2012 recommendations to pool insights from the disaster, for instance, the meeting’s chairman, Richard Meserve, summarised: “In the nuclear business you can never say, ‘the task is done’.”

Instead they promise improvement. “The Three Mile Island and Chernobyl accidents brought about an overall strengthening of the safety system”, Meserve continued. “It is already apparent that the Fukushima accident will have a similar effect.”

The real question, however, is: when will safety be strong enough? It wasn’t after Three Mile Island or Chernobyl, so why should Fukushima be any different?

The reliability myth

This is all to say, in essence, that it is misleading to assert that an accident of Fukushima’s scale will not re-occur. For there are credible reasons to believe that the reliability required of reactors is not calculable, and there are credible reasons to believe that the actual reliability of reactors is much lower than is officially calculated.

These limitations are clearly evinced by the actual historical failure rate of nuclear reactors. Even the most rudimentary calculations show that civil nuclear accidents have occurred far more frequently than official reliability assessments have predicted.

The exact numbers vary, depending on how one classifies ‘an accident’ (whether Fukushima counts as one meltdown or three, for example), but Ramana (2011) puts the historical rate of serious meltdowns at 1 in every 3,000 reactor years, while Taebi et al. (2012: 203fn) put it at somewhere between 1 in every 1,300 to 3,600 reactor years.

Either way, the implied reliability is orders of magnitude lower than assessments claim.

In a recent declaration to a UK regulator, for instance, Areva, a prominent French nuclear manufacturer, invoked probabilistic calculations to assert that the likelihood of a “core damage incident” in its new ‘EPR’ reactor was of the order of one incident, per reactor, every 1.6 million years (Ramana 2011).

Two: the accident was tolerable

The second basic narrative through which accounts of Fukushima have kept the accident from undermining the wider nuclear industry rests on the claim that its effects were tolerable – that even though the costs of nuclear accidents might look high, when amortised over time they are acceptable relative to the alternatives.

The ‘accidents are tolerable’ argument is invariably framed in relation to the health effects of nuclear accidents. “As far as we know, not one person has died from radiation”, Sir David King told a press conference in relation to Fukushima, neatly expressing a sentiment that would be echoed in editorials around the world in the aftermath of the accident.

“Atomic energy has just been subjected to one of the harshest of possible tests, and the impact on people and the planet has been small”, concluded Monbiot in one characteristic column.

“History suggests that nuclear power rarely kills and causes little illness”, the Washington Post reassured its readers (Brown 2011). See also eg McCulloch (2011); Harvey (2011). “Fukushima’s Refugees Are Victims Of Irrational Fear, Not Radiation”, declared the title of an article in Forbes (Conca 2012).

In its more sophisticated forms, this argument draws on comparisons with other energy alternatives. A 2004 study by the American Lung Association argues that coal-fired power plants shorten the lives of 24,000 people every year.

Chernobyl, widely considered to be the most poisonous nuclear disaster to date, is routinely thought to be responsible for around 4,000 past or future deaths.

Even if the effects of Fukushima are comparable (which the majority of experts insist they are not), then by these statistics the human costs of nuclear energy seem almost negligible, even when accounting for its periodic failures.

Such numbers are highly contestable, however. Partly because there are many more coal than nuclear plants (a fairer comparison might consider deaths per kilowatt-hour). But mostly because calculations of the health effects of nuclear accidents are fundamentally ambiguous.

Chronic radiological harm can manifest in a wide range of maladies, none of which are clearly distinguishable as being radiologically induced – they have to be distinguished statistically – and all of which have a long latency, sometimes of decades or even generations.

How many died? It all depends …

So it is that mortality estimates about nuclear accidents inevitably depend on an array of complex assumptions and judgments that allow for radically divergent – but equally ‘scientific’ – interpretations of the same data. Some claims are more compelling than others, of course, but ‘truth’ in this realm does not ‘shine by its own lights’ as we invariably suppose it ought.

Take, for example, the various studies of Chernobyl’s mortality, from which estimates of Fukushima’s are derived. The models underlying these studies are themselves derived from data from Hiroshima and Nagasaki survivors, the accuracy and relevance of which have been widely criticised, and they require the modeller to make a range of choices with no obviously correct answer.

Modellers must select between competing theories of how radiation affects the human body, for instance; between widely varying judgments about the amount of radioactive material the accident released; and much more. Such choices are closely interlinked and mutually dependent.

Estimates of the composition and quantities of the isotopes released in the accident, for example, will affect models of their distribution, which, in conjunction with theories of how radiation affects the human body, will affect conclusions about the specific populations at risk.

This, in turn, will affect whether a broad spike in mortality should be interpreted as evidence of radiological harm or as evidence that many seemingly radiation-related deaths are actually symptomatic of something else. And so on ad infinitum: a dynamic tapestry of theory and justification, where subtle judgements reverberate throughout the system.

The net result is that quiet judgements concerning the underlying assumptions of an assessment – usually made in the very earliest stages of a study and all but invisible to most observers – have dramatic effects on its findings. The effects of this are visible in the widely divergent assertions made about Chernobyl’s death toll.

The ‘orthodox’ mortality figure cited above – no more than 4,000 deaths – comes from the 2005 IAEA-led ‘Chernobyl Forum’ report. Or rather, from the heavily bowdlerised press release from the IAEA that accompanied its executive summary. The actual health section of the report alludes to much higher numbers.

Yet the ‘4,000 deaths’ number is endorsed and cited by most international nuclear authorities, although it stands in stark contrast to the findings of similar investigations.

Two reports published the following year, for example, offer much higher figures: one estimating 30,000 to 60,000 cancer deaths (Fairlie & Sumner 2006); the other 200,000 or more (Greenpeace 2006: 10).

In 2009, meanwhile, the New York Academy of Sciences published an extremely substantive Russian report by Yablokov that raised the toll even further, concluding that in the years up to 2004, Chernobyl caused around 985,000 premature cancer deaths worldwide.

Between these two figures – 4,000 and 985,000 – lie a host of other expert estimations of Chernobyl’s mortality, many of them seemingly rigorous and authoritative. The Greenpeace report tabulates some of the varying estimates and correlates them to differing methodologies.

Science? Or propaganda?

Different sides in this contest of numbers routinely assume their rivals are actively attempting to mislead – a wide range of critics argue that most official accounts are authored by industry apologists who ‘launder’ nuclear catastrophes by dicing evidence of their human fallout into an anodyne melee of claims and counter-claims.

When John Gofman, a former University of California Berkeley Professor of Medical Physics, wrote that the Department of Energy was “conducting a Joseph Goebbels propaganda war” by advocating a conservative model of radiation damage, for instance, his charge was more remarkable for its candor than its substance.

And there is certainly some evidence for this. There can be little doubt that in the past the US government has intentionally clouded the science of radiation hazards to assuage public concerns. The 1995 US Advisory Committee on Human Radiation Experiments, for instance, concluded that Cold War radiation research was heavily sanitised for political ends.

A former AEC (NRC) commissioner testified in the early 1990s that: “One result of the regulators’ professional identification with the owners and operators of the plants in the battles over nuclear energy was a tendency to try to control information to disadvantage the anti-nuclear side.” It is perhaps more useful, however, to say that each side is discriminating about the realities to which it adheres.

In this realm there are no entirely objective facts, and with so many judgements it is easy to imagine how even small, almost invisible biases, might shape the findings of seemingly objective hazard calculations.

Indeed, many of the judgements that separate divergent nuclear hazard calculations are inherently political, with the result that there can be no such thing as an entirely neutral account of nuclear harm.

Researchers must decide whether a ‘stillbirth’ counts as a ‘fatality’, for instance. They must decide whether an assessment should emphasise deaths exclusively, or if it should encompass all the injuries, illnesses, deformities and disabilities that have been linked to radiation. They must decide whether a life ‘shortened’ constitutes a life ‘lost’.

There are no correct answers to such questions. More data will not resolve them. Researchers simply have to make choices. The net effect is that the hazards of any nuclear disaster can only be glimpsed obliquely through a distorted lens.

So much ambiguity and judgement is buried in even the most rigorous calculations of Fukushima’s health impacts that no study can be definitive. All that remains are impressions and, for the critical observer, a vertiginous sense of possibility.

Estimating the costs – how many $100s of billions?

The only thing to be said for sure is that declarative assurances of Fukushima’s low death toll are misleading in their surety. Given the intense fact-figure crossfire around radiological mortality, it is unhelpful to view Fukushima purely through the lens of health.

In fact, the emphasis on mortality might itself be considered a way of minimising Fukushima, considering that there are other – far less ambiguous – lenses through which to view the disaster’s consequences.

Fukushima’s health effects are contested enough that they can be interpreted in ways that make the accident look tolerable, but it is much more challenging to make a case that it was tolerable in other terms.

Take, for example, the disaster’s economic impact. The intense focus on the health and safety effects of Fukushima has all but eclipsed its financial consequences, yet the latter are arguably more significant and are certainly less ambiguous.

Nuclear accidents incur a vast spectrum of costs. There are direct costs relating to the need to seal off the reactor; study, monitor and mitigate its environmental fallout; resettle, compensate and treat the people in danger; and so forth.

Over a quarter of a century after Chernobyl, the accident still haunts Western Europe, where government scientists in several countries continue to monitor certain meats, and keep some from entering the food chain.

Then there is an array of indirect costs that arise from externalities, such as the loss of assets like farmland and industrial facilities; the loss of energy from the plant and those around it; the impact of the accident on tourism; and so forth. The exact economic impact of a nuclear accident is almost as difficult to estimate as its mortality, and projections differ for the same fundamental reasons.

They do not differ to the same degree, however, and in contrast to Fukushima’s mortality there is little contention that its financial costs will be enormous. By November of 2013, the Japanese government had already allocated over 8 trillion yen (roughly $80 billion or £47 billion) to Fukushima’s clean-up alone – a figure that excluded the cost of decommissioning the six reactors, a process expected to take decades and cost tens of billions of dollars.

The evacuation zone around Fukushima – an area of around 966 sq km, much of which will be uninhabitable for generations – covers 3% of Japan, a densely populated and mountainous country where only 20% of the land is habitable in the first place.

Independent experts have estimated the clean-up cost to be in the region of $500 billion (Gunderson & Caldicott 2012). These estimates, moreover, exclude most of the indirect costs outlined above, such as the disaster’s costs to the food and agriculture industries, which the Japanese Ministry of Agriculture, Forestry and Fisheries (MAFF) has estimated to be 2,384.1 billion yen (roughly $24 billion).

Of these competing estimates, the higher numbers seem more plausible. The notoriously conservative report of the Chernobyl Forum estimated that the cost of that accident had already mounted to “hundreds of billions of dollars” after just 20 years, and it seems unlikely that Fukushima’s three meltdowns could cost less.

Even if we assume that Chernobyl was more hazardous than Fukushima (a common conviction that is incrementally becoming more tenuous), it remains true that the same report projected the 30-year cost to Belarus alone to be US$235 billion, and that Belarus’s lost opportunities, compensation payments and clean-up expenditures are unlikely to rival Japan’s.

This is especially so considering, for instance, Japan’s much higher cost of living, its indisputable loss of six reactors, its decision to at least shutter the remainder of its nuclear plants, and many other factors. The Chernobyl reactor did not even belong to Belarus – it is in what is now Ukraine.

The nuclear disaster liability swindle

To put these figures into perspective, consider that nuclear utilities in the US are required to create an industry-wide insurance pool of only about $12 billion for accident relief, and are protected against further losses by the Price-Anderson Act, by which the US Congress has socialised the costs of any nuclear disaster.

The nuclear industry needs such extraordinary government protection in the US, as it does in all countries, because – for all the authoritative, blue-ribbon risk assessments demonstrating its safety – the reactor business, almost uniquely, is unable to secure private insurance.

The industry’s unique dependence on limited liabilities reflects the fact that no economic justification for atomic power could concede the inevitability of major accidents like Fukushima and remain viable or competitive.

As Mark Cooper, the author of a 2012 report on the economics of nuclear disaster, has put it:

“If the owners and operators of nuclear reactors had to face the full liability of a Fukushima-style nuclear accident or go head-to-head with alternatives in a truly competitive marketplace, unfettered by subsidies, no one would have built a nuclear reactor in the past, no one would build one today, and anyone who owns a reactor would exit the nuclear business as quickly as possible.”


John Downer works at the Global Insecurities Centre, School of Sociology, Politics and International Studies, Bristol University.

This article is an extract from ‘In the shadow of Tomioka – on the institutional invisibility of nuclear disaster’, published by the Centre for Analysis of Risk and Regulation at the London School of Economics and Political Science.

This version has been edited to include important footnote material in the main text, and exclude most references. Scholars, scientists, researchers, etc please refer to the original publication.







FLUMP – Keystone Species, Climate Change and Coffee, Basic Science and More

Citizen scientists invest time and money to document the Earth's biodiversity.

It’s Friday and that means that it’s time for our Friday link dump, where we highlight some recent papers (and other stuff) that we found interesting but didn’t have the time to write an entire post about. If you think there’s something we missed, or have something to say, please share in the comments section!

Science just released its annual list of the top 10 scientific achievements of the year.

A new study led by Anthony R. Rafferty shows that online supplementary material may act as a “citation black hole”, as these citations are invisible to search engines. The authors estimated that about 6% of all citations are only included in online supplementary material and are therefore not considered in citation counts.

Andrew E. Noble and William F. Fagan propose a new framework to combine effects of selection, drift, speciation and dispersal on community dynamics, in their new paper “A niche remedy for the dynamical problems of neutral theory“.

Marco A. R. Mello and colleagues explored the ecological features of keystone species in seed dispersal networks across the Neotropics, in their paper “Keystone species in seed dispersal networks are mainly determined by dietary specialization”. They evaluated the role of different species traits, such as dietary specialization, body size and geographic range, and found that dietary specialization seems to be the main feature that makes a species a keystone.

Finally, here is a plea for basic science: “Fundamental ecology is fundamental”.

– Vinicius Bastazini

Millions of citizen scientists contribute time and money to biodiversity research, but are their data reaching a scientific audience? You can find out in the most recent issue of Biological Conservation. (And congrats to co-author and fellow blogger Hillary!)

– Kylla Benes

Better kick the habit now: in this month’s Climatic Change issue, researchers claim that climate change will adversely affect the global supply of coffee beans. The authors of “A bitter cup: climate change profile of global production of Arabica and Robusta coffee” used modeling to determine that the number of sites suitable for the growth of coffee beans could be cut in half by 2050.

Check out these wonderful close-ups from this year’s BioScapes competition!

– Nate Johnson

December 19, 2014

Welcome new SE: Francois Massol

We are very happy to welcome Dr. Francois Massol to the Oikos Editorial Board. Get to know him here:

What’s your main research focus at the moment?

These days, I try and focus my efforts on the evolution of dispersal and the evolutionary ecology of interaction networks. What I want to understand is how some traits and some particular positions in ecological networks come to be associated with a given propensity to disperse. This issue is important from a fundamental viewpoint – it relates to the knowledge of so-called “dispersal syndromes” – but it is also a hot issue from a more applied perspective because it could help understand the evolutionary emergence of would-be invasive, keystone or easily threatened species. Given my personal bias towards equations and theory, I tend to first confront these issues using models and then collaborate with more empirically minded colleagues to test theoretical predictions with field or experimental data.

However, when I write “focus my efforts”, I have to acknowledge that I spend quite a significant fraction of my time away from my usual favourite subjects, working on interdisciplinary projects (mostly with social scientists and physicists) – and I am rather thankful for these little eccentricities, for they help me broaden my perspective of theoretical approaches to modelling the dynamics of biodiversity.

Can you describe your research career? Where, what, when?

Coming from a typically French undergrad background (maths and physics), I switched to ecology and evolutionary biology during my Master and then my PhD in Montpellier, under the supervision of Philippe Jarne at the CEFE. My work at that time was focused on community ecology models. After I graduated, my first position was at the Irstea Hydrobiology lab in Aix-en-Provence, to work on more functional aspects of aquatic communities. While I was employed at Irstea, I obtained a Marie Curie fellowship that allowed me to spend a year (2009 – 2010) in Mathew Leibold’s lab in Austin, Texas, where I tried to run a mesocosm experiment dealing with the effect of dispersal on the functioning of food webs (sadly, the experiment failed, but this is another story). In 2012, I was recruited at the CNRS in Montpellier (back to the CEFE), in the group of Pierre-Olivier Cheptou, to work on the evolution of mating systems and dispersal traits in plants. In 2013, I moved to a CNRS lab in Lille (GEPV) where I joined the group of Sylvain Billiard to work on the evolutionary ecology of mating systems. Moving so frequently is both a boon and a curse for obvious reasons, but as a connoisseur of the evolution of dispersal, I try to wear this as a badge of honour (and humour).


How come you became a scientist in ecology?

If I were to explain why I became a scientist based on personality and motivations alone, curiosity together with the possibility of working in a free-thinking environment surely had a role at some point. I would also add that my personal kind of stubbornness probably helped a lot in getting me there. However, I think it’s also quite enlightening to think of a career path in science as built half on motivations and half on contingencies. The original contingency that set me on track was the first scientific internship I did back in 2002 in Dima Sherbakov’s lab at the Limnological Institute in Irkutsk, Russia. The atmosphere in the lab, the way people were working, the passion that permeated the place – all of this probably triggered something in my mind and I have been fond of this ambience ever since. The second set of happy contingencies have been the genial encounters I made afterwards when I was looking for a PhD project, i.e. Daniel Gerdeaux and Philippe Jarne, and then during my PhD (Pierre-Olivier Cheptou, to name but one person). I am convinced that a large part of my day-to-day satisfaction at work is based on the variety and the general goodwill of the colleagues with whom I interact.

What do you do when you’re not working?

At the moment, I am quite busy taking care of the house we just bought. House chores, family and friends occupy a considerable share of my non-lab time… Generally, I tend to spend the rest of my spare time reading (Terry Pratchett, Neal Stephenson, John Le Carré, Jasper Fforde and Neil Gaiman are always on top of the list), hiking, traveling and playing badminton.

Personal webpage: https://sites.google.com/a/polytechnique.org/francoismassol/home

ResearchGate page: https://www.researchgate.net/profile/Francois_Massol


EU Trade Secrets Directive – a threat to health, environment, human rights





A new draft EU directive, currently being considered by the European Parliament, aims to protect companies’ ‘trade secrets’.

But it uses definitions so broad and exceptions so weak that it could seriously endanger the work of journalists, whistle-blowers, unionists and researchers, as well as severely limiting corporate accountability and the transparency of corporate data used for regulation.

We publish below a joint statement, signed together with many other groups, that calls for the directive to be radically amended.

An end to transparency on health, food and environment

We strongly oppose the hasty push by the European Commission and Council for a new European Union (EU) Directive on Trade Secrets because it contains:

  • An unreasonably broad definition of ‘trade secrets’ that enables almost anything within a company to be deemed as such;
  • Overly-broad protection for companies, which could sue anyone who “unlawfully acquires, uses or discloses” their so-called “trade secrets”; and
  • Inadequate safeguards that will not ensure that EU consumers,  journalists, whistleblowers, researchers and workers have reliable access to important data that is in the public interest.

Contrary to the Commission’s goals, this unbalanced piece of legislation would result in legal uncertainty.

Unless radically amended by the Council and European Parliament, the proposed directive could endanger freedom of expression and information, corporate accountability, information sharing – possibly even innovation – in the EU.

Specifically, we share great concern that under the draft directive companies in the health, environment and food safety fields could refuse compliance with transparency policies even when the public interest is at stake.

Health

Pharmaceutical companies argue that all aspects of clinical development should be considered a trade secret.

Access to biomedical research data by regulatory authorities, researchers, doctors and patients – particularly data on drug efficacy and adverse drug reactions – is critical, however, for protecting patient safety and conducting further research and independent analyses.

This information also prevents scarce public resources from being spent on therapies that are no better than existing treatments, do not work, or do more harm than good. Moreover, disclosure of pharmaceutical research is needed to avoid unethical repetition of clinical trials on people.

The proposed directive should not obstruct recent EU developments to increase sharing and transparency of this data.

Environment

Trade secret protection can be used to refuse the release of information on hazardous products within the chemical industry.

Trade secret protection may, for example, be invoked by companies to hide information on chemicals in plastics, clothing, cleaning products and other items that can cause severe damage to the environment and human health.

They could also use the directive to refuse to disclose information on the dumping of chemicals, including fracking fluids, or on the release of toxins into the air.

Food safety

Under EU law, all food products, genetically modified organisms and pesticides are regulated by the European Food Safety Authority (EFSA).

Toxicological studies that the EFSA relies on to assess the risks associated with these products are, however, performed by manufacturers themselves.

Yet one of the EFSA's objectives is to make its scientific opinions 'reproducible' by others – a key validation criterion in scientific methodology. Scientific scrutiny of the EFSA's assessments is only possible with complete access to these studies.

Companies argue, though, that this information contains confidential business information and strongly oppose its disclosure. The EFSA has recently launched a Transparency Initiative to improve its credibility, and is considering providing independent scientists with access to this data.

Unfortunately, this objective has been strongly criticised by the manufacturing industries (chemical, pesticide, seed, biotech, and additives), which argue that this toxicological data contain “confidential business information” that “should be protected from all disclosures and misuse at all times”.

These industries openly threatened the EFSA with legal action should the Authority decide to publish this data. The EFSA would probably have a solid legal defence against such action, because ensuring food safety serves as a strong justification. But this situation may change if the current directive on trade secrets covers such essential data.

It is essential that the risk assessment work of public bodies is properly monitored by the scientific community. All data that these public bodies use must therefore be exempt from the scope of the directive.

The right to freedom of expression and information could be seriously harmed

Under the proposed directive, whistleblowers can use undisclosed information to reveal misconduct or wrongdoing, but only if “the alleged acquisition, use or disclosure of the trade secret was necessary for such revelation and that the respondent acted in the public interest.”

Unfortunately, though, determining whether disclosure was necessary can often only be evaluated afterwards. In addition, it remains unclear whether many types of information (e.g., plans to terminate numerous employees) qualify as ‘misconduct’ or ‘wrongdoing’.

This creates legal uncertainty for journalists – particularly those who specialise in economic investigations – and for whistleblowers.

The mobility of EU workers could be undermined

The proposed directive poses a danger of lock-in effects for workers. It could create situations where an employee avoids jobs in the same field as his or her former employer, rather than risk being unable to use his or her own skills and competences, or being held liable for damages.

This inhibits one’s career development, as well as professional and geographical mobility in the labour market.

In addition, despite the Commission’s desire for a ‘magic bullet’ that will keep Europe in the innovation game, closed-door trade secret protection may make it more difficult for the EU to engage in promising open and collaborative forms of research.

In fact, there is a risk that the measures and remedies provided in this directive will undermine legitimate competition – even facilitate anti-competitive behaviour.

Supporters – a litany of corporate power

Unsurprisingly, the text is strongly supported by multinational companies. In fact, industry coalitions in the EU and the US are lobbying, through a unified Trade Secrets Coalition, for the adoption of trade secret protection.

In the EU, a so-called Trade Secrets & Innovation Coalition is pushing for this directive. This coalition is even registered in the EU Transparency register under this name. This coalition includes Alstom, DuPont de Nemours, General Electric, Intel, Michelin, Air Liquide, Nestlé and Safran, who work together with the pharmaceutical and the chemical industries.

In the US, two new bills are pending before Congress: the Trade Secrets Protection Act of 2014 (H.R. 5233) – and Senate Bill: Defend Trade Secrets Act of 2014 (S. 2267).

If passed, these texts would allow trade secret protection to be included in the Trans-Atlantic Trade and Investment Partnership (TTIP) – something that will be incredibly difficult to repeal in the future through democratic processes.

The US has made no secret of its explicit wish for strong language on trade secret protection in this agreement. Given that TTIP is expected to set a new global standard, its potential inclusion of trade secret protection is particularly worrisome.

We urge the Council and the European Parliament to radically amend the directive. This includes limiting the definition of what constitutes a trade secret and strengthening safeguards and exceptions to ensure that data in the public interest cannot be protected as trade secrets.

The right to freely use and disseminate information should be the rule, and trade secret protection the exception.

 


 

This statement was originally published by Corporate Europe Observatory. Please check the original joint statement for signatories, contact details, etc. In this version, footnotes have been incorporated into the main text, and additional subheads have been inserted.

 

 






Closing the gate on GMO and the criminal transatlantic trade agreement





A determined effort by all of us, who care about real food and real farming, will be needed to stop one of the most insidious attempts yet to end Europe’s widespread resistance to genetically modified organisms.

In particular, it seeks to open the way for GM seeds in European agriculture, leading to genetically modified crops being grown in areas that have, up until now, successfully resisted the GM corporate invasion.

The EU has so far licensed just one GM maize variety (MON 810) to be grown within its territories, and one potato variety (Amflora) for industrial starch production.

Up until now, thanks to major public pressure, the EU has maintained a largely restrictive trade practice concerning GM and other controversial food products, acting under the broad EU principle known as 'the precautionary principle'.

Goodbye to all that?

All that could be about to go out the window under current negotiations between the USA and the European Commission to ratify a new trade agreement known as TTIP, the Transatlantic Trade and Investment Partnership.

The objective of this 'partnership' is to facilitate far-reaching corporate control of the international marketplace and to prise open the mostly closed (but not locked) European door on GM crops and seeds.

While this corporate heist is being eased into place, a replica is being negotiated between Canada and the EU under the title of the Comprehensive Economic and Trade Agreement (CETA).

And as if that wasn't enough, a further dismantling of trade tariffs is underway via the Trade in Services Agreement (TiSA): a wide-ranging further liberalization of corporate trading conditions, in direct continuation of the WTO (World Trade Organisation) GATS agreement, with its highly onerous, corporate-biased 'Codex Alimentarius' sanitary and hygiene rulings. Indigenous seeds and medicinal herbs are particularly under attack via Codex.

We can thus recognize, from the outset, that a very dangerous interference with the already leaky checks and balances that control the import/export market is underway here.

The thinly disguised subtext reveals plans for a massive corporate take-over of the quasi-democratic trade agreements and food quality controls that currently operate between the US and EU. It is clear that the major corporate concerns are determined to overcome, or dilute, all resistance to their unfettered 'free trade' goals.

Corporations, not the people, hold governments to account

Where they are blocked, corporations are claiming the right to sue governments and institutions held to be “infringing the principle of international free trade”. Such litigation procedures are not new, but the idea of writing them into a major trading agreement has sparked major controversy.

Take Germany, for example, where a major Swedish nuclear power company is attempting to sue the German government for billions of euros, with the intention of gaining full compensation for the ban on nuclear power enacted earlier by the Merkel government.

To add a further sinister twist to this already draconian exercise in power politics, the court hearings on such actions are slated to take place in secret, in a court house in Washington DC. Such secret courts are already operational in the UK, where ‘sensitive cases’ can be heard out of sight of public scrutiny with no reports or summaries of the proceedings released into the public domain.

Here we witness the Orwellian control system fully up and running, with its attendant undisguised destruction of many decades of hard won civil liberties.

The unremitting and relentless nature of this neo-capitalist and corporate centralization of power is causing significant resistance to manifest itself. Earlier this month, 1.1 million people across Europe signed a card for Commission President Juncker calling on him to ditch TTIP.

As John Hilary, a member of Stop TTIP's Citizens' Committee, commented: “Politicians are always calling for citizens to get actively involved in European politics, and here are more than a million people who have done just that. On his 60th birthday, Juncker should blow out the candles on these massively unpopular and undemocratic trade deals that are opposed by people across Europe.”

But the truth is, we are all going to have to get involved to ensure a people-led victory.

GMOs – the corporate attack is already under way

As an organic farmer myself, I'm concentrating on the food and farming implications. But it is very important not to lose sight of the true intention behind all aspects of these nefarious trade agreements.

As a precursor to TTIP, a major shift in GMO legislation was already voted in by the EU's Environment Council on 12 June 2014 (the final vote is to be taken in the European Parliament in January 2015).

After many years of EU member state disagreement on GM issues – leading to negotiation stalemate – this controversial agreement devolved GMO decision making procedures from Brussels to EU member states.

In the process, however, it gives the green light to pro-GMO governments to allow the planting of GM crops in their countries, while anti-GM member states can put forward economic and environmental health arguments to ban them.

Under the first draft of this agreement, countries wishing to block GM plantings were called upon to seek permission to ban such crops from the very corporations that are proposing to introduce them! A proposal whose unprecedented arrogance echoes the corporate agenda of the TTIP and CETA trade proposals.

Fortunately, after intensive public lobbying, this clause was dropped on 11th November 2014. Nevertheless, what we have in front of our eyes is a strong GMO warning light.

‘Mutual recognition’ is a race to the bottom

The TTIP agreement would allow GM crops and seeds currently banned in Europe – as well as various medicated animal products such as US hormone enriched beef – to have a largely unrestricted flow into the EU.

In the process they would by-pass the 'precautionary principle' and the European Food Safety Authority's views (for what they are worth) on the efficacy of such products.

So it would, in effect, remove any differences in trade-related legislation between the EU and US – because in corporate-speak, such differences are held up as being 'trade distorting'.

TTIP could also be used to attack positive food-related initiatives in the US, such as 'local preference' legislation at the state level. It calls for 'mutual recognition' between trading blocs: trade-speak for lowering standards.

Consumer groups have already pointed out that mutual recognition of standards is not an acceptable approach since it will require at least one of the parties to accept food that is not of a currently acceptable standard.

To put it in simple terms: the pressure to lower standards in Europe to ‘resolve the inconsistencies’ will be strong, and far more likely to succeed than the other solution: to raise standards in the USA.

Phrases like 'harmonization' and 'regulatory cooperation' are a frequently occurring part of TTIP trade-speak. But in the end it's all going one way: downwards, to the lowest common denominator.

TTIP a ‘main priority for the year ahead’

According to Corporate Europe Observatory: “Under TTIP’s chapter on ‘regulatory cooperation’ any future measure that could lead us towards a more sustainable food system, could be deemed ‘a barrier to trade’ and thus refused before it sees the light of day.

“Big business groups like Business Europe and the US Chamber of Commerce have been pushing for this corporate lobby dream scenario before the US-EU negotiations ever began. What they want from regulatory cooperation is to essentially co-write legislation and to establish a permanent EU-US dialogue to work towards harmonizing standards long after TTIP has been signed.

“Despite earlier reservations, the Commission now seems to go along with this corporate dream. Leaked EU proposals from December 2013 outline a new system of regulatory cooperation between the EU and US that will enable decisions to be made without any public oversight or engagement.”

What this means is that new, highly controversial GM seed lines will have virtually no publicly scrutinized safety net to slow or halt their progress to the fields and dinner plates of Europe.

One of the most determined voices behind the realization of TTIP's ambitions is former Polish Prime Minister Donald Tusk. As the Guardian tells us: “Taking office this week as the new president of the European Council, chairing summits and mediating between national leaders, Donald Tusk, Poland's former prime minister, singled out TTIP as one of his main priorities for the year ahead.”

Tusk, as prime minister of Poland, had already displayed his bias towards big business by backing strategies to sell tranches of Poland's most productive farmland to the highest foreign bidders, while simultaneously cosying up to the EU Commission's big chiefs.

Tusk is complicit, if not a leading voice, in supporting the overt centralization of political power in Brussels and the steady dismantling of national sovereignty: the right for countries to decide and control their own futures.

The end of national sovereignty?

TTIP and CETA are perfect weapons for the long-planned destruction of national sovereignty. Trade negotiators, GM exponents, big farming unions, agrichemical businesses and food processing giants are all in on the game and have strong lobby groups backing TTIP.

Their view on what the word ‘cooperation’ means goes like this, according to Corporate Europe Observatory: “A system of regulatory cooperation would prevent ‘bad decisions’ – thereby avoiding having to take governments to court later.”

These ‘bad decisions’ to be avoided include any attempts by governments to rein-in the overt lust for power which is the hallmark of the corporate elite.

For example, the biotech and pesticide giants Syngenta and Bayer are taking the European Union to court over its partial ban on three insecticides from the neonicotinoid family, imposed because of their deadly impact on bees.

However let us be clear, the European Union is only acting this way because of intense public pressure to do so; left to its own devices there would be no discernible difference between it and the corporate elite who stalk the corridors of power at the European Commission and European Parliament.

The underlying goal of 'regulatory cooperation' between industry and the EU is to have a continuous, ongoing dialogue (known as a 'living agreement') that could ultimately render any final TTIP agreement largely meaningless.

Meaningless, because it could by-pass any failures of TTIP to gain concessions on food and environmental standards by focusing on altering ‘implementation rules’ – rather than taking the more arduous route of altering ‘the law’ itself.

Tinkering with ‘implementation rules’ simply offers another way for corporate friendly concessions to become enshrined in common trading rights.

Resisting the corporate takeover of the food chain

Reassurances from EU and US negotiators that “food standards will not be lowered” look highly suspect. Farmers should be alert to the fact that, because of TTIP, imports that do not meet local standards are highly likely to be allowed, thus undermining national trading disciplines.

This applies across the spectrum and includes currently non-compliant GMO crops. According to Corporate Europe Observatory, “Regulatory convergence will fundamentally change the way politics is done in the future, with industry sitting right at the table, if they get their way.”

If they get their way.

All groups and organizations that care about retaining a largely GMO-free Europe and the consumption of genuine, healthy food – in tandem with the ecological farming methods that produce it – had better jump to the task of stopping TTIP, and its related trading blocs, from destroying the last line of defence against a complete corporate take-over of the food chain.

Join the resistance today!

 


 

Campaign: Stop TTIP!

Julian Rose is an early pioneer of UK organic farming, international activist and author. He is currently the President of The International Coalition to Protect the Polish Countryside. His most recent book ‘In Defence of Life – A Radical Reworking of Green wisdom’ is published by Earth Books. Julian’s website is www.julianrose.info.

 

 






2014 badger cull failed – but the cull goes on





The Government today has released the results of the 2014 badger culls in Gloucestershire and Somerset.

In West Gloucestershire the cull was an outright failure. To be considered 'effective', the cull needed to kill at least 615 badgers, and no more than 1,091. In fact, just 274 were killed – less than half of the minimum figure.

Defra today blamed the failure on the “challenges of extensive unlawful protest and intimidation” in Gloucestershire – an admission that may only encourage badger groups opposing the cull.

In West Somerset the cull killed 341 badgers – just within the specified range of 316 – 785.

But badger expert Professor Rosie Woodroffe has dismissed the targets as having been, in effect, fixed at dangerously low levels to make them easier to meet.

“The targets are all rubbish because they are based on rubbish data. In Somerset they set themselves an unbelievably easy target”, she told the Guardian in October. “It was not set in line with their aim – to kill at least 70% of badgers. They have completely thrown that out.”

Spread of bovine TB could actually be increased

The danger is that if too few badgers are killed, populations are disrupted, causing increased badger movements and more spreading of bovine TB among badgers and cattle. To prevent that, the aim is to kill 70% of the population.

The badger population was estimated by counting the number of setts in the area, then multiplying it by the estimated number of badgers per sett. In Somerset, that produced a range of possible minimum cull numbers, from 316 to 1,776 badgers – of which Defra chose the lowest possible figure.

It’s therefore highly likely that the 341 badgers killed in the Somerset cull is well under the 70% threshold for effectiveness and will serve only to disrupt badger society and increase the spread of bovine TB.

“In a clear attempt to bury bad news over Christmas, the report paints a picture of a disastrous policy which has clearly failed on scientific, economic and humaneness grounds”, said Dominic Dyer, chief executive of the Badger Trust.

Gloucestershire cull should not continue unless more effective

Nigel Gibbens, chief veterinary officer at Defra, said: “Given the level of badger population reduction estimated in the Somerset cull area in 2014, the benefits of reducing the disease in cattle over the planned four-year cull can be expected to be realised there.”

However he issued a stark warning over the Gloucestershire cull: “Given the lower level of badger population reduction in the Gloucestershire cull area over the past two years, the benefits of reducing the disease in cattle may not be realised there.”

And he added that culling should continue in Gloucestershire in 2015 only if there are “reasonable grounds for confidence that it can be carried out more effectively”.

He also conceded that there was “room for disagreement” over the humaneness of the culling, with some badgers surviving in agony for five minutes after being shot.

Environment secretary Liz Truss insisted: “The chief vet’s advice is that the results of this year’s cull in Somerset show they can be effective. That is why I am determined to continue with a comprehensive strategy that includes culling.”

Bring an end to this cruel policy!

But Dyer disagrees: “Despite spending millions of pounds of taxpayers' money, the DEFRA Chief Veterinary Officer admits for the first time today that the badger cull is failing. It's now time for the Government to admit it has got it wrong and bring an end to this disastrous, cruel policy once and for all.

“It should now follow the example of Wales and introduce annual TB testing for cattle combined with tighter biosecurity and cattle movement controls, with compliance linked to CAP single payments for farmers.

“This policy has delivered a 48% drop in the number of cattle slaughtered for TB in Wales in the last 5 years without killing any badgers at all.”

 


 

Oliver Tickell edits The Ecologist.

 

 






All over the world, renewables are beating nuclear





With many of the UK’s old nuclear power plants off-line due to faults and prospects for their ultimate replacement looking decidedly shaky, it is good that the renewable energy alternatives are moving ahead rapidly.

In 2013 nuclear supplied around 18% of UK electricity. But in the third quarter of 2014 nuclear output fell 16.2% due to outages, while renewable output – which had reached 16.8% of electricity in the second quarter of 2014 – was up 26% on the previous year.

Indeed, there were periods in 2014 when wind alone met up to 15% of UK power demand, over-taking nuclear, and it even briefly achieved 24%.

What next? The financial woes of French developers Areva and EDF may mean that their £24 billion 3.4 GW Hinkley nuclear project, despite being heavily subsidised by British taxpayers and consumers, will get delayed or even halted, unless China or the Saudis bail it out.

Meanwhile, wind has reached 11GW, with 4GW of it offshore, while solar is at 5GW and rising, with many new projects in the pipeline. By 2020 we may have 30GW of wind generation capacity and perhaps up to 20GW of solar.

Renewables get cheaper, nuclear gets more expensive

It’s true that this will require subsidies, but the technology is getting cheaper and by the time Hinkley is built, if it ever is, the Contact for a Difference (CfD) subsidy for on-land wind, and maybe even for solar, will be lower than that offered to the Hinkley developers (£92.5/MWh).

Indeed, some say solar won't need any subsidies in the 2020s, while offshore wind projects could be going ahead with CfD contracts below £100/MWh – and without the £10 billion loan guarantee that Hinkley has been given.

The simple message is that renewables are getting cheaper and more competitive, while nuclear remains expensive, and its cost may well rise – requiring further subsidies.

The completion of the much delayed EPR at Flamanville, similar to the Hinkley design, has been put back by yet another year, to 2017, putting it even more over-budget.

The EPR being built in Finland, work on which started in 2005, and which was originally scheduled to go live in 2009, is now not likely to be completed until late 2018. It’s now almost twice over budget.

It’s hardly surprising then that most of the major EU power companies and utilities have backed away from nuclear, including SSE, RWE and Siemens, and most recently E.ON, in favour of renewables.

And globally it seems clear that renewables are winning out just about everywhere. They now supply over 19% of global primary energy and 22% or more of global electricity. By contrast nuclear is at around 11% and falling.

Country by country, renewables are taking over the world

Looking to the future, there are scenarios for India, Japan, South Korea, the USA and the EU, looking to renewables to supply most of their electricity, with Germany and Denmark of course already acting on them – Germany is aiming to get at least 80% of its electricity from renewables by 2050, Denmark 100%.

For example, a WWF report says China could get 80% of its electricity from renewables by 2050, at far less cost than relying on coal, enabling China to cut its carbon emissions from power generation by 90% without compromising the reliability of the electric grid or slowing economic growth. And with no need for new nuclear.

Although its renewables are not as developed as China's, India has been pushing them quite hard, with wind at nearly 20GW on top of 39GW of existing large hydro. PV is at 2.6GW grid-linked so far, but Bridge to India is pushing for 100GW by 2020.

Funding problems and policy changes have bedevilled the development of renewables in India, as have weak grids, with some saying that off-grid or mini-grid community projects ought to be the focus.

The new government in India certainly faces some challenges. But WWF / TERI have produced an ambitious ‘near 100%’ by 2050 renewables scenario, with over 1,000GW each of wind and solar, plus major biomass use.

The US now gets nearly 15% of its electricity from renewables, with wind power projects booming, and Obama's policy of cutting emissions from coal plants by 30% by 2030 should speed that up. The US National Renewable Energy Lab has developed scenarios showing that the US could potentially generate 80% of its electricity from renewables by 2050.

In Japan renewables had been given a low priority, but following the Fukushima nuclear disaster in 2011, Japan is now pushing ahead with some ambitious offshore wind projects, using floating wind turbines, and a large PV programme.

Overall, Japan has given the go-ahead to over 70GW of renewable energy projects, most of which are solar. Longer term, a '100% by 2050' ISEP renewables scenario has around 50GW of wind, much of it offshore, and 140GW of PV.

Rapid progress is being made in South America, although less so as yet in most of Africa. But the International Renewable Energy Agency says that Africa has the potential and the ability to utilise its renewable resources to fuel the majority of its future growth.

Yet the UK remains firmly stuck in a 1950s vision of the future

Back in the UK though, we have our large nuclear programme, with EDF one of the main backers. It can’t build any plants in France (which is cutting nuclear back by 25%), but the UK seems to be willing to host several – and pay heavily for them!

Similarly, Hitachi and Toshiba stand no chance of building new plants in Japan, but the UK is offering significant long-term subsidies and loan guarantees for their proposed UK projects – a far better deal than is being offered to renewables.

Here the main focus seems to be on why we can’t afford offshore wind, or accept on-land wind, or live with large solar farms.

We struggle on – now generating over 15% of UK electricity from renewables, but far behind most of the rest of the EU, and especially the leaders, with some already having achieved their 2020 targets, nearly all of which were set higher than that for the UK.

In fact, despite having probably the largest potential of any EU country, we are still only beating Luxembourg and Malta.

It’s embarrassing …

 


 

David Elliott is Emeritus Professor of Technology Policy at the Open University.

Book: David’s latest book, ‘Renewables: a review of sustainable energy supply options’ is available from the Institute of Physics and the Network for Alternative Technology and Technology Assessment.

 

 






Antarctica: warming ocean trebles glacial melt





The Antarctic ice shelf is under threat from a silent, invisible agency – and the rate of melting of glaciers has trebled in the last two decades.

The ocean waters of the deep circumpolar current that swirl around the continent have been getting measurably warmer and nearer the ocean surface over the last 40 years, and now they could be accelerating glacier flow by melting the ice from underneath, according to new research.

And a separate study reports that the melting of the West Antarctic glaciers has accelerated threefold in the last 21 years.

West Antarctic ice sheet – a potential 4.8m of sea level rise

If the West Antarctic ice sheet were to melt altogether – something that is not likely to happen this century – the world’s sea levels would rise by 4.8 metres, with calamitous consequences for seaboard cities and communities everywhere.

Researchers from Germany, Britain, Japan and the US report in Science journal that they base their research on long-term studies of seawater temperature and salinity sampled from the Antarctic continental shelf.

This continued intrusion of warmer waters has accelerated the melting of glaciers in West Antarctica, and there is no indication that the trend is likely to reverse.

Other parts of the continent so far are stable – but they could start melting for the first time. “The Antarctic ice sheet is a giant water reservoir”, said Karen Heywood, professor of environmental sciences at the University of East Anglia, UK.

“The ice cap on the southern continent is on average 2,100 metres thick and contains 70% of the world’s fresh water. If this ice mass were to melt completely, it could raise global sea level by 60 metres. That is not going to happen, but it gives you an idea of how much water is stored there.”

Temperatures in the warmest waters in the Bellingshausen Sea in West Antarctica have risen from 0.8°C in the 1970s to about 1.2°C in the last few years.

“This might not sound much, but it is a large amount of extra heat available to melt the ice”, said Sunke Schmidtko, an oceanographer at the Geomar Helmholtz Centre for Ocean Research in Kiel, Germany, who led the study. “These waters have warmed in West Antarctica over 50 years. And they are significantly shallower than 50 years ago.”

Unpredictable consequences on ice and ecology

The apparent rise of warm water, and the observed melting of the West Antarctic ice shelf, could be linked to long-term changes in wind patterns in the Southern Ocean. Although melting has not yet been observed in other parts of the continent, there could be serious consequences for other ice shelves.

The shelf areas are also important for Antarctic krill – the little shrimp that plays a vital role in the Antarctic ocean food chain – as they serve as protective ‘nurseries’ for the young crustaceans. Warming ice shelves may have unpredictable consequences for spawning cycles, krill abundance, and wider ocean biodiversity.

Meanwhile, according to US scientists writing in Geophysical Research Letters, the glaciers of the Amundsen Sea in West Antarctica are shedding ice faster than any other part of the region.

Tyler Sutterley, a climate researcher at the University of California Irvine, and NASA space agency colleagues used four sets of observations to confirm the threefold acceleration.

They took their data from NASA’s Gravity Recovery and Climate Experiment (GRACE) satellites, from a NASA airborne project called Operation IceBridge, from an earlier satellite called ICESat, and from readings by the European Space Agency’s Envisat satellite.

Ice loss accelerating by 16 billion tonnes a year

The observations spanned the period 1992 to 2013 and enabled the researchers to calculate the total loss of ice, and also the rate of change of that loss. In all, during that period the region's glaciers lost 83 billion tonnes of ice per year on average.

Over the full period from 1992, the rate of loss accelerated by an average of 6.1 billion tonnes a year each year; between 2003 and 2009 the acceleration averaged 16.3 billion tonnes a year each year. The rate at which the ice loss is growing is therefore now nearly three times the longer-term figure.
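
The 'nearly three times' figure follows directly from the two acceleration values quoted above; a minimal sketch of the arithmetic (the variable names are mine, purely for illustration):

```python
# The two published acceleration figures, in billion tonnes per year, per year.
acceleration_1992_2013 = 6.1   # average over the full 1992-2013 record
acceleration_2003_2009 = 16.3  # average over 2003-2009

ratio = acceleration_2003_2009 / acceleration_1992_2013
print(round(ratio, 1))  # 2.7 - i.e. 'nearly three times' the longer-term figure
```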

“The mass loss of these glaciers is increasing at an amazing rate”, said Isabella Velicogna, Earth system scientist at both UC Irvine and the NASA Jet Propulsion Laboratory.

 


 

Tim Radford writes for Climate News Network.

 

 






I’ll talk politics with climate change deniers – but not science





There are many complex reasons why people decide not to accept the science of climate change. The doubters range from the conspiracy theorist to the sceptical scientist, or from the paid lobbyist to the raving lunatic.

Climate scientists, myself included, and other academics have strived to understand this reluctance. We wonder why so many people are unable to accept a seemingly straight-forward pollution problem.

And we struggle to see why climate change debates have inspired such vitriol.

These questions are important. In a world increasingly dominated by science and technology, it is essential to understand why people accept certain types of science but not others.

In short, it seems when it comes to climate change, it is not about the science but all about the politics.

Risky business

Back in the late 1980s and early 1990s, differing views on climate science were put down to how people viewed nature: was it benign or malevolent? In 1995 leading risk expert John Adams suggested there were four myths of nature, which he represented as a ball sitting on differently shaped landscapes.

  1. Nature benign. Nature is forgiving of any insults that humankind might inflict upon it, and it does not need to be managed.
  2. Nature ephemeral. Nature is fragile, precarious and unforgiving, and environmental management must protect it from humans.
  3. Nature perverse/tolerant. Within limits, nature can be relied upon to behave predictably, and regulation is required to prevent major excesses.
  4. Nature capricious. Nature is unpredictable and there is no point in management.

Different personality types can be matched on to these different views, producing very different opinions about the environment. Climate change deniers would map on to number one, Greenpeace number two, while most scientists would be number three. These views are influenced by an individual’s own belief system, personal agenda (either financial or political), or whatever is expedient to believe at the time.

However, this work on risk perception was ignored by mainstream science, because science has up to now operated on what is called the knowledge deficit model. This suggests that people do not accept the science because there is not enough evidence; therefore more needs to be gathered.

Scientists operate in exactly this way, and they wrongly assume the rest of the world is equally rational and logical. It explains why over the past 35 years a huge amount of work has gone into investigating climate change.

However – despite many thousands of pages of IPCC reports – the weight of evidence argument does not seem to work with everyone.

No understanding of science?

At first, the failure of the knowledge deficit model was blamed on the fact that people simply did not understand science, perhaps due to a lack of education.

This was exacerbated as scientists from the late 1990s onwards started to be drawn into discussions about whether people believed or did not believe in climate change.

The use of the word ‘belief’ is important here, as it was a direct jump from the American-led argument between the science of evolution and the belief in creation.

But we know that science is not a belief system. You cannot decide that you believe in penicillin or the principles of flight while at the same time disbelieving that humans evolved from apes or that greenhouse gases can cause climate change.

This is because science is an expert trust-based system that is underpinned by rational methodology that moves forward by using detailed observation and experimentation to constantly test ideas and theories.

It does not provide us with convenient yes/no answers to complex scientific questions, however much the media portrayal of scientific evidence would like the general public to ‘believe’ this to be true.

It’s all about the politics

However, many who deny climate change is an issue are extremely intelligent, eloquent and rational. They would not see the debate as one about belief and they would see themselves above the influence of the media.

So if the lack of acceptance of the science of climate change is neither due to a lack of knowledge, nor due to a misunderstanding of science, what is causing it?

Recent work has refocused on understanding people’s perceptions and how they are shared, and as climate denial authority George Marshall suggests these ideas can take on a life of their own, leaving the individual behind.

Colleagues at Yale University developed this further by using the views of nature shown above to define different groups of people and their views on climate change. They found that political views are the main predictor of the acceptance of climate change as a real phenomenon.

This is because climate change challenges the Anglo-American neoliberal view that is held so dear by mainstream economists and politicians. Climate change is a massive pollution issue that shows the markets have failed and it requires governments to act collectively to regulate industry and business.

In stark contrast neoliberalism is about free markets, minimal state intervention, strong property rights and individualism. It also purports to provide a market-based solution via ‘trickle down’ enabling everyone to become wealthier.

But calculations suggest that bringing the incomes of the very poorest people in the world up to just $1.25 per day would require at least a 15-fold increase in global GDP. This means huge increases in consumption, resource use and, of course, carbon emissions.

It’s easier to deny climate change, than to deny our own ideologies

So in many cases the discussion of the science of climate change has nothing to do with the science and is all about the political views of the objectors. Many perceive climate change as a challenge to the very theories that have dominated global economics for the last 35 years, and the lifestyles that it has provided in developed, Anglophone countries.

Hence, is it any wonder that many people prefer climate change denial to having to face the prospect of building a new political (and socio-economic) system, which allows collective action and greater equality?

I am well aware of the abuse I will receive because of this article. But it is essential for people, including scientists, to recognise that it is the politics and not the science that drives many people to deny climate change.

This does mean, however, that no amount of discussing the ‘weight of scientific evidence’ for climate change will ever change the views of those who are politically or ideologically motivated.

Hence I am very sorry – but I will not be responding to comments posted concerning the science of climate change, though I am happy to engage in discussion on the motivations of denial.

 


 

Mark Maslin is Professor of Climatology at University College London.

This article was originally published on The Conversation. Read the original article.

The Conversation