Tuesday 19 May 2009

Who's responsible for EMF Risk Assessment and Risk Management?

The SAGE II Science Forum has set out to investigate divergent views in science:

  • Why do they exist?
  • Are there valid, objective reasons why different views can exist?
  • If so, what are the reasons for the differences?
  • How do policy makers handle different but valid conclusions in this case?
The discussions have been very interesting so far, and a lot of ground has been covered on the principles on which science (and specifically EMF science) is assessed: from a background of peer review and the nature of journals, through definitions of cellular studies, animal studies, and epidemiology, to biases, confounding factors, and the importance of replication.

The overview was useful, and it covered the methods by which scientists carry out, publish, and assess science. However, it raised one particularly pertinent issue that scientists really don't like to admit. Whilst the separation between risk assessment and risk management is well understood in principle, it is still unclear who exactly is primarily responsible for each, and how much crossover is implicitly necessary for both scientists and public policy makers. With so many tools and different ways of assessing the value of a study at their disposal, the choice made in any given instance is almost entirely subjective - either a subjective decision about how to objectively select the appropriate method, or simply a subjective decision on the method itself, a kind of "I think this one is better so I'll use it" approach.

It isn't just the ways of assessing the science that cause divergence; there is also the question of what to conclude about the evidence once it has been collected and weighted. Even with exactly the same methods of weighting and assessing the science, groups of scientists then often take guidance / policy decisions into their own hands with statements such as "insufficient evidence" and "no convincing evidence", without actually defining the levels at which sufficiency would be met, or the point at which they would consider the evidence convincing. Again, these thresholds are subjective and completely undefined - it is quite feasible for different bodies to have very different thresholds and use exactly the same terminology!

By refusing to answer the question "What level of evidence is enough to actually do something about it?", scientists claim that they are successfully leaving risk management decisions up to policy makers so that they can focus on the science itself, but of course this is not strictly true. When the evidence in scientific review summaries is characterised with arbitrary, subjective terms based on undefined thresholds, lay policy makers have no way of knowing what strength of association is being implied by any given risk factor, nor with what certainty the association is considered to be "true". They end up having to rely largely on their own, equally subjective, interpretation of the choice of words used by scientific reviewers, and therefore policy ends up dictated largely by the choice of reviewers used.

Because of this, and the tendency of scientists to feel that the methods of assessment they have used are more robust or appropriate than the alternatives used by other review groups, polarisation is inevitable, with both sides believing themselves to be more justified than the other. Established "expertise" is no help in deciding either: for example, with the current EMF guidelines issue, we have ICNIRP/WHO/SCENIHR on one side, considered by many to be a cornerstone of expertise in the field, and BioInitiative on the other, with no fewer than three past Bioelectromagnetics Society presidents and some of the longest-standing and most highly published authors in the fields of electromagnetic radiation and health, such as Professors Olle Johansson and Henry Lai. It would be disingenuous to label either side as academically weak to any extent, yet the views held are almost at opposite ends of the spectrum.

It is very hard to see a viable way to combat this issue, but a good place to start would be to ensure that the words used are objectively defined, and consistently used in adherence to those definitions. David Gee, an outspoken administrator at the European Environment Agency, has been covering this issue for some years now, and has suggested a number of objectively definable categories into which evidence can be assigned:

  • Beyond all reasonable doubt
  • Reasonable certainty
  • Balance of probabilities/evidence
  • Strong possibility
  • Scientific suspicion of risk
  • Negligible/insignificant
With an accompanying "most likely magnitude of risk increase", the level of evidence required for action can therefore be genuinely left in the hands of policy makers, who can be more confident in understanding the levels of certainty in the literature. This won't help address the issue of different scientists choosing different weightings and strengths for different papers, but at least the output becomes less ambiguous.
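As a rough sketch of how such output could be made unambiguous, the snippet below (Python; the class names, field names, and example values are hypothetical illustrations of my own, not any official schema) pairs one of Gee's categories with an explicit "most likely magnitude of risk increase", so that a review conclusion carries both pieces of information rather than an undefined phrase:

    from dataclasses import dataclass
    from enum import Enum

    class EvidenceLevel(Enum):
        # David Gee's suggested categories, from strongest to weakest
        BEYOND_ALL_REASONABLE_DOUBT = 6
        REASONABLE_CERTAINTY = 5
        BALANCE_OF_PROBABILITIES = 4
        STRONG_POSSIBILITY = 3
        SCIENTIFIC_SUSPICION_OF_RISK = 2
        NEGLIGIBLE_OR_INSIGNIFICANT = 1

    @dataclass
    class ReviewConclusion:
        # A review finding expressed against defined categories rather than
        # undefined phrases such as "no convincing evidence"
        risk_factor: str
        outcome: str
        evidence_level: EvidenceLevel
        likely_relative_risk: float  # the accompanying "most likely magnitude of risk increase"

    # Hypothetical example values, purely to show the shape of the output
    conclusion = ReviewConclusion(
        risk_factor="ELF magnetic fields >= 0.4 microtesla",
        outcome="childhood leukaemia",
        evidence_level=EvidenceLevel.STRONG_POSSIBILITY,
        likely_relative_risk=2.0,
    )
    print(f"{conclusion.risk_factor} -> {conclusion.outcome}: "
          f"{conclusion.evidence_level.name}, "
          f"most likely relative risk ~{conclusion.likely_relative_risk}")

The point is not the particular values chosen, but that two reviews using this form can only disagree about the evidence itself, not about what their words mean.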

Naturally, this would need to be produced by an authoritative organisation such as the European Commission, as a white paper or a set of guidance criteria that can be "conformed to", to have any chance of being taken up and taken seriously. Papers or reviews could then choose to conform to these standards by explicitly stating that these are the definitions they are using. This, combined with greater transparency about the references included in a review and the weightings used for the individual papers (including those explicitly selected for exclusion), would give policy makers much more complete information on which to base their decisions.

Friday 6 March 2009

ICNIRP and guidelines

The RRT EMF conference in September 2008 brought together a number of the leading scientists in the EMF world from both the "there is insufficient risk to be concerned" camp and the "there is sufficient risk for serious preventative action" camp.

With both Mike Repacholi (ex-head of the WHO International EMF Project and current chairman emeritus of ICNIRP) and Paolo Vecchia (current chairperson of ICNIRP) talking candidly about the assessment of science used to formulate guidelines, it became very clear that there was a serious gap in addressing areas of scientific uncertainty. For example, Paolo Vecchia explained that ICNIRP guidelines are set to provide protection against scientifically "established" effects, and established effects only. Associations between EMFs and health effects such as cancers are discussed in ICNIRP documentation, but until an association is considered causal no attempt will be made to adjust the guidelines to cater for it.

To reach the status of an established effect, he explained that the supporting research must:

  • be peer-reviewed,
  • be replicated by separate research groups,
  • consistently find the same effects at approximately the same magnitude, and
  • show a clear mechanistic cause and effect.
All four criteria must be fully met before an association can be accepted as an established effect - criteria which are themselves subjectively defined. Mike Repacholi explained that, for the purposes of defining a causal mechanism, there are four separate scientific stages, in hierarchical order:
  1. physics (atomic and molecular interactions with forces),
  2. chemistry (intracellular interactions between atomic and molecular entities),
  3. biology (intercellular interactions, such as nerve signalling pathways, bystander effects, etc.), and then finally
  4. how these interactions manifest themselves as health effects.
He explained that unless all (or at least most) of these stages can be identified, a causal relationship cannot be accepted. Without this, it is not possible to view the association as an established effect.
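Purely as an illustration of how restrictive this bar is, the following sketch (Python; the names and the example are my own, not ICNIRP's actual procedure) encodes the four criteria and the four mechanistic stages as a checklist - an association only counts as established once every criterion, including a mechanism traced through the stages, is satisfied:

    from dataclasses import dataclass, field

    # The four mechanistic stages described by Repacholi, in hierarchical order
    MECHANISTIC_STAGES = ("physics", "chemistry", "biology", "health effect")

    @dataclass
    class AssociationEvidence:
        # Checklist for the "established effect" criteria as described by Vecchia
        peer_reviewed: bool
        replicated_independently: bool
        consistent_effect_size: bool
        stages_identified: set = field(default_factory=set)

        def mechanism_established(self) -> bool:
            # Strict reading: every stage, from physical interaction through to
            # health effect, must be identified ("all, or at least most", per the talk)
            return all(stage in self.stages_identified for stage in MECHANISTIC_STAGES)

        def is_established_effect(self) -> bool:
            return (self.peer_reviewed
                    and self.replicated_independently
                    and self.consistent_effect_size
                    and self.mechanism_established())

    # Hypothetical example: strong, replicated epidemiology but no agreed mechanism
    elf_association = AssociationEvidence(
        peer_reviewed=True,
        replicated_independently=True,
        consistent_effect_size=True,
        stages_identified={"biology"},
    )
    print(elf_association.is_established_effect())  # False - mechanism incomplete

Under this scheme, even a consistent, replicated epidemiological association fails the test for as long as any mechanistic stage remains unexplained.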

Neither of these two speakers denied that there was evidence of health effects far below these guideline levels, but both made it very clear that until the evidence was sufficiently strong to be considered causal it was a) not appropriate to use the evidence in the context of setting guidance levels, and b) not appropriate to present the evidence to public policy makers whilst uncertainty remains. The idea of presenting more than one possibility, and possible preventative measures with associated estimated levels of certainty, was considered to be outside their remit. This view was supported by Anders Ahlbom, and Mike Dolan made a statement suggesting that "policy makers expect one clear outlook that they can perceive as the 'truth' that they can act on, and would not accept a document with multiple potential outcomes".

This of course is quite an assertion. The differentiation between research, risk assessment, and risk management is quite widely accepted. Research investigates theories and then publishes results. Risk assessors collate these results and calculate their likely impact. Risk managers read the risk assessments and decide on an appropriate, proportional response. In his presentation at the conference, Anders Ahlbom was adamant that scientists should be responsible for the research and the risk assessment, but that policy decisions and risk management practices were the realm of policy makers instead. To make this claim implies that there is a clear divide, without overlap, which cannot be the case: risk management is tied implicitly into risk assessment. Deciding what sort of interpretation of the results is reasonable (including the possibility of multiple interpretations and all of their potential impacts) is fundamentally a risk management decision, as the criteria used will be based on an evaluation of social and political priorities. Indeed, the EU's acceptance of the Precautionary Principle requires that public risk managers are made aware of reasonable concerns where harm may occur but where there is still considerable scientific uncertainty - this is only possible with a consideration of multiple outcomes with different weightings based on impact and levels of certainty.

By these criteria, it is clear that for both ICNIRP and the WHO, to accept and publish an association between EMFs and a number of health effects, the certainty that the association is causal must be extremely high - in the region of 95 to 98%. It is worth bearing in mind that risk factors for smoking, asbestos, thalidomide, lead in petrol, etc. would not have been identified using these criteria. This does not mean that the approach is invalid per se, as it is very important to have assessment criteria for establishing effects that are beyond reasonable doubt, but it is next to useless for policy makers who are attempting to pre-empt large-scale health catastrophes by making risk management judgements in areas of scientific uncertainty. Whilst proportionality is a crucial factor, it is not for risk assessors to decide to withhold significant information from public risk managers because of a lack of "full scientific certainty".

Friday 30 January 2009

Wikipedian methodology and logic

I have found that Wikipedia generally has an outstanding collection of information, with good principles on weight of evidence and citation of notable material, and a comprehensively thought-out set of rules on how articles should be edited and maintained. For non-controversial issues, it can often be a reliable wealth of information, particularly on many areas of science and medicine.

However, it really struggles to deal with controversy in areas of science with relatively low degrees of public notability. For example, overhead powerline health effects are currently quite strongly disputed in the scientific arena. Meta-analyses by Ahlbom and Greenland had shown back in 2000 that the association between childhood leukaemia and extremely low frequency (ELF) magnetic fields of 0.4 microTesla (4 milliGauss) and above was quite clear, albeit not fully understood, which then triggered the IARC 2B classification as a possible carcinogen in 2002.
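For reference, the microTesla and milliGauss figures above are the same quantity in different units (1 microTesla = 10 milliGauss); the trivial sketch below, with a hypothetical helper name of my own, just makes the conversion and the 0.4 microTesla cut-off explicit:

    MILLIGAUSS_PER_MICROTESLA = 10.0  # 1 microtesla = 10 milligauss
    CUTOFF_MICROTESLA = 0.4           # exposure category from the pooled analyses

    def at_or_above_cutoff(field_milligauss: float) -> bool:
        # Convert a reading in milligauss to microtesla and compare with 0.4 microtesla
        return (field_milligauss / MILLIGAUSS_PER_MICROTESLA) >= CUTOFF_MICROTESLA

    print(at_or_above_cutoff(4.0))  # True: 4 mG is 0.4 microtesla
    print(at_or_above_cutoff(2.0))  # False: 2 mG is 0.2 microtesla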

This association was not extended to residential proximity to powerlines, however, despite the obvious fact that living near powerlines increases exposure to ELF magnetic fields (exposure is around 0.4 microTesla at approximately 60-70 metres from 275 kV and 400 kV overhead powerlines). The connection was not generally acknowledged in the literature until June 2005, when Draper published a study (now often referred to as the "Draper Report") in conjunction with colleagues at the Childhood Cancer Research Group and National Grid.

After years of work and debate (seven years after the scientific community had generally accepted the association!), there is now a begrudging semi-acknowledgement by the World Health Organisation that childhood leukaemia may be associated with ELF magnetic fields, but no real mention of overhead transmission lines - something which, if past experience is anything to go by, should be acknowledged somewhere around 2012.

How does this relate to Wikipedia? Well, for fully understandable reasons, Wikipedia relies on the notability of its sources as a "check" to maintain accuracy. As such, documents written by knowledgeable hobbyists such as myself, or by independent scientific experts such as the BioInitiative group, are given considerably less weight than the WHO or the UK Health Protection Agency. This makes complete sense (although I'd argue that discounting the BioInitiative Report itself is quite hard to justify given the nature and reputation of the majority of its authors - the argument that it is self-published carries little weight when the report is 600 pages long, something that could not be submitted to any peer-reviewed journal anyway), but it does mean that Wikipedia is always forced to represent information and science that is up to a decade out of date, which puts a large delay on scientific information reaching the general public.

People are generally underestimated in their ability to deal with complex arguments and levels of uncertainty. Wikipedia is designed to be an openly editable encyclopaedia, so it is perhaps not the place to present such arguments, but at the same time there needs to be some recognition of where the limits of knowledge lie. It is useful to explain what is considered to be understood, but it is also important to admit "it is not known whether ...", without the need for caveats, particularly in areas of science and health.