Tuesday, 19 May 2009

Who's responsible for EMF Risk Assessment and Risk Management

The SAGE II Science Forum has set out to investigate divergent views in science:

  • Why do they exist?
  • Are there valid, objective reasons why different views can exist?
  • If so, what are the reasons for the differences?
  • How should policy makers handle different but valid conclusions in this case?
The discussions have been very interesting so far, and a lot of ground has been covered on the principles on which science (and specifically EMF science) is assessed: from the background of peer review and the nature of journals, through the definitions of cellular studies, animal studies, and epidemiology, to biases, confounding factors, and the importance of replication.

The overview was useful, and it covered the methods by which scientists carry out, publish, and assess science. However, it raised one particularly pertinent issue that scientists really don't like to admit. Whilst the separation between risk assessment and risk management is well understood in principle, it is still unclear who exactly is primarily responsible for each, and how much crossover is implicitly necessary for both scientists and public policy makers. With so many tools and different ways of assessing the value of a study at their disposal, the choice made in any given instance is almost entirely subjective - either a subjective decision about how to objectively select the appropriate method, or simply a subjective choice of method, a kind of "I think this one is better so I'll use it" approach.

It isn't just the ways of assessing the science that cause divergence; there is also the question of what to conclude about the evidence once it has been collected and weighted. Even with exactly the same methods of weighting and assessing the science, groups of scientists often take guidance / policy decisions into their own hands with statements such as "insufficient evidence" and "no convincing evidence", without actually defining the level at which sufficiency would be met, or the point at which they would consider the evidence convincing. Again, these thresholds are subjective and completely undefined - it is quite feasible for different bodies to have very different thresholds and use exactly the same terminology!

By refusing to answer the question "What level of evidence is enough to actually do something about it?", scientists claim that they are successfully leaving risk management decisions up to policy makers so that they can focus on the science itself, but of course this is not strictly true. Because the evidence in scientific review summaries is described with arbitrary, subjective terms based on undefined thresholds, lay policy makers have no way of knowing what strength of association is being implied for any given risk factor, nor with what certainty the association is considered to be "true". They end up having to rely largely on their own, equally subjective, interpretation of the words chosen by scientific reviewers, and policy therefore ends up dictated largely by the choice of reviewers used.

Because of this, and because of the tendency of scientists to feel that the methods of assessment they have used are more robust or appropriate than those used by other review groups, polarisation is inevitable, with both sides believing themselves to be more justified than the other. Established "expertise" is no help in deciding either: for example, with the current EMF guidelines issue, we have ICNIRP/WHO/SCENIHR on one side, considered by many to be a cornerstone of expertise in the field, and BioInitiative on the other, with no fewer than three past Bioelectromagnetics Society presidents and some of the longest-standing and most highly published authors in the fields of electromagnetic radiation and health, such as Professors Olle Johansson and Henry Lai. It would be disingenuous to label either side as academically weak to any extent, yet the views held are almost at opposite ends of the spectrum.

It is very hard to see a viable way to combat this issue, but a good place to start would be to ensure that the words used are objectively defined, and then used consistently in adherence to those definitions. David Gee, an outspoken administrator at the European Environment Agency, has been covering this issue for some years now, and has suggested a number of objectively definable categories into which evidence can be assigned:

  • Beyond all reasonable doubt
  • Reasonable certainty
  • Balance of probabilities/evidence
  • Strong possibility
  • Scientific suspicion of risk
  • Negligible/insignificant
With an accompanying "most likely magnitude of risk increase", the level of evidence required for action can therefore be genuinely left in the hands of policy makers, who can be more confident in understanding the levels of certainty in the literature (a minimal illustration of the idea follows below). This won't help address the issue of different scientists choosing different weightings and strengths for different papers, but at least the output becomes less ambiguous.
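
To make this concrete, here is a minimal sketch (in Python, purely for illustration) of how such a scheme might be encoded so that every review using one of these terms is tied to the same definition. The probability bands and the ReviewConclusion structure are my own hypothetical placeholders, not values proposed by David Gee or the EEA:

    from dataclasses import dataclass

    # Hypothetical, illustrative probability bands for the verbal categories above.
    # Real definitions would have to come from an authoritative body; these numbers
    # are placeholders, not figures proposed by David Gee or the EEA.
    EVIDENCE_CATEGORIES = {
        "Beyond all reasonable doubt":       (0.95, 1.00),
        "Reasonable certainty":              (0.80, 0.95),
        "Balance of probabilities/evidence": (0.50, 0.80),
        "Strong possibility":                (0.20, 0.50),
        "Scientific suspicion of risk":      (0.05, 0.20),
        "Negligible/insignificant":          (0.00, 0.05),
    }

    @dataclass
    class ReviewConclusion:
        """One review's conclusion, expressed against the shared definitions."""
        category: str       # must be a key of EVIDENCE_CATEGORIES
        risk_increase: str  # most likely magnitude, e.g. "~1.5-2x in the most exposed group"

    def summarise(conclusion: ReviewConclusion) -> str:
        """Produce an unambiguous one-line summary for policy makers."""
        low, high = EVIDENCE_CATEGORIES[conclusion.category]
        return (f"Certainty: '{conclusion.category}' "
                f"(defined here as {low:.0%}-{high:.0%} likelihood that the association is real); "
                f"most likely magnitude of risk increase: {conclusion.risk_increase}")

    # Two reviews using the same wording now necessarily mean the same thing.
    print(summarise(ReviewConclusion("Strong possibility", "~1.5-2x in the most exposed group")))

The point is not the particular numbers, but that any reviewer claiming to conform to the scheme is forced to mean the same thing by the same words.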

Naturally, to have any chance of being taken up and taken seriously, this would need to be produced by an authoritative organisation such as the European Commission, as a white paper or a set of guidance criteria that reviews can "conform to". Papers or reviews could then choose to conform to these standards by explicitly stating that these are the definitions they are using. This, combined with greater transparency about the references included in a review and the weightings used for the individual papers (including those explicitly selected for exclusion), would give policy makers much more complete information on which to base their decisions.

2 comments:

  1. I agree that it is very tricky to properly apply scientific findings to public policy but I don't agree that the solution involves behavioral changes by either party. I think that everyone should stick to what they're best at and that means letting the scientists stick to doing science and letting our representatives represent us.

    What I feel is missing is a 3rd party in the process that sits between the two groups. In the US, the group that does that at the executive level is the Office of Science and Technology Policy. Other branches have other similar bodies. You can argue how well this works or not but I believe this is the best approach to the communication problem you're describing.

  2. Hi Coco, many thanks for commenting!

    I'm unsure what you mean by "the scientists"? As I wrote in the post above, there are very prominent scientists on both sides of the debate who disagree on the issue fairly strongly. Both feel that their approach is scientific, and both have published original research and been involved in review work that gives an indication of their viewpoint and how it has been reached.

    The 3rd party idea works fine if you have on one side a bunch of scientists that broadly agree, and on the other side a bunch of policy makers that would like to know what they need to do with their policies, and simply need a way of communicating the information from one side to the other. It doesn't help in areas of genuine scientific disagreement.

    Best Regards,
    - Graham
