A few weeks ago, I set Friends of the Earth a challenge: What is your worst-case estimate of the human health risk from titanium dioxide and/or zinc oxide nanoparticles in sunscreens?

The challenge came out of an article from FoE on nanomaterials and sunscreens, which I subsequently critiqued on 2020 Science.  Georgia Miller and Ian Illuminato from FoE kindly responded to my challenge – not by rising to it as such, but by fleshing out the justification for the position they take on nanomaterials and sunscreens.

That post led to a useful discussion of the issues, with comments from the NGO community, regulators and respected scientists – one I would highly recommend to anyone interested in nanomaterials and sunscreens.

To wrap things up (for the time being), I thought it would be worth reflecting on some of the issues raised by Georgia and Ian in their response, and the ensuing discussion:

Getting nanomaterials’ use in context. First, Georgia and Ian, very appropriately in my opinion, brought up the societal context within which new technologies and products are developed and used:

“why not support a discussion about the role of the precautionary principle in the management of uncertain new risks associated with emerging technologies? Why not explore the importance of public choice in the exposure to these risks? Why not contribute to a critical discussion about whose interests are served by the premature commercialisation of products about whose safety we know so little, when there is preliminary evidence of risk and very limited public benefit.”

This is a legitimate issue, and one that is touched on by a number of people in the comments.  Decisions on what is developed, what people are exposed to, who decides what is appropriate and what is not, and who bears the consequences while others reap the benefits, go far beyond the science and technology itself.  Jennifer Sass from NRDC picks up on this:

I strongly support a dialogue that has space for both scientific calculations and values and perceptions of risk. We need to make that dialogue public, inclusive, transparent, and thoughtful. Risk is more than a number – it's a face, a person, a community.

Guillermo Foladori also touches on this broader societal context:

We have here two kinds of issues. One is the “scientific” knowledge (are nano-sunscreens harmful?). This is a never-ending issue. Science is a process and not a fact. The other issue, although hidden, is of great importance: focusing on a never-ending scientific discussion is the field that corporations like, while in the meantime the market for such products grows and consolidates, aside from any wondering about the need for such new stuff – or better, which percentage of the population will benefit in the case.

I would suggest that forcing a technology on society has never been acceptable behavior.  But it has certainly been easier to do in the past.  These days, though, we live in a much more crowded, resource-constrained and interconnected world than ever before, which means that the consequences of ill-conceived technology implementation are magnified, and the dynamics of introducing new – and possibly beneficial – technologies are far more complex than they were in the past.

This means that we need to think critically about the broader societal issues associated with technology innovation, and we need to push the dialogue further upstream in the development process – a point Jeff Morris from EPA makes.  This means rethinking how we make decisions in partnership across society, and how we begin to apply ideas like the precautionary principle in a complex world – a point eloquently made by Richard Jones.

But it also means that we need to think carefully about how we use scientific knowledge and data – “evidence” – in making decisions.

Evidence-informed decision-making. At some point, decisions need to be based on information, and in the long run you cannot get away with making that information up!  It’s one thing to critically evaluate the current state of evidence in making decisions, but quite another to preferentially select evidence that supports a predetermined position.  Yet the latter is often the default approach when it comes to influencing decisions – whether by policymakers or consumers.

Having worked at the heart of science-based policy in the US for a number of years, I’m all too familiar with the line of argument that goes “what do we want to achieve?” followed by “what evidence can we find that supports us?”.  Yet this is an approach that ultimately devalues the importance of evidence in making decisions, one that can have serious adverse consequences when decisions are made on dodgy information, and one that is patently unsustainable in the long run.

My original critique of FoE’s article challenged their use of “evidence” in supporting the position they took.  To me, they showed a tendency to use selective pieces of information to sow seeds of doubt in the mind of the reader, rather than to empower the reader to make informed decisions. The social agenda was a laudable one – the use of selective science sound-bites, less so.

This begins to come out when you read the comments on Georgia and Ian’s response from three scientists who have worked on nanoscale materials and the skin.  Despite FoE’s implications that nanoparticles in sunscreens might cause cancer because they are photoactive, Peter Dobson points out that there are nanomaterials used in sunscreens that are designed not to be photoactive. Brian Gulson, whose work on zinc skin penetration was cited by FoE, points out that his studies only show conclusively that zinc atoms or ions can pass through the skin, not that nanoparticles can.  He also notes that the amount of zinc penetration from zinc-based sunscreens is very much lower than the level of zinc people have in their bodies in the first place.  Tilman Butz, who led one of the largest projects to date on nanoparticle penetration through skin, points out that – based on current understanding – the nanoparticles used in sunscreens are too large to penetrate the skin.

These three comments alone begin to cast the potential risks associated with nanomaterials in sunscreens in a very different light to that presented by FoE.  Certainly there are still uncertainties about the possible consequences of using these materials – no-one is denying that.  But the weight of evidence suggests that nanomaterials within sunscreens – if engineered and used appropriately – do not present a clear and present threat to human health.

Yet, because there are uncertainties still, we cannot afford to be complacent here.

Handling uncertainty. And this brings me to the thorny issue of uncertainty.  When we are lacking absolute evidence on safety or risk, what do we do – do we halt progress until we are sure about how safe something is, or do we muddle along until more information is available?

This question is becoming increasingly important as the rate of technology innovation – and the complexity of emerging technologies – accelerates.  Consumers, regulators, businesses and others are being forced more and more to make decisions in the face of increasing uncertainty.  At the same time, we are dependent on technology innovation as a global society – although the idea of “going back to basics” is an attractive one, it’s not going to help the marginalized in an overcrowded and resource-constrained world.  Rather, we need new ideas on how to use science and technology in ways that ensure as many people as possible have an acceptable quality of life.

The question is, how do we do this when we cannot be sure of how safe or dangerous a new technology is?

The Precautionary Principle is one approach to addressing this – and a much misunderstood and misused one – and it is the approach brought up by FoE and others in the context of sunscreens.  It has many formulations – it’s not a hard and fast principle.  But it is currently described by the European Union in this way:

The precautionary principle should be informed by three specific principles:

  • implementation of the principle should be based on the fullest possible scientific evaluation. As far as possible this evaluation should determine the degree of scientific uncertainty at each stage;
  • any decision to act or not to act pursuant to the precautionary principle must be preceded by a risk evaluation and an evaluation of the potential consequences of inaction;
  • once the results of the scientific evaluation and/or the risk evaluation are available, all the interested parties must be given the opportunity to study the various options available, while ensuring the greatest possible transparency.

Besides these specific principles, the general principles of good risk management remain applicable when the precautionary principle is invoked. These are the following five principles:

  • proportionality between the measures taken and the chosen level of protection;
  • non-discrimination in application of the measures;
  • consistency of the measures with similar measures already taken in similar situations or using similar approaches;
  • examination of the benefits and costs of action or lack of action;
  • review of the measures in the light of scientific developments.

This is a pragmatic principle that looks to use evidence and an evaluation of consequences to inform decisions made in the face of uncertainty.  It certainly does not demand that the development or implementation of a new technology be held back until there is certainty over its safety.

The emphasis on the potential consequences of inaction is particularly relevant to today’s world, where we are stuck on a technological tightrope, and where the consequences of not doing something may be more harmful than taking action.  Richard Jones picked up on this in his suggestion for a more relevant application of the Precautionary Principle to emerging technologies:

  1. what are the benefits that the new technology provides – what are the risks and uncertainties associated with not realising these benefits?
  2. what are the risks and uncertainties attached to any current ways we have of realising these benefits using existing technologies?
  3. what are the risks and uncertainties of the new technology?

This seems a useful place to start from when faced with the reality of having to make the best possible decisions in the face of uncertainty, and where inaction isn’t an option.

But to make decisions – even when there are gaping holes in the data – you need something to go on.

So why did I pose the challenge in the first place? Despite suspicions from some that I was merely being provocative with this question, I asked it in all seriousness.  In the face of uncertainty, playing out different potential scenarios is a powerful tool in helping identify the magnitude and nature of the consequences of different choices.

When it comes to using nanomaterials in sunscreens, I genuinely would like to know whether in the worst case we are looking at mass illness and death, isolated cases of skin rashes, or something in between.  Because the likely consequences of using such materials in the future have profound implications for the actions we take now.

If decisions are made now on futures that are unlikely to be realized, not only do we waste resources and effort, but we potentially endanger people’s lives through ill-informed choices.  This cuts both ways – if TiO2 and ZnO nanomaterials in sunscreens are likely to harm a significant number of people to a significant degree, action should be taken to avoid this as soon as possible.  But if the benefits are positive and the impacts likely to be inconsequential, inhibiting the use of such materials could cost lives.

Using the best available information to work through possible scenarios provides insight into which futures are more likely, and where efforts are best focused.  This isn’t about setting exposure levels or conducting quantitative risk assessments – it’s about helping people make informed choices.
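To make “working through possible scenarios” a little more concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of bounding arithmetic I have in mind.  Every number in it is a hypothetical placeholder – an assumption chosen to show the structure of the reasoning (how many people use a product, what fraction are meaningfully exposed, and a plausible chance of harm per exposed person) – not an estimate of actual risk from TiO2 or ZnO nanoparticles.

```python
# Illustrative bounding sketch only: all values below are hypothetical
# placeholders, not measured data. The aim is to show the shape of the
# reasoning, not to produce a real risk estimate.

def bounding_estimate(users_per_year, fraction_exposed, probability_of_harm):
    """Crude upper-bound count of people harmed per year under one scenario."""
    return users_per_year * fraction_exposed * probability_of_harm

# Hypothetical scenarios spanning pessimistic to benign assumptions.
scenarios = {
    "worst case (assumed)":   dict(users_per_year=10_000_000, fraction_exposed=1.0, probability_of_harm=1e-3),
    "intermediate (assumed)": dict(users_per_year=10_000_000, fraction_exposed=0.5, probability_of_harm=1e-6),
    "benign (assumed)":       dict(users_per_year=10_000_000, fraction_exposed=0.5, probability_of_harm=1e-9),
}

for name, params in scenarios.items():
    print(f"{name}: roughly {bounding_estimate(**params):,.0f} people affected per year")
```

Even a toy calculation like this forces the assumptions into the open – how many people are exposed, to what, and with what plausible chance of harm – which is exactly the kind of information needed to judge whether a worst case looks like mass harm, isolated incidents, or something in between.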

And who should do this?  I think any group that has a stake in how contemporary decisions affect future outcomes has a part to play.  I focused on FoE because they were pushing the issue.  And I think they have sufficient people they can draw on to make a stab at working through some scenarios and estimating likely impact.

But at the end of the day, this is something that all stakeholders should be involved in.

Because these are decisions that we are all going to have to live with the consequences of.