Navigating the risk landscape that surrounds nanotechnology development can be a daunting task – especially if you are an early career researcher just getting started in the field.  There are plenty of studies and speculations around what might – or might not – be risky about nanoscale science and engineering.   But surprisingly, there are relatively few guideposts to help researchers plot a sensible course through this landscape as they set out to develop successful, safe, and responsible products.

Back in June, I wrote about seven basic “guideposts” that I find helpful in thinking about nanotech risks from a researcher’s perspective.  You can read the full article in the journal Nature Nanotechnology – here are the highlights though:

1.  Risk starts with something that is worth protecting.

We usually think of nanotechnology “risk” as the probability of disease or death occurring – or in the case of the environment, damage to ecosystems – from release of and exposure to engineered nanomaterials.  Yet the risk landscape that lies between novel nanotechnology research and successful product is far more complex, and being aware of its shifting hills and valleys can help avoid early, costly mistakes.

When stripped down to fundamentals, risk concerns threats to something you or others value.  Health and well-being tick the box here, alongside integrity and sustainability of the environment.  Yet so do security, friendships, social acceptance, and our sense of personal and cultural identity.  These broader dimensions of “value” often depend on who is defining them, and the circumstances under which they are being defined.  Yet they are critically important in determining the progress of nanoscale science and engineering in today’s increasingly interconnected world.

2.  “Nanotechnology” is an unreliable indicator of risk.

While the products of nanotechnology do present risks that need to be understood and addressed, the term “nanotechnology” itself is an unreliable indicator of risk.

Of course, some nanoscale materials have attributes that are associated with a potential to cause harm.  The size, surface coating, surface charge, composition, and shape of nanoscale particles, for instance, can indicate how a material might interfere with biological pathways.  Yet attributes such as these are rarely unique to the nanoscale, meaning that the prefix “nano” cannot effectively be used as a predictor of risk on its own.

Instead of using the term “nanotechnology” to indicate possible risk, it’s far more productive to consider the actual properties of a material, and the harm they might cause, irrespective of whether the material is considered “nano” or not.

3.  We live in a post-chemicals world.

While “nanotechnology” may be an unreliable indicator of risk, the many different attributes of materials coming out of nanoscale science and engineering still present risks that cannot always be navigated using chemicals-based approaches. Many of the risk analysis and governance methods we rely on are based on the assumption that nanomaterials can be precisely defined, and remain the same between tests, laboratories, and locations – just like chemicals.  Yet unlike their constituent chemicals, nanomaterials cannot be precisely defined.

Instead of being described by precise formulas, materials – especially powders and aerosols – are characterized by statistical parameters such as the geometric mean particle diameter and the geometric standard deviation.  These are always a compromise though, and never capture the full complexity of the material.
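To make this concrete, here is a minimal sketch in Python, using a handful of hypothetical measured particle diameters (illustrative values only; a real sizing instrument would report many thousands of measurements), showing how the geometric mean diameter and geometric standard deviation summarize a size distribution:

```python
import numpy as np

# Hypothetical measured particle diameters in nanometers for a powder or aerosol sample.
# (Illustrative values only; real sizing instruments report thousands of measurements.)
diameters_nm = np.array([18.0, 22.0, 25.0, 31.0, 35.0, 42.0, 55.0, 60.0, 78.0, 95.0])

log_d = np.log(diameters_nm)

# Geometric mean diameter: the exponential of the mean of the log-transformed diameters.
geometric_mean_d = np.exp(log_d.mean())

# Geometric standard deviation (GSD): the exponential of the standard deviation of the
# log-transformed diameters.  A GSD of 1.0 would indicate a perfectly monodisperse sample.
gsd = np.exp(log_d.std(ddof=1))

print(f"Geometric mean diameter: {geometric_mean_d:.1f} nm")
print(f"Geometric standard deviation: {gsd:.2f}")
```

Two materials could share identical values of these two parameters yet still differ in shape, surface chemistry, agglomeration state, and batch-to-batch variability, which is why summary statistics like these are always a compromise.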

Treating nanomaterials as if they are precisely defined chemicals can lead to serious errors of judgment where risk is concerned.  It potentially glosses over variations in material properties from batch to batch, from production process to production process, and from day to day.  And it runs the danger of obscuring attributes that may be important in determining risk by relying on crude and irrelevant ways of characterizing materials and exposures.

Rather, it’s more helpful to assess a material’s potential to cause harm in terms of characteristics that are plausibly associated with biologically relevant behavior, with the understanding that these may vary within a material, between batches of the same material, between similar materials produced in different places and using different processes, from day to day, and under different conditions.

4.  Benchmarking is important.

Benchmarking – the process of comparing how one material behaves in toxicity tests against how other materials behave – has always been important in risk research.  Yet as risk research becomes more interdisciplinary, it is sometimes overlooked.

Benchmarking helps contextualize research results and place plausible boundaries around the potential impacts of being exposed to a material.  In other words, it helps make practical sense of observed behaviors, and avoid over-speculation on their relevance.

5.  We have co-evolved with nanoscale materials.

The nanoscale materials we have co-evolved with over millions of years are a particularly relevant set of benchmark materials.  Of course, novel engineered nanoscale materials may interfere with biological processes in unexpected ways, and this may lead to harm.  Yet because we evolved hand in hand with naturally occurring nanoscale materials, it is also important to remember that biological mechanisms exist that have evolved to minimize their impact, and even to take advantage of their presence.

Because of this, our bodies are sometimes surprisingly adept at managing materials that we think of as being highly novel.  On the other hand, nanomaterials that seem quite mundane – fumed silica for instance, or carbon black – may interfere with biological pathways in potentially harmful ways, precisely because those pathways are attuned to materials with some shared characteristics.

When developing new nanomaterials, it’s important to understand how biological and environmental systems respond differently to them, compared to everyday nanomaterials.

6.  How you think about nanotechnology risk is probably incomplete.

Nanotechnology risk is an inherently transdisciplinary field.  And because of this, it’s hard for any one individual or disciplinary group to develop a complete picture of the causal chain between material design and the risk it is likely to result in.  Where limitations in understanding are not acknowledged and addressed, unreliable research and ill-informed decisions can arise.

Nanomaterial designers and engineers, for instance, will often have an exquisite understanding of their materials’ physical and chemical properties, but lack the training or conceptual models to understand how those materials may interfere with biological pathways.  Toxicologists will typically have a deep grasp of biological pathways and how to study perturbations that can lead to harm, but may struggle to understand the materials science that influences material behavior.  And while researchers may have a good grasp of the science of risk, they may fall short when it comes to understanding the social, legal, and policy landscape that also impacts risk, and how it is perceived and managed.

To bridge these understanding gaps, researchers across collaborating disciplines need to be aware of their limitations.  They need to constantly re-assess their assumptions, as well as have the humility to recognize and respect expertise outside of their domain.  And they need to take responsibility for communicating their knowledge clearly and accurately to all collaborators and partners, as well as engaging fully with them.

7.  We need to be quick to question, and slow to respond.

Despite all the caveats above about nanotechnology and risk, we will inevitably develop new materials that have the potential to cause harm.  Because of this, researchers and developers need the freedom to ask the “what if?” questions that are necessary to map out the risk landscape between emerging ideas and responsible future products.

However, there is often a long and tortuous pathway between exploratory research and outcomes that can justifiably be applied outside the lab.  If we’re too eager to advocate action on every interesting result that risk research throws up, we run the additional risk of making hard-to-rescind decisions on the basis of potentially misleading, immature science.  Premature calls to action also run the risk of shutting down speculative research, for fear that a “what if?” question will lead to action before the validity of the question is known.

We need the freedom to conduct speculative and creative risk research, without pre-emptive and potentially harmful decisions being made too early.  Yet we also need the ability to respond proactively where early warnings of potential harm do begin to emerge – even before the science is mature.


These guideposts (which are fleshed out further in the published paper) are far from comprehensive – more than anything, they represent my own admittedly limited and probably occasionally blinkered experiences from working in the field for over 15 years.  They are, however, hopefully, a starting point for identifying what’s important when it comes to ensuring the responsible development of successful nanotechnology applications, now and in the future.  Especially for researchers just getting into the field.

Feature Image: Nano magnetic flux lines. Map showing magnetic flux lines for nickel nanoparticles. Brookhaven National Laboratory. CC BY-NC-ND 2.0