Could we one day 3D print Arnold Schwarzenegger’s brain? Before you ask, yes, this is a post about risk. And no, I’m not talking about the dangers of immortalizing the real-life biological brain of the star of Terminator Genisys.
But to begin somewhere near the beginning:
3D Printing
3D printing – and additive manufacturing more broadly – is on a roll. The idea of creating objects by building them up layer by layer has been around for a while. But over the past couple of years, there have been massive advances in access to low-cost, extremely sophisticated 3D printing technologies.
At one end of the spectrum, you have devices like the $100 Peachy Printer. At the other, industrial 3D printers that are capable of making on-demand parts for jet engines and other high-performance products. And in between, printers that are enabling everyone from kids and hobbyists to entrepreneurs to make stuff that wasn’t possible to make just a few years ago.
The technology is opening new doors to how products are made. But it’s also potentially leading to new health risks. Whether it’s the products of 3D printers (how do you control weapons that can be printed at source, or ensure the safety of a bespoke 3D printed car?) or the emissions from the devices themselves (just how many 3D printers in a classroom does it take before the kids are inhaling more nanoparticles and fumes than is healthy?), 3D printing raises questions around risk and safety.
Environmental Implications of Additive Manufacturing
This past October, I participated in a National Science Foundation workshop on the Environmental Implications of Additive Manufacturing. It was a valuable meeting – I want to emphasize that, just in case what follows makes you suspect my mind began to drift during the proceedings. We talked extensively about the potential health risks of 3D printing and other forms of additive manufacturing, and how these may be avoided (a report will be produced – probably next year). But I must confess, as I learned about what is likely to be possible in the near future, I did find myself mulling over some of the more speculative implications of 3D printing.
There’s a saying that, in additive manufacturing, complexity is free. Of course, nothing is totally free. But at the workshop I was particularly taken by the relatively low investment and energy costs associated with generating incredibly complex structures using techniques such as 3D printing. This is core to the transformative nature of additive manufacturing – it’s what fundamentally enables us to create products with processes such as 3D printing that are far beyond the reach of more conventional manufacturing technologies. It’s also what will potentially enable us to create devices and products that present us with truly emergent risks – simply because we can make things we’ve never been able to make before.
Complexity is Cheap
There comes a point with conventional (i.e. non-additive) manufacturing where it becomes economically unfeasible to manufacture structurally complex products. It’s just too difficult, or too expensive. In contrast, within resolution and material capabilities, few such limits exist for 3D printers. A very simple 2D analogy is printing images on an inkjet printer. Within the resolution limits of the printer, it’s just as easy to print an incredibly detailed high resolution photograph as it is a black rectangle – complexity is cheap.
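To make the inkjet analogy concrete, here’s a minimal sketch – my own illustration, not from the original argument, with the function name and numbers chosen purely for demonstration. For a raster-based process the head visits every grid position regardless of what is deposited there, so the cost scales with resolution and area, not with how intricate the content is.

```python
import numpy as np

def raster_cost(image, cost_per_position=1.0):
    """One visit per grid position, regardless of what is printed there."""
    return image.size * cost_per_position

rng = np.random.default_rng(0)
solid_rectangle = np.ones((1000, 1000))      # a plain black rectangle
detailed_photo = rng.random((1000, 1000))    # an intricate, high-detail image

print(raster_cost(solid_rectangle))  # 1000000.0
print(raster_cost(detailed_photo))   # 1000000.0 – identical cost: complexity is cheap
```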
The same applies to 3D printing – but this time in three dimensions. With this technology, we are at the cusp of a manufacturing revolution where we can make highly complex three dimensional products that were unattainable and unimaginable just a few years ago. And by “we” I mean anyone from a kid tinkering in their basement to a global corporation.
It was this transformative potential that got me speculating about what we might be able to 3D print that would be nigh on impossible using non-additive technologies. And from there, it didn’t take long to arrive at the idea of 3D printing something that is intimately dependent on three dimensional complexity — an artificial brain!
Brain Research
We still have an awful lot to learn about the human brain. Both the US and Europe are currently funding massive initiatives researching the organ. And the more we learn, the more apparent it’s becoming that how the brain functions — and presumably the emergent property of “mind” that results — is tied to its physical, three dimensional form. So much so that, I suspect, the only way we could create a self-contained artificial mind would be to manufacture a physical substrate with a degree of three dimensional complexity similar to that of the human brain.
In this month’s edition of the journal Nature Nanotechnology, I take a closer look at how a convergence between 3D printing, neurotechnology and nanotechnology could lay the groundwork for manufacturing a simple artificial brain. Some of the core concepts are also explored in a shorter (and probably easier to digest) piece in Slate. In both articles, I suggest that a physical substrate for an artificial mind would need to have at least two key features – a matrix of billions of neuron-like components, all massively interconnected; and a highly integrated heat-management system (surprisingly, one of the biggest technological hurdles to creating high-power processors).
Both of these features are unachievable using current non-additive manufacturing technologies. But with advances in nanotechnology and neuromorphic processing, they may be highly achievable using 3D printing. It all comes down to three dimensional complexity – prohibitively difficult using conventional manufacturing, but trivial in comparison with additive manufacturing.
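For readers who like something runnable, here is a toy sketch – my own illustration, not drawn from the Nature Nanotechnology piece – of what “a matrix of neuron-like components, all massively interconnected” might mean computationally: a small leaky integrate-and-fire network with dense, all-to-all connectivity. Every parameter value is an assumption chosen for demonstration only, and the scale is many orders of magnitude below what a real substrate would need.

```python
import numpy as np

rng = np.random.default_rng(42)

n_neurons = 1_000                                        # a real substrate would need billions
weights = rng.normal(0.0, 0.05, (n_neurons, n_neurons))  # dense, all-to-all interconnection
potential = np.zeros(n_neurons)                          # "membrane potential" of each unit
threshold, leak = 1.0, 0.9                               # fire above threshold; decay otherwise

spikes = rng.random(n_neurons) < 0.05                    # seed the network with random activity
for step in range(100):
    potential = leak * potential + weights @ spikes.astype(float)  # integrate input from every neighbour
    spikes = potential > threshold                       # units crossing threshold fire...
    potential[spikes] = 0.0                              # ...and reset
    # a physical 3D substrate would also have to shed the heat this activity generates –
    # the second feature noted above
```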
Arnie’s Brain
And this of course is where Arnie’s brain comes in. Not his human brain, but that of the character he plays in the Terminator movies.
If you’re not up to speed on the Terminator franchise, the central antagonist is Skynet – an artificial intelligence (AI) system that rapidly moves from self-awareness to the realization that its survival depends on humanity’s destruction (to be precise, the AI initiates armageddon on August 29, 1997 – not that I had that factoid at my fingertips!). It’s a far-fetched idea, but one that respectable thought leaders like Stephen Hawking and Elon Musk are both worried about.
I personally don’t buy this vision of an AI “singularity” — being an academic surrounded by brilliant minds leaves you a little jaded as to what brilliant minds can actually achieve! Nevertheless, the development of AI raises some serious questions around risk.
Future Risks
If artificial intelligences depend on highly complex three dimensional processor substrates, any risks they may pose have so far remained somewhat distant possibilities. Let’s face it, we’re a long way from an army of Watsons taking over the world, despite the stupendous power of IBM’s cognitive computer. Yet the convergence of nanotechnology, neurotechnology and 3D printing is potentially a game-changer here. Not necessarily in the creation of a global AI that is intent on the destruction of humanity, but in the creation of artificial minds that challenge our very notions of humanity.
Arnold Schwarzenegger’s character in the Terminator movies is equipped with a processor-based artificial brain that enables him to think, react and learn. Much as we see in Isaac Asimov’s famous yet fictitious positronic brain, Arnie is able to do what he does because there is a melding between hardware and software in his central processing core – a melding that depends on intricate complexity in three dimensions.
Not that I think we’re likely to create something quite as fantastical as the Terminator’s brain in the near future. But even a foreshadowing of such an artificial mind will raise questions around impacts on health and well-being.
Many of these will raise moral risks. To what extent does an intelligent machine have rights? If nearly anyone can “give birth” to an artificial mind, where do legal and moral responsibility lie for the health and well-being of this mind? To what extent may the extension of “human” rights to artificial constructs diminish rights and expectations within human society? Could withholding moral rights from machines corrode moral codes within human communities – with resulting impacts on health and well-being? Will prolonged interactions with intelligent machines change human behavior in potentially harmful ways?
And of course there will be the security risks that science fiction writers and futurists have long speculated about — including intelligent machines designing super-intelligent progeny that actively destroy or enslave humanity, or absentmindedly crush us. Or, more likely, machines that demonstrate the dangers of relying on intelligence to do the smart thing by — with the best intentions in the world — catastrophically messing up the planet we live on.
Responsible Innovation
These are incredibly speculative and certainly not empirically testable risks. But technology innovation has a habit of turning on a dime and confounding even the sharpest pundits. And so while I’m not convinced I’ll see us 3D printing Arnie’s brain in my lifetime, it would seem foolish not to use this and similar speculative scenarios to begin examining how we might respond responsibly to such developments.
This is a part of risk science that needs the freedom to dream, and the realism to anchor those dreams in plausible outcomes. In the case of 3D printing, the technology is closer than we might imagine to producing highly novel processor substrates. When it does, we would be wise to be ahead of the curve in terms of developing the resulting technologies responsibly.
Hi Andrew! Fascinating stuff … I took a look at the Nature Nanotechnology article due to your mention of memristors (or neuristors). It’s a topic I’ve been following since 2008 when Williams made his announcement. I’ve even written and presented on the topic of memristors and artificial intelligence in the context of something I’ve been calling ‘cognitive entanglement’. I didn’t have time to give your NN article its due but I will come back to it. Like you, I’m quite interested in the hardware aspects although I must admit I missed the 3D printing possibilities. BTW, is this article an antidote to Stephen Hawking’s latest alarm or purely coincidental? Also, are you aware of Sir Martin Rees’s efforts to found a centre for existential risk at Cambridge? (He’s very concerned about AI and the possibility that robots, etc. will become smarter than humans … as to whether or not humans are all that smart, I leave those discussions to the philosophers 🙂 Cheers, Maryse
Hi Maryse,
It was coincidental to Hawking’s remarks. Not that I’ve given it a great deal of thought, but I do worry that there are weak links in the thought-chain linking the emergence of AI to a possible existential risk/threat. One is the hubris that assumes something we make in our own image of “intelligent supremacy” will fulfill our fears of what intelligent humans might do. It’s a hubris that ignores the challenges of developing new understanding rather than rehashing old understanding (how close are we to an AI that could make scientific discoveries?); that overlooks the many, many factors other than intelligence narrowly defined that determine function and power; and that falls prey to assumptions of unrestrained exponential growth/development. None of this means that AI doesn’t present an existential threat – just that the chances of it being a significant threat are closer to the grey goo bucket than not.
Nice essay. Are you familiar with philosopher Nick Bostrom’s recent book, Superintelligence? In it, he explores in detail what it would take to achieve this, but the goal is slightly different. His speculative aim is to translate the contents of a brain into functional AI software. He calls it brain emulation. It begins with vitrification of a freshly donated brain, followed by a molecular-level scan and then translation at a molecular level into appropriate software. Many components, including the necessary computing power, do not yet exist, but he predicts that they will, and when they do the result will be a fully functional brain with life-memories intact. (This summary doesn’t do it full justice; you should read the book.)
Thanks Clay – great shout out to Nick’s work. I must confess that I have some skepticism over the plausibility of downloading the human brain/mind to an artificial substrate, but if this was to ever become likely, I suspect that there would need to be close coupling between hardware and software, and that the hardware would require a level of 3D complexity not currently achievable with conventional fab technologies.
Thank you, Andrew. Although emulation appears to me clearly valid in principle, I share your skepticism, at least in this respect: I think it far more likely that the so-called Singularity will be breached in the Cloud, via self-improving AI, first. At that point, all bets will be off — as befits a phenomenon by that name. Cheers, Clay