Whole Brain Emulation (WBE), or Mind Uploading (sometimes called “mind copying” or “mind transfer”), is the process of copying mental content (including long-term memory and “self”) from a particular brain and transferring it to a computational device, such as a digital, analog, quantum-based, or software-based artificial neural network. The computational device could then run a simulation model of the brain's information processing, such that it responds in essentially the same way as the original brain (i.e., indistinguishable from the brain for all relevant purposes) and experiences having a conscious mind.
Mind uploading may be accomplished by either of two methods: copy-and-transfer or gradual replacement of neurons. In the former, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, then copying, transferring, and storing that information state in a computer system or other computational device. The simulated mind could reside in a computer inside (or connected to) a humanoid robot or a biological body, and could experience everything we experience: emotion, addiction, ambition, consciousness, and suffering.
Among some futurists and within the transhumanist movement, mind uploading is treated as an important proposed life extension technology. Some believe mind uploading, rather than cryonics, is our current best option for preserving who we are. Another aim of mind uploading is to provide a permanent backup of our “mind-file,” and a means for functional copies of human minds to survive a global disaster or interstellar space travel. Whole brain emulation is discussed by some futurists as a “logical endpoint” of the computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. A computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity: a sudden decrease in the time constant of technology's exponential development, i.e., a runaway acceleration of progress. Mind uploading is a central conceptual feature of numerous science fiction novels and films.
Substantial mainstream research in related areas is being conducted in human and animal brain mapping and simulation, the development of faster supercomputers, virtual reality, brain-computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools for mind uploading already exist and are currently under active development. Neuroscientist Randal Koene has formed a nonprofit organization called Carboncopies to promote mind uploading research.
Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, arise from purely physical and electrochemical processes in the brain and are governed by the applicable physical laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
“Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.”
Mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.
A machine intelligence with sufficient general capability would provide the computational substrate necessary for uploading.
However, even though uploading is dependent upon such a general capability, it is distinct from general forms of AI in that it results from dynamic reanimation of information derived from a specific human mind so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information is a form of artificial intelligence, sometimes called an infomorph or “noömorph.”
If the information and processes of the mind can be dissociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby – from a purely mechanistic perspective – reducing or eliminating the “mortality risk” of such information. This general proposal appears to have been first made in the biomedical literature in 1971 by biogerontologist George M. Martin of the University of Washington.
A proposed method for mind uploading is serial sectioning, in which brain tissue and perhaps other parts of the nervous system are frozen, then scanned and analyzed layer by layer; for frozen samples at the nanoscale, this requires a cryo-ultramicrotome. The procedure captures the structure of the neurons and their interconnections: the exposed surface of frozen nerve tissue would be scanned and recorded, and the surface layer of tissue then removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural network recreated in the system into which the mind was being uploaded.
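The final step described above, merging per-layer scans into a single connectivity model, can be illustrated with a toy sketch. Everything here is invented for illustration: real connectomics pipelines reconstruct wiring from electron-microscopy image stacks, not neat per-layer records, and the neuron names and data format below are hypothetical.

```python
# Toy sketch of layer-by-layer connectome reconstruction. Each "scan" of an
# exposed tissue surface yields the neuron cross-sections visible in that
# layer and the synaptic contacts detected there (all data is hypothetical).

def reconstruct_connectome(layers):
    """Merge per-layer scan results into one set of neurons and a tally of
    how many layers each synaptic contact appears in."""
    neurons = set()
    synapses = {}  # (pre, post) -> number of layers showing this contact
    for layer in layers:
        neurons.update(layer["neurons"])
        for pre, post in layer["synapses"]:
            synapses[(pre, post)] = synapses.get((pre, post), 0) + 1
    return neurons, synapses

# Three hypothetical scanned layers of a (vastly simplified) tissue sample.
scan = [
    {"neurons": {"n1", "n2"}, "synapses": [("n1", "n2")]},
    {"neurons": {"n2", "n3"}, "synapses": [("n1", "n2"), ("n2", "n3")]},
    {"neurons": {"n3"},       "synapses": [("n2", "n3")]},
]

neurons, synapses = reconstruct_connectome(scan)
print(sorted(neurons))         # ['n1', 'n2', 'n3']
print(synapses[("n1", "n2")])  # 2 -- the contact was seen in two layers
```

The real engineering challenge lies in the segmentation step this sketch skips entirely: deciding, from raw images, which blobs in adjacent layers belong to the same neuron.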
The ethics of pulling the plug on an AI
A recent article by Anders Sandberg in the Journal of Experimental & Theoretical Artificial Intelligence dives into some of the ethical questions that would (or at least should) arise from successful whole brain emulation. The focus of his paper, he explained, is: “What are we allowed to do to these simulated brains?” If we create a WBE that perfectly models a brain, can it suffer? Should we care?
Would a computer program that perfectly models an animal receive the same consideration an actual animal would? In practice, this might not be an issue. If a software animal brain emulates a worm or insect, for instance, there will be little worry about the software’s legal and moral status. After all, even the strictest laboratory standards today place few restrictions on what researchers do with invertebrates. When wrapping our minds around the ethics of how to treat an AI, the real question is what happens when we program a mammal?
“If you imagine that I am in a lab, I reach into a cage and pinch the tail of a little lab rat, the rat is going to squeal, it is going to run off in pain, and it’s not going to be a very happy rat. And actually, the regulations for animal research take a very stern view of that kind of behavior,” Sandberg says. “Then what if I go into the computer lab, put on virtual reality gloves, and reach into my simulated cage where I have a little rat simulation and pinch its tail? Is this as bad as doing this to a real rat?”
As Sandberg alluded to, there are ethical codes for the treatment of mammals, and animals are protected by laws designed to reduce suffering. Would digital lab animals be protected under the same rules? Well, according to Sandberg, one of the purposes of developing this software is to avoid the many ethical problems with using carbon-based animals.
To get at these issues, Sandberg’s article takes the reader on a tour of how philosophers define animal morality and our relationships with animals as sentient beings. These are not easy ideas to summarize. “Philosophers have been bickering about these issues for decades,” Sandberg says. “I think they will continue to bicker until we upload a philosopher into a computer and ask him how he feels.”
While many people might choose to respond, “Oh, it’s just software,” this seems much too simplistic for Sandberg. “We have no experience with not being flesh and blood, so the fact that we have no experience of software suffering, that might just be that we haven’t had a chance to experience it. Maybe there is something like suffering, or something even worse than suffering software could experience.”
Ultimately, Sandberg argues that it’s better to be safe than sorry. He concludes a cautious approach would be best, that WBEs “should be treated as the corresponding animal system absent countervailing evidence.” When asked what this evidence would look like—that is, software designed to model an animal brain without the consciousness of one—he considered that, too. “A simple case would be when the internal electrical activity did not look like what happens in the real animal. That would suggest the simulation is not close at all. If there is the counterpart of an epileptic seizure, then we might also conclude there is likely no consciousness, but now we are getting closer to something that might be worrisome.”
So the evidence that a software animal's (or a software human's) brain is not conscious looks… exactly like the evidence that a biological brain is not conscious.
Despite his pleas for caution, Sandberg doesn’t advocate eliminating emulation experimentation entirely. He thinks that if we stop and think about it, compassion for digital test animals could arise relatively easily. After all, if we know enough to create a digital brain capable of suffering, we should know enough to bypass its pain centers. “It might be possible to run virtual painkillers which are way better than real painkillers,” he says. “You literally leave out the signals that would correspond to pain. And while I’m not worried about any simulation right now… in a few years I think that is going to change.”
This, of course, assumes that animals’ only source of suffering is pain. In that regard, to worry whether a software animal may suffer in the future probably seems pointless when we accept so much suffering in biological animals today. If you find a rat in your house, you are free to dispose of it how you see fit. We kill animals for food and fashion. Why worry about a software rat?
One answer—beyond basic compassion—is that we’ll need the practice. If we can successfully emulate the brains of other mammals, then emulating a human is inevitable. And the ethics of hosting human-like consciousness becomes much more complicated.
Beyond pain and suffering, Sandberg considers a long list of possible ethical issues in this scenario: a blindingly monotonous environment, damaged or disabled emulations, perpetual hibernation, the tricky subject of copies, communications between beings who think at vastly different speeds (software brains could easily run a million times faster than ours), privacy, and matters of self-ownership and intellectual property.
All of these may be sticky issues, Sandberg predicts, but if we can resolve them, human brain emulations could achieve some remarkable feats. They are ideally suited for extreme tasks like space exploration, where we could potentially beam them through the cosmos. And if it came down to it, the digital versions of ourselves might be the only survivors in a biological die-off.
If software looks like a brain and acts like a brain—will we treat it like one?
What makes us special?
Of course all of this is moot if we never get to this point with the technology. So let’s get back to the questions of if and when.
For many, the idea of successful brain emulation is so strange, so far beyond our normal experience, that it feels best to dismiss it. While Sandberg sees his work as an effort to prevent suffering at a future time, others might ask why he wastes intellectual capital worrying about an event that will never occur. Software cannot experience emotion or become conscious, critics say, or it will at least need more than just a digital copy of the brain to do so. There is *something* else in there that makes us special, right?
Many scientists say no.
“Nearly every honest scientist in the neuroscience community will, upon closer scrutiny, admit that they do believe it is ‘in principle’ possible to emulate a brain’s functions,” says Randal Koene, neuroscientist and neuroengineer. “To claim otherwise would be to claim that there is some kind of non-physical magic at work in living brains, and scientists who tend toward that sort of thinking are exceedingly rare these days.”
Koene is one of a group of people who came up with the term “whole brain emulation.” He can envision a day in which we have software brains as capable and conscious as our current biological ones. And Koene agrees with Sandberg that one day software will have the capability to suffer. “If you have a software implementation of a whole brain emulation, then I’m pretty confident that the mind produced by that software is able to suffer just as you or I could. I think it’s quite possible that other types of software will also be able to suffer, some that are based on animal or artificial intelligence designs.”
Kenneth Hayworth, a neuroscientist and president of the Brain Preservation Foundation, also sees no reason why a digital brain should be somehow less than a real one. And to those who would argue that these digital uploads of a person’s mind are merely copies, Hayworth suggests considering what type of thing a person is to start. “We have discovered through cognitive science and neuroscience that we are like a program; we are like a data file on a computer in the sense that the information that makes us unique is the only thing that is truly us.”
A notable voice of dissent on the prospect of WBEs is Duke Neuroscience Professor Miguel Nicolelis. Nicolelis has made headlines for his lab’s work with brain-machine interfaces and primate neuroprosthetics. In 2013, he was quoted in MIT’s Technology Review as saying, “Downloads will never happen… There are a lot of people selling the idea that you can mimic the brain with a computer… You could have all the computer chips ever in the world and you won’t create a consciousness.”
Even if Nicolelis is rare in his belief that we won’t be able to mimic a brain on a computer, there are more widespread concerns that the prospect of WBE has been oversold. The Human Brain Project, for example, is creating a brain-like system of computer chips, funded by the European Union. That project has come under fire over allegations of mismanagement and a lack of realistic goals. The problems may run deeper still: some scientists say the concept was premature from the start and was sold to government agencies that were ill-equipped to evaluate it.
Let’s pause for a moment to remember that we still do not have a full understanding of our own human consciousness. At exactly what point do humans pass from living to dead? We don’t know. Where does our feeling of consciousness originate? It’s under debate. Why do some people wake up from anesthesia during surgery, and how does the brain orchestrate this awakening? Surgeons wish they knew.
Unlike the other scientists quoted, Alice Parker does not self-identify as a futurist or transhumanist. But the University of Southern California professor is known for her opinions on the feasibility of WBEs, and she leads a project to reverse-engineer the brain using analog computations. Parker is cautious about the idea—“We are a long way away,” she insists—because she believes our understanding of the brain is still so rudimentary. “There are mechanisms that seem to be critically important that we are just discovering… We who are building electronic circuits are following neuroscientists and mimicking whatever we can with the sense that things are emerging every day and you never know when there will be a complete paradigm shift as a result of new information.”
Parker softens these statements by noting that neural networks designed to perform specific functions, such as “recognizing cats in an image,” are coming along nicely. But broader abstract reasoning by a computer, Parker estimates, will take decades. Unlike Nicolelis, she still believes it is possible. “If we fully understood the neural mechanisms, biological mechanisms, it is possible to have cognitive processing that appears to be able to do abstract reasoning,” she says. “I think when we are at that point when there seems to be awareness, then we need to have that discussion about the way these organisms should be treated.”
On the issue of whether consciousness would emerge from a WBE, Parker says that no one really knows, and opinions differ even among her most prestigious colleagues. To her, even Hayworth’s assertion that the brain is like a program feels too light. “I think it’s so much more complicated and it’s so much more multidimensional, and there’s so many more subtleties,” she says. “To say a brain is like a program is too simplified. It’s breathtakingly complicated.”