Two essays about the future of minds, written by people more rigorous and educated than me, both make what I perceive as a mistake — a very strange one for such intelligent people to make. My hypothesis is that I’m missing something. Maybe explaining why I think they’re wrong will lead one of you to point out what I’m missing.
Note: usually “artificial intelligence” is a pretty broad term, but in this case regard it as “conscious intelligence housed in a non-human, non-flesh substrate”.
One of the essays I found puzzling was written by Scott Aaronson, a quantum computing theorist who is a professor at MIT, soon to be a professor at UT Austin instead. He wrote Quantum Computing since Democritus, published by Cambridge University Press.
Most of Aaronson’s relevant post is about quantum physics’ implications for the nature of consciousness, which I thoroughly do not understand. But then there’s an idea within the larger context that seems easy to refute.
Aaronson explains at length that a computer couldn’t fully replicate a brain because there’s no way to fully replicate the initial conditions. This has something to do with quantum states but also makes common sense, if you roll with the quantum states element of the argument.
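(The quantum-mechanical piece is, as far as I can tell, the no-cloning theorem, which says that no physical process can copy an arbitrary unknown quantum state. Formally, there is no operation $U$ satisfying

$$U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle \quad \text{for every state } |\psi\rangle,$$

because quantum mechanics is linear: applied to a superposition $a|\psi\rangle + b|\phi\rangle$, such a $U$ would produce $a|\psi\rangle|\psi\rangle + b|\phi\rangle|\phi\rangle$ rather than two copies of the superposition. If a brain’s exact initial conditions involve unknown quantum states, they can’t be read out and duplicated even in principle.)

He continues: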
“This picture agrees with intuition that murder, for example, entails the destruction of something irreplaceable, unclonable, a unique locus of identity — something that, once it’s gone, can’t be recovered even in principle. By contrast, if there are (say) ten copies of an AI program, deleting five of the copies seems at most like assault, or some sort of misdemeanor offense! And this picture agrees with intuition both that deleting the copies wouldn’t be murder, and that the reason why it wouldn’t be murder is directly related to the AI’s copyability.”
To refute this, let’s conduct a thought experiment. Pretend that you can copy a human brain. There are ten copies of me. They are all individually conscious — perfect replicas that diverge only after the moment of replication. Is it okay to kill five of these copies? No, of course not! Each one is a self-aware, intelligent mind, human in everything but body. The fact that they’re identical doesn’t change that.
Why would this be any different when it comes to an artificial intelligence? I suppose if the AI has no survival drive then terminating it would be okay, but then the question becomes whether the boundary of murder is eliminating a survival drive — in which case stepping on bugs would qualify — or eliminating a consciousness.
Earlier in the essay, Aaronson poses this question:
“Could we teleport you to Mars by ‘faxing’ you: that is, by putting you into a scanner that converts your brain state into pure information, then having a machine on Mars reconstitute the information into a new physical body? Supposing we did that, how should we deal with the ‘original’ copy of you, the one left on earth: should it be painlessly euthanized? Would you agree to try this?”
No, of course I wouldn’t agree to being euthanized after a copy of me was faxed to Mars! That would be functionally the same as writing down what I consist of, killing me, and then reconstructing me. Except wait, not me, because I am not the clone — the clone just happens to be a replica.
My own individual consciousness is gone, and a new one with the same memories and personality is created. The break in continuity of self means that there are actually two selves. They each feel their own pain and joy, and each will have its own fierce desire to survive.
Aaronson goes on:
“There’s a deep question here, namely how much detail is needed before you’ll accept that the entity reconstituted on Mars will be you? Or take the empirical counterpart, which is already an enormous question: how much detail would you need for the reconstituted entity on Mars to behave nearly indistinguishably from you whenever it was presented the same stimuli?”
Commenter BLANDCorporatio expressed much the same point that I want to make:
“My brain is on Earth at the beginning of the process, stays on Earth throughout, and I have no reason to suspect my consciousness is suddenly going to jump or split. I’ll still feel as if I’m on Earth (regardless of whether a more or less similar individual now runs around on Mars). Conversely, if the me on Earth is destroyed in the copying, then I’m gone, however similar the Mars one is.”
So that’s that.
The second instance of this fallacy, which could maybe be called the cloned-consciousness-as-continuous-consciousness fallacy, comes from an essay that Robin Hanson wrote in 1994. (Per Slate Star Codex, “He’s obviously brilliant — a PhD in economics, a masters in physics, work for DARPA, Lockheed, NASA, George Mason, and the Future of Humanity Institute.”) You may be familiar with Hanson as the speculative economist who wrote The Age of Em. His instance of the CCaCC fallacy emerges from a different angle (remember the hyper-specific definition of “artificial intelligence” that I mentioned in the beginning):
“Imagine […] that we learn how to take apart a real brain and to build a total model of that brain — by identifying each unit, its internal state, and the connections between units. […] if we implement this model in some computer, that computer will ‘act’ just like the original brain, responding to given brain inputs with the same sort of outputs. […] Yes, recently backed-up upload soldiers needn’t fear death, and their commanders need only fear the loss of their bodies and brains, not of their experience and skills.”
But… no! By the same argument I used to refute Aaronson, when an “upload” soldier dies, that is still a death. Reverting to a previous copy is not the same as continuing to live.
This seems really simple and obvious to me. So what am I missing?
Hat tip to the reader who recommended that I check out Hanson’s work — I can’t remember which one of you it was, but I appreciate it.
If you’re interested in further discussion, there are thoughtful comments on this page (just scroll down a bit), on Facebook, and on Hacker News. I particularly like what HN user lhankbhl said, because it expresses the problem so succinctly:
You are placed in a box. Moments later, you are told, “We have successfully made a copy of you. We are sending it home now. You must be disposed of.”
Will you allow them to dispose of you?
This is the question being posed, not whether a copy will have no idea if it is the original. The point is that it isn’t relevant if one is a copy. No one was moved, it’s only that a second person now exists and killing either is murder of a unique person.
(Again, uniqueness is not a question of whether these people will think or react to situations in the same way, but rather that there are two different consciousnesses at play.)
One of the commenters below recommended a video that investigates the Star Trek angle: https://www.youtube.com/watch?v=nQHBAdShgYI
abb says:
Here’s an analogy which may help you see why the philosophers aren’t so quick to jump to your conclusion.
Every time you sleep, there is a discontinuity in your consciousness.
When you wake up, you cannot tell if you have merely stopped sleeping, or if you have been perfectly cloned whilst asleep.
Do you therefore fear sleeping? Why is sleeping less distressing than being “faxed”?
June 21, 2016 — 10:17 pm
Sonya Mann says:
This is a really good question. I guess the reason why it doesn’t bother me is that I know I’m the same entity — here I am in the same body. But then you get into all kinds of second-order questions, like how do I know I’m in the same body? I don’t have an answer…
June 23, 2016 — 1:17 pm
M.C. Escherichia says:
In the same way that the faxed copy of you on Mars is not “you”, so too the version of you that will exist in a week is not “you” either. There is no immaterial soul floating about, there’s just a person with certain memories and psychology. Nothing else makes a future person “you”.
Making a copy and deleting the original after a second is just equivalent to forgetting a second’s worth of memories.
June 22, 2016 — 12:21 pm
Sonya Mann says:
I see what you’re saying, but I disagree. Each copy has the same drive to survive — why would either be okay with being eliminated?
June 23, 2016 — 1:18 pm
Jon W says:
You seem to equate the physical manifestation/carrier with the actual consciousness. You’re saying “of course we can’t kill the clone that seems like a human,” but there is no “of course” about it at the consciousness level — only at the current societal moral level.
Presumably consciousness cloning would lead to modifications of the “rules” of morals and ethics.
Regarding “faxing to Mars” — at the quantum level, “reading” *is destructive*, and thus the killing of the source seems an unavoidable side effect. Perhaps this is actually what makes consciousness “special”?
So, finally, if non-destructive faxing were possible, a system could be built where you sedate before faxing and, on successful receipt, terminate the sedated source. But I don’t think we’ll ever (need to) get there for brains. The question (unanswered) is whether consciousness could be created without this quantum read-is-destructive limitation.
June 22, 2016 — 12:36 pm
Sonya Mann says:
If you think of an answer or come across one, please send it to me.
June 23, 2016 — 1:19 pm
Patrick Stevens says:
It looks like you’re missing that this happens to humans *all* the time. We go to sleep each night. We wake up in the morning: some time has passed, our state has changed in our absence, and we may even be in a completely different place if we fell asleep on the train. We are still definitely us, and no-one disagrees with this.
I think I’d personally be fine with teleportation, for this reason: it’s basically an expedited sleeper train.
June 22, 2016 — 12:49 pm
Sonya Mann says:
It still seems very different to me — I’m still an individual entity, even if some elements have changed.
June 23, 2016 — 1:21 pm
Ralf Maximus says:
Richard Morgan’s SF book _Altered Carbon_ (and others in the series) explores these issues in depth. Very much worth a read.
June 22, 2016 — 12:59 pm
Sonya Mann says:
I actually ordered that from Amazon recently! Thanks for the recommendation 🙂
June 23, 2016 — 1:21 pm
Nightwhistler says:
Did you ever read “Down and Out in the Magic Kingdom” by Cory Doctorow? It has some interesting views on the subject of people using backups of themselves.
What is proposed there (though not quite explicitly) is that yes, the original dies… but the restored copy feels like it’s the continuation of the original. Do this a couple of times and “you” (the Xth generation copy of you) will start to feel immortal, even though it’s not really true.
June 22, 2016 — 1:03 pm
Sonya Mann says:
Sounds like an interesting read. The idea of not caring about your own discontinuation is so foreign to me…
June 23, 2016 — 1:25 pm
Andrew Wooldridge says:
You might find this YouTube video about transporters relevant: https://www.youtube.com/watch?v=nQHBAdShgYI
June 22, 2016 — 1:28 pm
Sonya Mann says:
Oooh, thank you!
June 23, 2016 — 1:27 pm
Anthony says:
In the last example you are missing that some people’s concept of “me” extends to other copies, and they would willingly give up some sense of self per copy. Yes, both copies would fight fiercely to stay alive… but it is possible that they would be willing to die much more quickly if they knew the other copy survived (the soldier example). There is a great sci-fi book called Altered Carbon that you should read; it gives a perfect example of this concept, and of the fact that not all of humanity would agree with it.
June 22, 2016 — 1:47 pm
Sonya Mann says:
I agree, I think this is the key. I just don’t share that assumption, so the reasoning that stems from it seems really weird to me.
June 23, 2016 — 1:27 pm
Max S. Feinberg says:
Along with your argument, I would say that consciousness is an illusion well perpetuated by the brain. In other words, what if the brain is like Linux? Many containers or processes living together to create a consciousness/mind? In that scope, the memory daemon and the self daemon, together, are the identity, the one who is writing this and believing he IS. So by copying me, you are creating a new consciousness that will be different from the moment of creation (new experiences). But well, maybe there is something else that I’m missing too.
June 22, 2016 — 2:11 pm
Sonya Mann says:
That is a fascinating analogy, and it sort of raises the question: how many levels of consciousness are there?
June 23, 2016 — 1:28 pm
mako says:
If a nonpatternist says they really consider the distinction of place and time that separates the copy from the original to be important to them, what can we do? Point at some essential reduction of human aspirations (which may or may not exist) and tell them that they don’t want what they think they want?
>each will have its own fierce desire to survive
I think this might be your mistake. There are a lot of things adult and adolescent humans are willing to give their lives for. The creation of a perfect clone in another location might as well be one of them.
I consider the upload to be one class of discontinuity. Sleep is another. If you accept that, it becomes pretty clear that humans are not afraid of discontinuity. So yeah, maybe we really can point at the reduction of human desires and say “No, you’re wrong about what you want, you want to upload, you’re just confused.”
June 22, 2016 — 2:22 pm
Sonya Mann says:
I’m perfectly happy to let anyone who wants to teleport / upload / whatever go ahead and do it.
June 23, 2016 — 1:29 pm
CapTVK says:
This “what is existence?” discussion isn’t new at all; it has been explored and discussed in philosophy countless times. Aaronson approaches it from the viewpoint of a quantum scientist, and Hanson does something similar but goes all out on the economics of what would happen if we could make digital copies, or ‘EMulated humans’, EMs for short*.
There’s an old but funny cartoon about this philosophical question: “To Be” (by John Weldon), with a scientist demonstrating his newest invention: a teleporter.
Back in the day it was intended for kids, but these days it’s apparently still used in philosophy classes.
https://www.youtube.com/watch?v=pdxucpPq6Lc
*With ‘EM’ being a short half-palindrome for ‘ME’. Not sure if this was intended by Hanson or not, but now I tend to read the title “The Age of Em” as “The Age of Me”.
June 22, 2016 — 2:44 pm
Luis says:
Hey, congratulations on expressing the same kind of thoughts that I have when I see this topic.
What would it take for a person to feel that he is migrating to another substrate, without losing consciousness?
But then again, every night when we sleep we lose consciousness, regain it when we dream, lose it again, and regain it when we wake up.
This is a very hard topic.
June 22, 2016 — 4:11 pm
Consciousness 6580 (and counting) says:
“The break in continuity of self means that actually there are two selves.” We actually lose consciousness every night. There’s no special unbroken thread from birth to death. Consciousness is not a binary either. It can be slowly lost or gained.
June 22, 2016 — 6:42 pm
Pomax says:
Could I perhaps challenge you to explain why you think an exact copy, atom-for-atom, electric pulse-by-pulse, still permits you to somehow identify “the original” vs. “the copy”? Because linguistics aside (phrasing skews our interpretation; using words like “we made a copy” biases your assumption based on what you expect the word “copy” to mean), if there is no discernible difference between two instances, then the very idea that they differ doesn’t hold. Both instances “are you”. Consider that claiming that one is the real you, and another is a copy, is itself falling prey to the “too alien to accept” fallacy: this doesn’t happen in the real world (on the macroscopic level – it happens all the time at the quantum level), so most people will have never had to think this through, but in this case you kind of have to. If there really is no difference, then a copy isn’t just a copy: at the moment of copying we have one thing, in multiple locations, of which both instances then immediately start to diverge. And neither instance is technically “the same” as the one that existed prior to copying.
June 22, 2016 — 6:50 pm
Sonya Mann says:
Yeah, I agree. Phrases like “the original” and “the copy” would perhaps be better replaced with self1 and self2, or self-x and self-y.
June 23, 2016 — 1:31 pm
DevMac says:
There’s a book called The Resurrected Man by Sean Williams in which teleportation is core to the story. It deals with this issue in a couple of ways more nuanced than I can explain in a comment. There is a group of people opposed to teleportation and they refer to the technology as a “murdering twinmaker”.
Worth the read.
I’ve spent many hours trying to sleep whilst considering the death that may await in the alcove of a teleporter.
June 22, 2016 — 7:24 pm
Sonya Mann says:
Thanks for the recommendation! And whoa, poetic last line there.
June 23, 2016 — 1:31 pm
Jose says:
If I create a program that can play chess (using a depth-first search algorithm), the program clearly shows some degree of intelligence, even though it consists of just a few thousand lines of code.
Can deleting the ‘chess playing’ program, or a copy of it, be considered murder?
Similarly, it is not murder to delete a copy, or all copies, of a more complicated AI program which, for example, recognizes images from pictures (and is programmed using ‘neural networks’).
An even more complicated AI program built with ‘deep neural networks’ is still just a program consisting of a few million lines of code.
In the future, an even more advanced programming technique will allow programmers to create a program that exhibits even more ‘human-like qualities’. Will it be considered murder to delete that program?
I think the author tries to establish an analogy: if humanity could replicate a human brain (therefore creating an exact copy of the original brain), and we indeed consider that killing the original person is murder, then we should reconsider our initial thought that deleting a copy of a computer program that exhibits some degree of intelligence is not murder.
On the other hand, from a different point of view, a person could also be considered to be a very advanced computer made of flesh (which will be exactly the case if one day humans can reconstitute a brain state into a physical body – fabricated using some kind of 3D printer, for example).
Very interesting article. Thanks for sharing.
June 22, 2016 — 11:26 pm
Sonya Mann says:
I don’t know where the line between sentience and non-sentience lies… maybe solving that will be key to everything else. (If we can ever solve it.)
June 23, 2016 — 1:32 pm
oz modic says:
I don’t think you’re missing anything… I think a lot of people have a certain innate sense of what the future will hold for us as conscious individuals, even though it seems to directly conflict with what we know about how perception and consciousness actually do work… Indeed, making a clone of a consciousness does not preserve the original consciousness, and maybe some day technology will catch up to the point where we can determine where the seat of consciousness truly lies, but until that point… it seems that we are unfortunately stuck with the current meat sack that our consciousness was given the chance to adhere to.
I want to believe that maybe some day we can upload consciousness and actually transfer it between different points, while still retaining the original consciousness, but I don’t know how that would be possible, because I don’t know exactly what consciousness is, or from whence it derives. I just know that it is. Great article, really gave me a lot to think about… I’m glad I’m not the only one. I’ve been thinking about things like this since I was like 5 or 6 years old, when I remember watching this cartoon called O, Canada that had a bit about a guy who created a teleportation machine, but it never could quite answer if what was teleported was merely a copy, and the original was, for all intents and purposes, dead and gone, no longer a conscious entity.
Ugh. Too early to be thinking like this. But this will have my day going and being a fun one, I’m sure, so… Thanks for sharing your thoughts on this matter.
June 23, 2016 — 6:01 am
Sonya Mann says:
My pleasure, thank *you* for sharing your thoughts as well!
June 23, 2016 — 1:32 pm
Paul M. Parks says:
You’re not alone in this. Even when I watched Star Trek as a kid, I wondered the same thing when Captain Kirk would beam somewhere via the transporter. As I understood it, it wasn’t really Captain Kirk that reappeared on a planet’s surface, but a copy. The transporter on the ship disassembled all of his atoms and reassembled them somewhere else. That sounds pretty destructive.
The inhabitants down on the planet, though, didn’t perceive that they were dealing with a new copy of Captain Kirk. Indeed, the new copy of Captain Kirk didn’t perceive that he was a new copy, since his knowledge and memories picked up right where the poor, unfortunate original had left off.
There were some interesting plot points raised by the atomic decomposition employed by the transporter. Occasionally, it was used to cure diseases when the transportees were reconstituted. Now, imagine that we had such a technology available today. If you were told that you had terminal cancer, but that the cancer could be cured by tearing you apart to the sub-atomic level, removing the cancerous cells, and putting together a new copy of you that was cancer-free, would you opt for this “therapy”? I know that I would, since my current body would be doomed anyway, and my family and friends would perceive the copy as the new, healthy version of me.
June 23, 2016 — 8:21 am
Sonya Mann says:
Agreed, I would choose the same as you in that cancer scenario… but I would feel “some type of way” about it.
June 23, 2016 — 1:33 pm