David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy
David Chalmers, Professor of Philosophy and Neural Science at NYU, joins us to discuss his newest book Reality+: Virtual Worlds and the Problems of Philosophy.
- Virtual reality as genuine reality
- Why you can live a good life in VR
- Why we can never know whether we're in a simulation
- Consciousness in virtual realities
- The ethics of simulated beings
Watch the video version of this episode here
Check out David's book and website here
0:00 Intro
2:43 How this book fits into David's philosophical journey
9:40 David's favorite part(s) of the book
12:04 What is the thesis of the book?
14:00 The core areas of philosophy and how they fit into Reality+
16:48 Techno-philosophy
19:38 What is "virtual reality?"
21:06 Why is virtual reality "genuine reality?"
25:27 What is the dust theory and what does it have to do with the simulation hypothesis?
29:59 How does the dust theory fit in with arguing for virtual reality as genuine reality?
34:45 Exploring criteria for what it means for something to be real
42:38 What is the common sense view of what is real?
46:19 Is your book intended to address common sense intuitions about virtual reality?
48:51 Nozick's experience machine and how questions of value fit in
54:20 Technological implementations of virtual reality
58:40 How does consciousness fit into all of this?
1:00:18 Substrate independence and if classical computers can be conscious
1:02:35 How do problems of identity fit into virtual reality?
1:04:54 How would David upload himself?
1:08:00 How does the mind body problem fit into Reality+?
1:11:40 Is consciousness the foundation of value?
1:14:23 Does your moral theory affect whether you can live a good life in a virtual reality?
1:17:20 What does a good life in virtual reality look like?
1:19:08 David's favorite VR experiences
1:20:42 What is the moral status of simulated people?
1:22:38 Will there be unconscious simulated people with moral patiency?
1:24:41 Why we can never know we're not in a simulation
1:27:56 David's credences for whether we live in a simulation
1:30:29 Digital physics and what it says about the simulation hypothesis
1:35:21 Imperfect realism and how David sees the world after writing Reality+
1:37:51 David's thoughts on God
1:39:42 Moral realism or anti-realism?
1:40:55 Where to follow David and find Reality+
Transcript
Lucas Perry: Welcome to the Future of Life Institute Podcast. I’m Lucas Perry. Today’s episode is with David Chalmers and explores his brand new book Reality+: Virtual Worlds and the Problems of Philosophy. For those not familiar with David, he is a philosopher and cognitive scientist who specializes in the philosophy of mind and language. He is a Professor of Philosophy and Neural Science at New York University, and is the co-director of NYU’s Center for Mind, Brain and Consciousness. Professor Chalmers is widely known for his formulation of the “hard problem of consciousness,” which asks: “Why is a physical state, like the state of your brain, conscious rather than nonconscious?”
Before we jump into the interview, we have some important and bitter-sweet changes to this podcast to announce. After a lot of consideration, I will be moving on from my role as Host of the FLI Podcast, and this means two things. The first is that FLI is hiring for a new host for the podcast. As host, you would be responsible for the guest selection, interviews, production, and publication of the FLI Podcast. If you’re interested in applying for this position, keep your eye on the Careers tab on the futureoflife.org website for more information.
The second item is that even though I will no longer be the host of the FLI Podcast, I won’t be disappearing from the podcasting space. I’m starting a brand new podcast focused on exploring questions around wisdom, philosophy, science, and technology, where you’ll see some of the same themes we explore here like existential risk and AI alignment. I’ll have more details about my new podcast soon. If you’d like to stay up to date, you can follow me on Twitter at LucasFMPerry, link in the description. This isn’t my final time on the FLI Podcast, I’ve got three more episodes including a special farewell episode, so there’s still more to come!
And with that, I’m very happy to introduce David Chalmers on Reality+.
Welcome to the podcast David, it's a really big pleasure to have you here. I've been looking forward to this. We both love philosophy so I think this will be a lot of fun. And we're here today to discuss your newest book, Reality+. How would you see this as fitting in with the longer term project of your career and philosophy?
David Chalmers: Oh boy, this book is all about reality. I think of philosophy as being, to a very large extent, about the mind, about the world, and about the relationship between the mind and the world. In a lot of my earlier work, I've focused on the mind. I was drawn into philosophy by the problem of consciousness, understanding how a physical system could be conscious, trying to understand consciousness in scientific and philosophical terms.
But there are a lot of other issues in philosophy too. And as my career has gone on, I guess I've grown more and more interested in the world side of the equation, the nature of reality, the nature of the world, such that the mind can know it. So I wrote a fairly technical book back in 2012 called Constructing the World. That was all about the simplest vocabulary you can use to describe reality.
But one thing that was really distinctive about this book was thinking about it in terms of technology. In philosophy, it's often interesting and cool to take an old philosophical issue and give it a technological twist. Maybe this is clearest in the case of thinking about the mind through the lens of AI: are artificial minds possible? That's a big question for anybody. If they are, maybe that tells us something interesting about the human mind. If artificial minds are possible, then maybe the human mind is in relevant ways analogous, for example, to an artificial intelligence.
Then, well, the same kind of question comes up for thinking about reality and the world. Are artificial worlds possible? Normally we think about, okay, ordinary physical reality and the mind's relation to that, but with technology, there's now a lot of impetus to think about artificial realities, realities that we construct, and the crucial case there is virtual realities, computational based realities, virtual worlds even of the kind we might construct say with video games or full scale virtual realities, full scale universe simulations. And then a bunch of analogous questions come up, are artificial realities genuine realities?
And just as in the artificial mind case I want to say artificial minds are genuine minds, likewise in the artificial world case I want to say, yeah, virtual realities are genuine realities. And that's in fact the central slogan of this new book Reality+, which is very much trying to look at some of these philosophical issues about reality through the lens of technology and virtual realities, as well as trying to get some philosophical insight into this virtual reality technology in its own right by thinking about it philosophically. This is the process I call techno-philosophy, using technology to shed light on philosophy and using philosophy to shed light on technology.
Lucas Perry: So you mentioned... Of course you're widely known as a philosopher of consciousness and it's been a lot of what you focused on throughout your career. You also described this transition from being interested in consciousness to being interested in the world increasingly over your career. Is that fair to say?
David Chalmers: Yeah. You can't be interested in one of these things without being interested in the others. So I've always been very interested in reality. And even in my first book on consciousness, there was speculation about the nature of reality. I talked about the it from bit hypothesis there. Maybe reality is made of information. I talked about quantum mechanics and potential connections to consciousness. So yeah, you can't think about, say, the mind body problem without thinking about bodies as well as minds; you have to think about physical reality.
There's one particular distinctive question about the nature of reality, namely, how much can we know about it? Can we know anything about the external world? That's a very traditional problem in philosophy. It goes back to Descartes saying, how do you know you're not dreaming right now? Or how do you know you're not being fooled by an evil demon who's producing sensations as of an external world when none of this is real? And for a long time, I thought I just didn't have that much to say about this very big question in philosophy.
I think of the problem of consciousness, the mind body problem. That's a really big question in the history of philosophy. But to be honest, I'm going to say it's probably not number one. Number one, at least in the Western philosophical tradition, is how do we know anything about the external world? And for a long time, I thought I didn't have anything to say about that. And at a certain point, partly through thinking about, yeah, virtual realities and the simulation hypothesis, I thought, yeah, maybe there is something new to say here via this idea that virtual realities are genuine realities. Maybe these hypotheses that Descartes put forward, saying, "If this is the case, then none of this is real." Maybe Descartes was actually thinking about these hypotheses wrongly.
And I actually got drawn into this. Around the same time, just totally fortuitously, I got invited to write an article for The Matrix website. Their production company, Red Pill; it was a philosopher called Chris Grau who worked for them. And I guess the Wachowskis were super interested in philosophy. They wanted to see what philosophers thought of philosophical issues coming from the movie. So I ended up writing an article called The Matrix as Metaphysics, putting forward this rough point of view, roughly in the context of the movie: even in the movie, they say, well, if we're in the Matrix, none of what we're experiencing is real. All this is illusion or a fiction.
I tried to argue, even if you're in the Matrix, these things around you are still perfectly real. There are still trees, there are still cats, there are still chairs. There are still planets. It's just that they're ultimately digital, but they're still perfectly real. And I tried to use that way of thinking about the Matrix to provide a response to the version of Descartes who says, "We can never know anything about the external world, because we can't rule out that none of this is real."
All those scenarios Descartes had in mind, I think in some sense they are actually scenarios where things are real. Maybe that makes reality a bit more like virtual reality, but that vision of reality actually puts knowledge of the external world more within our grip. And from there, there's a clean path from writing that article 20 years ago to writing this book now, which takes this idea of virtual reality as genuine reality and tries to draw it out in all kinds of directions, to argue for it, to connect it to present day technology, to connect it to a bunch of issues in philosophy and science. Because once you start thinking this way about reality, at least I've found, it changes everything. It changes all kinds of things about your vision of the world.
Lucas Perry: So I think that gives a really good taste of what is to come in this interview and also what's in your book. Before we dive more into those specifics, I'm also just curious what your favorite part of the book is. If there's some section or maybe there isn't that you're most excited to talk about, what would that be?
David Chalmers: Oh, I don't know. I was going to say my favorite parts of the book are the illustrations, amazing illustrations by Tim Peacock, a great illustrator I found out about and asked if he'd be able to do illustrations for the book. And he took so many of these scenarios, philosophical thought experiments, science fiction scenarios, and came up with wonderful illustrations to go along with them. So we've got Plato's Cave, but updated for the 21st century, with people in virtual reality inside Plato's Cave and Mark Zuckerberg running the cave, or we have an ancient Indian thought experiment about Narada and Vishnu updated in the light of Rick and Morty. We've got a teenage girl hacker creating a simulated universe in the next universe up.
So these illustrations are wonderful, but I guess that doesn't quite answer your question, which parts do I especially want to talk about? I think of the book as having roughly two halves. Half of it is broadly about the simulation hypothesis. The idea that the universe is a simulation and trying to use that idea to shed light on all kinds of philosophical problems. And the other half is more about real virtual reality, the coming actual virtual reality technology that we have and will develop in the next say 50 to 100 years and trying to make sense of that and the issues it brings up.
So in the first part of the book, I talk about very abstract issues about knowledge and reality and the simulation hypothesis. The second part of the book gets a bit more down to earth and even comes to issues about ethics, about value, about political philosophy. How should we set up a virtual world? That was more of a departure for me to be thinking about some of those more practical and political issues, but over time I've come to find they're fascinating to think about.
So I guess I'm actually equally fascinated by both sets of issues. But lately I've been thinking especially about the second class of issues, because all the corporations now are playing up the metaverse and coming virtual reality technology. That's been really interesting to think about.
Lucas Perry: So given these two halves in general and also the way that the book is structured, what would you say are your central claims in this book? What is the thesis of the book?
David Chalmers: Yeah, the thesis of the book that I lay out in the introduction is virtual reality is genuine reality. It's not a second class reality. It's not fake or fictional. Virtual reality is real. And that breaks down into a number of sub-theses. One of them is about the existence of objects, and it's a thesis in metaphysics. It says the objects in virtual reality are real objects; a virtual tree is a real object. It may be a digital object, but it's real all the same. It has causal powers. It can affect us. It's out there independently of us. It needn't be an illusion.
So yeah, virtual objects are real objects. What happens in virtual reality really happens. And that's one kind of thesis. Another thesis is about value or meaning. That you can lead a valuable life, you can lead a meaningful life inside a virtual world. Some people have thought that virtual worlds can only ever be escapist or fictions or not quite the real thing. I argue that you can lead a perfectly meaningful life.
And the third kind of thesis is tied more closely to the simulation hypothesis idea. And there I don't argue that we are in fact in a computer simulation, but I do argue that we can never know that we're not in a simulation. There's no way to exclude the possibility that we're in a simulation. So that's a hypothesis to take very seriously. And then I use that hypothesis to flesh out a number of different... Just say we are in a simulation, then, yeah, what would this mean for, say, our knowledge of the world? What would this mean for the reality of God? What would this mean for the underlying nature of the metaphysics underneath physics and so on? And I try to use that to put forward a number of sub-theses in each of these domains.
Lucas Perry: So these claims also seem to line up with really core questions in philosophy, particularly having to do with knowledge, reality and value. So could you explain a little bit what are some of the core areas of philosophy and how they line up with your exploration of this issue through this book?
David Chalmers: Yeah, traditionally philosophy is at least sometimes divided up into three areas, metaphysics, epistemology and the theory of value. Metaphysics is basically questions about reality. Epistemology is basically questions about knowledge and value theory is questions about value, about good versus bad and better versus worse. And in the book, I divide up these questions about virtual worlds into three big questions in each of these areas, which I call the knowledge question, the reality question and the value question.
The knowledge question is, can we know whether we're in a virtual world? In particular, can we ever be sure that we're not in a virtual world? And there I argue for an answer of no, we can never know for sure that we're not in a virtual world; we can never exclude that possibility. But then there's the reality question, which is roughly, if we are in a virtual world, is the world around us real? Are these objects real? Are virtual realities genuine realities, or are they somehow illusions or fictions? And there I argue for the answer, yes, virtual worlds are real. Entities and events in virtual worlds are perfectly real entities and events. Even if we're in a simulation, the objects around us are still real. So that's a thesis in metaphysics.
Then there's the question in value theory, which is roughly, can you lead a good life in a virtual world? And there as I suggested before I want to argue, yes, you can lead a good and meaningful life in a virtual world. So yeah, the three big questions behind the book, each correspond then to a big question, a big area of philosophy. I would like to think they actually illuminate not just questions about virtual worlds, but big questions in those areas more generally. The big question of knowledge is, can we know anything about the external world?
The big question of reality is, what is the nature of reality? The big question about value is, what is it to lead a good life? Those are big traditional philosophical questions. I think thinking about each of those three questions through the lens of virtual reality and trying to answer the more specific questions about what is the status of knowledge, reality and value in a virtual world, that can actually shed light on those big questions of philosophy more broadly.
So what I try to do in the book is often start with the case of the virtual world, give a philosophical analysis of that, and then try to draw out morals about the big traditional philosophical question more broadly.
Lucas Perry: Sure. And this seems like it's something you bring up as a techno-philosophy in the book where philosophy is used to inform the use of technology and then technology is used to inform philosophy. So there's this mutual beneficial exchange through techno-philosophy.
David Chalmers: Yeah. Techno-philosophy is this two-way interaction between philosophy and technology. So what I've just been talking about now, using virtual reality technology and virtual worlds to shed light on big traditional philosophical questions, that's the direction in which technology sheds light on philosophy, or at least thinking philosophically about technology can shed light on big traditional questions in philosophy that weren't cast in terms of technology. Can we know we're not in a simulation? That sheds light on what we can know about the world. Can we lead a good life in a virtual world? That sheds some light on what it is to lead a good life and so on.
So yeah, this is the half of techno-philosophy where thinking about technology sheds light on philosophy. The other half is using philosophy to shed light on technology, thinking philosophically about virtual reality technology, simulation technology, augmented reality technology and so on. And that's something I really try to do in the book as well. And these two processes of course complement each other. You think philosophically about technology, and it sheds some light on the technology, but then it turns out actually to have some impact on the broader issues of philosophy at the same time.
Lucas Perry: Sure. So what's coming up for me is Plato's Cave Allegory is actually a form of techno-philosophy potentially, where the candle is a kind of technology that's being used to cast shadows to inform how Plato's examining the world.
David Chalmers: That's interesting. Yeah. I hadn't thought about that. But I suppose back around Plato's time, people did a whole lot with candles and fire. These were very major technologies of the time. And maybe at a certain point people started developing puppet technology and started doing puppet style shows that were a form of, I don't know, entertainment technology for them. And then for Plato then to be thinking about the cave in this way, yeah, it is a bit of a technological setup and Plato is using this new technology to make claims about reality.
Plato also wrote about other technologies. He wrote about writing, the invention of writing, and he was quite down on it. He thought, or at least his spokesman Socrates said, "In the old days people would remember all the old tales, they'd carry them around in their head and tell them person to person, and now that you can write them down, no one has to remember them anymore." And he thought this was somehow a step back, in the way in which some people these days think that putting all this stuff on your smartphone might be a step back. But yeah, Plato was very sensitive to the technologies of the time.
Lucas Perry: So let's make a beeline for your central claims in this book. And just before we do that, I have a simple question here for you. Maybe it's not so simple but... So what is virtual reality?
David Chalmers: Yeah, the way I define it in the book, I make a distinction between a virtual world and virtual reality, where roughly virtual reality technology is immersive. It's the kind of thing you experience say with an Oculus Quest headset that you put onto your head, and you experience a three dimensional space all around you. Whereas a virtual world needn't be immersive. When you play a video game, when you're playing World of Warcraft or you're in Fortnite, typically you're doing this on a two dimensional screen. It's not fully immersive, but there's still a computer generated world.
So my definitions are a virtual world is an interactive computer generated world. It has to be interactive. If it's just a movie, then that's not yet a virtual world, but if you can perform actions within the world and so on and it's computer generated, that's a virtual world. A virtual reality is an immersive interactive computer generated world. Then the extra condition, this has to be experienced in 3D with you at the center of it, typically these days experienced with a VR headset and that's virtual reality. So yeah, virtual reality is immersive interactive computer generated reality.
Lucas Perry: So one of the central claims that you mentioned earlier was that virtual reality is genuine reality. So could you begin explaining why is it that you believe the virtual reality is genuine reality?
David Chalmers: Yeah. Because a lot of this depends on what you mean by real and by genuine reality. And one thing I do in the book is try to break out a number of different meanings of real: what is it for something to be real? One is that it has some causal power, that it can make a difference in the world. One is that it's out there independent of our minds, it's not just all in the mind. And one, maybe the most important, is that it's not an illusion, that things are roughly as they seem to be. And I try to argue that if we're in VR, the objects we see have all of these properties. Basically the idea is: when you're in virtual reality you're interacting with digital objects, objects that exist as data structures on computers, actual concrete processes up and running on a computer. We're interacting with concrete data structures realized in circuitry on these computers.
And those digital objects have real causal powers. They make things happen. When two objects interact in VR, the two corresponding data structures on a computer are genuinely interacting with each other. When a virtual object appears a certain way to us, that data structure is at the beginning of a causal chain that affects our conscious experience in much the same way that a physical object might be at the start of a causal chain affecting our experience.
And most importantly, I want to argue that, just say, let's take the extreme case of... I find it useful to start with the extreme case of the simulation hypothesis, where all of this is a simulation. I want to say in that case when I have an experience of say a tree in front of me or here's a desk and a chair, I'm going to say none of that is illusory. There's no illusion there. You're interacting with digital object. It's a digital table or a digital chair, but it's still perfectly real.
And the way that I end up arguing for this in the book is to argue that the simulation hypothesis should be seen as equivalent to a kind of hypothesis which has become familiar in physics, the version of the so-called it from bit hypothesis. The it from bit hypothesis says roughly that physical reality is grounded in a level of interaction of bits or some computational process. The paradigm illustration here would be Conway's Game of Life where you have a cellular automaton with cells that could be on or off and simple rules governing their interaction.
And various people have speculated that the laws of physics could be grounded in some kind of algorithmic process, perhaps analogous to Conway's Game of Life. People call this digital physics. It's not especially widely believed among physicists, but there are some people who take it seriously. And at least it's a coherent hypothesis that, yeah, there's a level of bits underneath physical objects in reality. And importantly, if the it from bit hypothesis is true, this is not a hypothesis where nothing is real. It's a world where there still are chairs and tables. There still are atoms and quarks. It's just that they're made of bits. There's a level underneath the quarks, the level of bits, but things are perfectly real.
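(Editor's aside: Conway's Game of Life, the cellular automaton Chalmers mentions here, can be sketched in a few lines of Python. This is an illustrative sketch, not anything from the book: cells are on or off, and a simple local rule applied everywhere generates the world's next state.)

```python
# Illustrative sketch of Conway's Game of Life: a grid of cells,
# each on (1) or off (0), updated by simple local rules.
def step(grid):
    """Advance the grid one generation (edges wrap around)."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count the live cells among the 8 surrounding cells.
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            # A live cell survives with 2 or 3 live neighbors;
            # a dead cell comes alive with exactly 3.
            if grid[r][c]:
                new[r][c] = 1 if n in (2, 3) else 0
            else:
                new[r][c] = 1 if n == 3 else 0
    return new

# A "blinker": three live cells in a row that oscillate with
# period 2, so step(step(blinker)) == blinker.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

The point of the example is the one Chalmers is making: everything that "happens" in this little world is fixed by simple rules over bits, yet the patterns that emerge (blinkers, gliders) have stable, genuinely causal structure.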
So in the book I try to argue that actually the simulation hypothesis is equivalent to this it from bit hypothesis. It's basically, if we're in a simulation, yeah, there are still tables and chairs, atoms and quarks. There's just a level of bits underneath that. All this is realized maybe by a computer process involving the interaction of bits and maybe there's something underneath that in turn that leads to what I call the it from it hypothesis. Maybe if we're in a simulation, there's a number of levels like this.
But yeah, the key then is the argument that these two hypotheses are equivalent, which is a case I try to make in chapter nine of the book. The argument itself is complex, but there's a nice illustration to illustrate it. On one hand, we've got a traditional God creating the universe by creating some bits, by, yeah, "Let there be bits," God says and lays out the bits and gets them interacting. And then we get tables and chairs out of that. And in the other world we have a hacker who does the same thing except via a computer. Let there be bits arranged on the computer, and we get virtual tables and chairs out of that. I want to argue that the God creation scenario and the hacker simulation scenario basically are isomorphic.
Lucas Perry: Okay. I'm being overwhelmed here with all the different ways that we could take this. So one way to come at this is from the metaphysics of it, where we look at different cosmological understandings. You talk in your book about there being, what is it called? The dust theory? There may be some kind of dust which can implement any number of arbitrary algorithms, and then potentially above that there are bits, and then ordinary reality as we perceive it is structured and layered on top of that. Looking at reality in this way gives a computationalist view of metaphysics and so also the world, which then informs how we can think about virtual reality and in particular the simulation hypothesis. So could you introduce the dust theory and how that's related to the it from bit argument?
David Chalmers: Yeah. The dust theory is an idea that was put forward by the Australian science fiction writer, Greg Egan in his book, Permutation City, which came out in the mid 90s, and is a wonderful science fiction novel about computer simulations. The dust theory is in certain respects even more extreme than my view. I want to say that as long as you have the right computation and the right causal structure between entities in reality, then you'll get genuine reality. And I argue that can be present in a physical reality, that can be present in a virtual reality. Egan goes a little bit more extreme than me. He says, "You don't even need this causal structure. All you need is unstructured dust."
We call it dust. It's basically a bunch of entities that have no spatial properties, no temporal properties. It's a totally unstructured set of entities, but we think of this as the dust, and he thinks the dust will actually generate every computer process that you can imagine. He thinks it can generate any objects that you imagine and any conscious being that you can imagine and so on, because he thinks there are ways of interpreting the dust so that it's, for example, implementing any computer program whatsoever. And in this respect, Egan actually has some things in common with philosophers like the American philosophers Hilary Putnam and John Searle, who argued that you can find any computation anywhere.
Searle argued that his wall implemented the WordStar word processing program. Putnam suggested that maybe a rock could implement complex computations, basically because you can always map the parts of the physical object onto the parts of the computation. I actually disagree with this view. I think it's too unconstrained. I think it makes it too easy for things to be real.
And roughly the reason is I think you need constraints of cause and effect between the objects. For a bunch of entities in a rock or a wall to implement, say, WordStar, they have to be arranged in a certain way so they go through certain state transitions, and so they would go through different state transitions in different circumstances, to actually implement that algorithm. And that requires genuine causal structure. And yeah, way back in the 90s, I wrote a couple of articles arguing that the structure you'll find in a wall or a rock is not enough to implement most computer programs.
And I'd say exactly the same for Egan's dust theory, that the dust does not have enough structure to support a genuine reality because it doesn't have these patterns of cause and effect, obeying counterfactuals, if this had happened, then this would've happened. And so you just don't get that rich structure out of the dust. So I want to say that you can get that structure, but to get that structure you need dust structured by cause and effect.
And importantly, I think, in a typical computer simulation like the simulation hypothesis, it's not like the dust. Computer simulations really have this rich causal structure going on inside the computer. You've got circuits which are hooked up to each other in patterns of cause and effect that are isomorphic to those in the physical reality. That's why I say virtual realities are genuine realities, because they actually have this underlying computational structure.
But I would disagree with Egan that the dust is a genuine reality, because the dust doesn't have these patterns of cause and effect. I ended up having a bunch of email exchanges with Greg Egan about this, and he was arguing for his own particular theory of causation, which went another way. But at least that's where I want to hold the line: cause and effect matters.
Lucas Perry: So what is the work that you see the dust theory doing in your overall book, in terms of your arguments for virtual reality as genuine reality?
David Chalmers: The dust theory comes relatively late in the book, right? Earlier on I bring in this it-from-bit idea that all of reality might be grounded in information, in bits, in computational processes. I see the dust theory as partly tied to a certain objection somebody might make, that I've made it too easy for things to be real now. If I can find reality in a whole bunch of bits like that, maybe I'm going to be able to find this reality everywhere. Even if we're just connected to dust, there'll be trees and chairs, and now isn't reality made trivial? So partly there's an objection I want to address, to say no, it's still not trivial to have reality. You need all this structure, this kind of cause-and-effect structure or, roughly equivalently, a certain mathematical structure in the laws of nature.
And that's really a substantive constraint, but it's also a way of helping to motivate the view that I, and many others, call structuralism or structural realism about physical reality, which I think is actually kind of the key to my thesis. Why does virtual reality get to count as genuine reality? Ah, because it has the right structure. It has the right causal structure. It has the right kind of mathematically characterizable interactions between different entities. What matters is not so much what these things are made of intrinsically, but the interactions and the relations between them. And that's a view that many philosophers of science these days find very plausible. It goes back to Poincaré and Russell and Carnap and others, but yeah, very popular these days. What matters, let's say, for a theory in physics to be true is that basically you've got entities with the right kind of structure of interactions between them.
And if that view is right, then it gives a nice explanation of why virtual reality counts as genuine reality, because when you have a computer simulation of, say, the physical world, the simulation preserves the relevant kind of structure. So the structure of the laws of physics could be found in a physical reality, but that structure could also be found in a computer simulation of that reality. Computer simulations have the right structure. But it's not totally unconstrained. Some people, like Egan, think the dust is good enough. Some people think purely mathematical structure is good enough. In fact, your sometime boss, Max Tegmark, may think something like this. In his book on the mathematical universe, he argues that reality is completely mathematical.
And at least sometimes it seems to look as if he's saying the content of our physical theories is just purely mathematical claims, that there exist certain entities with a certain mathematical structure. And I worry, as with Egan, that if you understand the content of our theories as purely mathematical, then you'll find that structure anywhere. You'll find it in the dust. You'll find it in any abstract mathematical structure. And there's a worry that our physical theories could actually be trivialized and all end up being true, because we can always find dust or mathematical entities with the right structure. But I think if you add the constraint of cause and effect here, then it's no longer trivialized.
So I think of Egan and Tegmark as potentially embracing a kind of structuralism which is even broader than mine and lets in even more kinds of things as reality. And I don't want to be quite so unconstrained, so I want to add some of these constraints of cause and effect. This comes rather late in the book, articulating the nature of the kind of structuralism that I see as underlying this view of reality.
Lucas Perry: So Egan and Max might be letting entities into the category of what is real which might not have causal force. And so you're adopting this criterion of cause and effect being important in structuralism for what counts as genuine.
David Chalmers: Yeah. I think cause and effect is very important to our ordinary conception of reality, for example the idea that things have causal powers. I worry that if we don't have some kind of causal constraint on reality, then it becomes almost trivial to interpret reality as being anywhere. I guess what we mean by real is partly a verbal question, but I think of causal powers as very central to our ordinary notion of reality. And I think that actually manages to give us a highly constrained notion of reality, where realities are at least partly individuated by their causal structures, but where it's not so broad that arbitrary conglomerates of dust get to count as being on a par with our physical world, or arbitrary sets of mathematical entities likewise.
Lucas Perry: Let's talk more about criteria for what makes things count as real or genuine, or whether or not they exist. You spend a lot of time on this in your book, setting out and then arguing for different positions on whether certain criteria are necessary and/or sufficient for satisfying some understanding of what is real, or what it means for something to exist or be genuine. And this is really important for your central thesis of virtual reality being genuine reality, because it's important to know what it is that exists and how virtual reality fits into what is real overall. So could you explore some of the criteria for what it means for something to be part of reality, or what is reality?
David Chalmers: Yeah. I end up discussing five different notions of reality, of what it is for something to be real. This kind of goes back to The Matrix, where Neo says this isn't real and Morpheus says, "What is real? How do you define real?" That's the question: how do you define "real"? Any number of different things people have meant by real, but I talk about five main strands in our conception of reality. One very broad one is that something is real just if it exists. Anything that exists is real. So if that tree exists, it's real. If the number two exists, it's real. I think that's often what we mean. It's also a little bit unhelpful as a criterion, because it just pushes back the question to what is it for something to exist? But it's a start.
Then the second one is the one we've just been talking about, the criterion of causal powers. This actually goes back to one of Plato's dialogues, where the Eleatic Stranger comes in and says that for something to be real, it's got to be able to make a difference. It's got to be able to do something; that's the causal power criterion. To be real, you've got to have effects. Some people dispute that it's necessary. Maybe numbers could be real even if they don't have effects; maybe consciousness could be real even if it doesn't have effects. But it certainly seems to be a plausible sufficient condition, so that's causal powers. Another one is mind independence, existing independently of the mind. There's this nice slogan from Philip K. Dick, where he said that reality is that which, when you stop believing in it, doesn't go away.
That's basically to say its existence doesn't depend on our beliefs. Some things are such that their existence depends on our beliefs, I don't know, the Easter Bunny or something. But more generally, I'd say that some things have existence that depends on our minds. A mirage of some water up ahead basically depends on there being a certain conscious experience in my mind. But there are some things out there independent of my mind, that aren't all in my mind, that don't just depend on my mind. And so this leads to the third criterion: something is real when it doesn't wholly depend on our minds, when it's out there independently of us.
Now, this is a controversial criterion. People think that some things like money may be real, even though money largely depends on our attitudes towards it. Our treating something as money is part of what makes it money. And actually in the Harry Potter books, I think it's Dumbledore who has a slogan that goes the opposite way of Philip K. Dick's. At one point towards the end of the novels, Harry says, "But none of this is real; this is all just happening inside my head," and Dumbledore says something like, "Just because all this is happening inside your head, Harry, why do you think that makes it any less real?"
So I don't know, there is a kind of mental reality you get from the mind. But anyway, I think mind independence is one important thing that we often have in mind when we talk about reality. A fourth one is that we sometimes talk about genuineness or authenticity. One way to get at this is that we often talk about not just whether an object is real, but whether it's a real something. Maybe you have a robot kitten. Okay, it's a real object. It's a genuine object with causal powers, out there independently of us. But is it a real kitten? Most people would say no, a robot kitten may be a real object, but it's not a real kitten. So it's not a genuine, authentic kitten.
More generally, for any X we can ask: is this a real X? That's the criterion of genuineness. But then maybe the deepest and most important criterion for me is this: something is real if it's not an illusion, that is, if it's roughly the way it seems to be. It seems to me that I'm in this environment, that there are objects all around me in space with certain colors, that there's a tree out there and there's a pond. Roughly, I'd say that those things are real if they're out there roughly as they seem to be, but if all this is an illusion, then those things are not real. So we say things are real if they're not an illusion, if they're roughly as they seem to be. One thing I then do is to try to argue that, for the simulation hypothesis at least, if we're in a simulation, then the objects we perceive are real in all five of those senses. They have causal powers; they can do things. They're out there independently of our minds. They exist. They're genuine.
That's a real tree, at least by what we mean by tree. And they're not illusions. So five out of five on what I call the reality checklist. Ordinary virtual reality, I want to say, gets four out of five. The virtual objects we interact with are still digital objects with causal powers, out there independently of us. They exist. They needn't be illusions; I argue at length that your experiences in VR needn't be illusions. You can correctly perceive a virtual world as virtual. But arguably they're not genuine, at least. Maybe, for example, the virtual kitten that you interact with in VR: okay, it's a virtual kitten, but it's not a genuine kitten any more than the robot kitten is. So maybe virtual tables are not, at least in our ordinary language, genuine tables. Virtual kittens are not genuine kittens. They're still real objects, but maybe there's some sense in which they fail one of the five criteria for reality. So I would say ordinary virtual realities, at least as we deal with them now, may get four out of five, or 80%, on the reality checklist.
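The reality checklist just described can be sketched as a small data structure (a hypothetical encoding of my own, not anything from the book):

```python
# The five strands of "real" from the discussion above, with the scores
# Chalmers assigns: 5/5 for objects under the simulation hypothesis,
# 4/5 for present-day VR, which fails only genuineness.
criteria = [
    "existence",
    "causal powers",
    "mind independence",
    "genuineness",
    "non-illusoriness",
]

simulation_hypothesis = dict.fromkeys(criteria, True)
ordinary_vr = {**simulation_hypothesis, "genuineness": False}

for name, checklist in [("simulation", simulation_hypothesis),
                        ("ordinary VR", ordinary_vr)]:
    score = sum(checklist.values())
    print(f"{name}: {score} out of {len(criteria)}")
```

Running this prints "simulation: 5 out of 5" and "ordinary VR: 4 out of 5", matching the 80% figure above.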
It's possible that our language might evolve over time to eventually count virtual chairs as genuine chairs and virtual kittens as genuine kittens. Then we might be more VR-inclusive in our talk, and maybe we'd come to regard virtual reality as five out of five on the checklist. But anyway, that's the rough way I ended up breaking these notions down into at least five. And of course, one way to come back is to say: ah, you've missed the crucial notion of reality; actually, to be real requires this, and VR is not real in that sense. I just read a review of the book where someone said, ah, look, obviously VR isn't real because it's not part of the base level of reality, the fundamental outer shell of reality; that's what's real. So I guess this person was advocating that to be real, you've got to be part of the base, fundamental outer shell of reality. I guess I don't see why that has to be true.
Lucas Perry: I mean, isn't it though?
David Chalmers: Well.
Lucas Perry: It's implemented on that.
David Chalmers: Yeah, it's true. So that's one way to come back to this: to say the digital objects ultimately do exist in the outer shell. They're just derivative.
Lucas Perry: They're undivided from the outer shell; they can just be conceptualized as secondary.
David Chalmers: Yeah, it is very much continuous with the outer shell. I want to say at the very least they're on a par with micro-universes. People talk now about, say, baby universes growing up in black holes inside a larger universe, and people take that seriously. We'd still say, okay, well, this baby universe is part of the larger universe, and that part of the universe can be just as real as the universe as a whole. So I guess I don't think being the whole universe is required to be real. We've got to accord reality to parts of the world.
Lucas Perry: So we have kind of a common sense ontology, a common sense view of the world, and to me it seems like that's more Newtonian-feeling. Science evolves and then we get quantum mechanics. And so something you explore in your book is this difference between, I forget what you call it, the conventional view of the world and... oh, sorry, the manifest and the scientific image is what you call it. And part of this manifest image is that it seems like humans' common sense ontology is kind of Platonic.
So how would you describe the common sense view of what is real?
David Chalmers: Yeah, I talk about the Garden of Eden, which is our naive, pre-theoretical sense of the world before we've started doing science and developing a more sophisticated view. I do think we've got this tendency to think about reality as billiard balls out there, solid objects, colored objects out there in a certain space, an absolute three-dimensional space with one dimension of time. I think that's the model of reality we had in the Garden of Eden. So one of the conceits in the book is that, well, in the Garden of Eden, things actually were that way. There were three absolute dimensions of space and one dimension of time; objects were rock solid; they were colored. The way I mark this in the book is by capital letters: in the Garden of Eden, there was capital S "Space" and capital T "Time," and objects were capital S "Solid" and capital C "Colored."
They were capital R "Red" and capital G "Green." And maybe there was capital G "Good" and bad, and capital F "Free will," and so on. But then we develop the scientific view of the world. We eat from the tree of knowledge; it gives us knowledge of science. And then, okay, well, the world is not quite like that naive conception implied. There's four-dimensional spacetime without an absolute space or an absolute time. Objects don't seem to have these primitive colors out there on their surfaces; they just have things like reflectance properties that reflect light in a certain way that affects our experience in a certain way. Nothing is capital S "Solid." Objects are mostly empty space, but they still manage to resist penetration in the right way. So I think of this as the fall from Eden. For many things, we've gone from capital S "Space" to lowercase s "space." We've gone from capital S "Solidity" to lowercase s "solidity."
And one thing that I think goes on here is that we've moved from a conception of these things as primitive, primitive space and primitive colors, just redness out there on the surface of things, what I call primitivism, to a kind of functionalism, where we understand things in terms of their effects. To be red now is not to have some absolute intrinsic quality of redness, but to be such as to affect us, to produce certain experiences, to look red. To be solid is not to be absolutely, intrinsically solid, but to interact with other objects in the way solid things do.
So I think in general this goes along with moving from a conception of reality as all these absolute intrinsic properties out there to a much more structuralist conception of reality, where what matters for things being real is the right patterns of causal interaction of entities with each other. I'm not saying all there is to reality is structure. My own view is that consciousness in particular is not reducible to this kind of abstract structure; consciousness does in fact have some intrinsic qualities and so on. So I do think that's important too. But in general, the move from the naive conception to the scientific conception of reality has often involved going from a conception of these primitive intrinsic qualities to a more structural conception of reality.
Lucas Perry: Right. So I imagine that many of the people who will resist this thesis in your book, that virtual reality is genuine reality, may be coming at it from some of these more common sense intuitions about what it means for something to be real, like red as a property that's intrinsic to the surface of a thing. Are there common sense intuitions or misconceptions that you see your book as addressing?
David Chalmers: I guess I do think many people find it common sense that virtual reality is not full-scale reality, first-class reality, that it doesn't live up to our ordinary conception of reality. And sometimes I think they may have in mind this Edenic conception of reality, the way it was in the Garden of Eden, to which my reply is: yeah, okay, I agree. Virtual reality does not have everything that we had in the Garden of Eden conception of reality, but neither does ordinary physical reality. Even the kind of physical reality developed in light of science is not the Garden of Eden picture of reality anymore. We've lost absolute space and absolute time. Now we've lost absolute colors and absolute solidity. What we have now is this complex mathematical structure of entities interacting at a deep level.
I mean, the further you look, the more evanescent it gets. Quantum mechanics is just this wave function, where objects don't need to have determinate positions. And who knows what's going on in string theory and other quantum gravity theories; it looks like space may not be fundamental at all. People have entertained the idea that time is not fundamental at all. So I'm saying virtual reality is genuine reality, but one way to paraphrase that is: virtual reality is just as real as physical reality. If you want to hear that as saying, well, physical reality has turned out to be more like virtual reality, then I wouldn't necessarily argue with that. Physical reality is not the Garden of Eden billiard-ball conception of reality anymore.
It's this much more evanescent thing, which is partly characterizable by having a certain kind of structure. And I think all of that we can find in virtual reality. So one thing I would say to this questioner is: well, what do you think even about physical reality, in light of the last hundred years or so of science?
Lucas Perry: Yeah. The reviewer's comments that you mentioned come off to me as kind of being informed by the Eden view.
David Chalmers: Yeah, I think that's right. It's quite common, and that's really what it is: our naive conception of reality and what reality is like. But maybe it's already turned out that the world is not real in that sense.
Lucas Perry: One thing I'd like to pivot into here is exploring value more. How do you see the question of value fitting into your book? There's this other central thesis here, that you can live a good life in virtual reality, which seems to go against people's common intuition that you can't. There's this survey about whether or not people would go into experience machines, and most people wouldn't.
David Chalmers: Yeah, Nozick had this famous case of the experience machine, where your body's in a tank and you get all these amazing experiences of being highly successful. Most people say they wouldn't enter the experience machine. Of professional philosophers, on a survey we did, maybe 15% said they would enter, 70-odd percent said they wouldn't, and a few were agnostic. Many people have treated the experience machine as a model for VR in general, but I think the experience machine as Nozick described it is actually different from VR in some respects. One very important respect is that the experience machine seems to be scripted, pre-programmed: you go in there and your life will live out a script. You get to become world champion, but it wasn't really anything you did; that was just the script playing itself out. Many people think that's fake. That's not something I actually did; it was just something that happened to me.
In VR, by contrast, even in an ordinary video game, you've still got some degree of free will. You're to some extent controlling what happens. You go into Second Life or Fortnite, and basically it's not scripted; it's not pre-programmed; it's open-ended. I think the virtual worlds of the future will be increasingly open-ended. So I don't think worries about the experience machine tend to undermine virtual worlds. More generally, I want to argue that virtual worlds can basically be on a par with physical worlds, especially once we've recognized that they needn't be illusions, they needn't be pre-programmed, and so on. Then what are they missing? What's important to a good life? Maybe consciousness, the right subjective experiences. Also relationships, very, very important. But I think you can have those in VR, certainly at least in a multi-user VR where many people are connected.
That's another thing about the experience machine: it's just you, presumably, who's conscious. But in a virtual world with many conscious beings, you can have relationships with them and get the social meaning of your life that way. Knowledge and understanding, I think you can come to have all those things in VR. Basically, for all the determinants of a good life, it's hard to see what's in principle missing in VR. There are some worries. Maybe if you want a fully natural life, a life which is as close to nature as possible, VR is not going to do it, because it's going to be removed from nature. But then many of us live in cities or spend most of our time indoors. That's also removed from nature, and it's still compatible with a meaningful life. There are issues about birth and death; it's not obvious how genuine birth and death will work, at least in near-term virtual worlds.
Maybe once there's uploading, there'll be birth and death in virtual worlds, if the relevant creatures are fully virtual. But you might think that if virtual worlds lack birth and death, there are aspects of meaning that they lack. I don't want to say they're exactly on a par with physical reality in all respects, but I'd say that virtual realities can at least have the prime determinants of a good and meaningful life. That's not to say that life in virtual reality is going to be wonderful. It may well be awful, just as life in physical reality could be awful. But my thesis is roughly that at least the same range of value, from the wonderful to the awful, is possible in virtual reality, just as it is in physical reality.
Lucas Perry: It sounds like a lot of people are afraid that they'd be losing out on some of the important things you get from natural life if virtual life were to take over.
David Chalmers: What are the important things you have in mind?
Lucas Perry: You mentioned people want to be able to accomplish things. People want to be a certain sort of person. People want to be in touch with a deeper reality.
David Chalmers: I certainly think in VR you can be a certain sort of person, with your own characteristic personal traits. You can have transformative experiences in virtual reality. Probably you can develop as a person. You can certainly have achievements in VR.
People who live and spend a lot of time, long term, in worlds like Second Life certainly have real achievements, real relationships. As for being in touch with a deeper reality: if by a deeper reality you mean nature, then in VR you're somewhat removed from nature, but I think that's somewhat optional.
In the short term at least, there are things like the role of the body. In existing VRs, embodiment is extremely primitive. You've got these avatars, but our relationship with them is nothing like our relationship with our physical body. Things like eating, drinking, sex, or just physical companionship and so on: there are no genuine analogs for those in existing VR. Maybe as time goes on, those things will become better. But I can imagine people thinking, I value the experiences of my physical body, real eating and drinking and sex and companionship and so on.
But I could also imagine people in 200 years' time saying: we've got these virtual bodies, which are actually amazing; they can do all that and give you all those experiences and much more; hey, you should try this. Maybe different people will prefer different things. But I do think, to some considerable extent, thoughts about the body may be responsible for a fair amount of resistance to VR.
Lucas Perry: Could you talk a little bit about the different kinds of technological implementations of virtual reality, whether it be uploading or brains connected to virtual realities?
David Chalmers: Right now the dominant virtual worlds are not even VR at all, of course. The virtual worlds people use the most now are video-game-style worlds, typically on desktop or mobile computers with 2D screens.
But immersive VR is picking up speed fast with virtual reality headsets like the Oculus Quest. They're still bulky and somewhat primitive, but they're getting better every year, and they'll gradually get less bulky and less primitive, with more detail, better images, and so on.
The other form factor, which is developing fast now, is the augmented reality form factor, with something like glasses or transparent headsets that allow you to see the physical world but also project virtual objects into the physical world.
Maybe it's an image of someone you're talking to. Maybe it's just some information you need for dealing with the world. Maybe it's a Pokemon Go creature you're trying to acquire for your digital collection.
That's the augmented reality form factor in glasses. A lot of people think that over the next 10 or 20 years, the augmented and virtual reality form factors could converge. Eventually we'll maybe be able to have a set of glasses that could project digital objects into your environment, based on computer processes.
Maybe you could have a slider: dial it all the way down to dial out the physical world and be in a purely virtual world; dial it all the way up to be in a purely physical world; or, in between, have elements of both.
That's one way the technology seems to be going. In the longer term, there's the possibility of bringing in brain-computer interfaces. I think VR with standard perceptual interfaces works pretty well for vision and for hearing. You can get pretty good visual and auditory experiences from VR headsets, but embodiment, your sense of your own body, is much more limited.
But maybe once brain-computer interfaces are possible, there'll be ways of getting these computational elements to interact directly with bits of your brain, whether it's, say, visual cortex and auditory cortex for vision and hearing, or the parts of the brain responsible for the various aspects of embodied experience.
Maybe that could eventually give you more authentic bodily experiences. Then eventually, potentially all kinds of computational circuitry could come to be embedded with brain circuitry, circuitry which is going to be partly biological and partly digital.
In the long term, of course, there's the prospect of uploading, which is uploading the brain entirely to a digital process. Maybe once our brains are wearing out, we'll replace some of them with silicon circuitry; but if you want to live forever, upload yourself completely.
Then you're running on digital circuitry. Of course, this raises so many philosophical issues. Will it still be me? Will I still be conscious? And so on. But assuming that it is possible to do this and have conscious beings with this digital technology, then that being could be fully continuous with the rest of the world.
That would just open up so much potential for new virtual reality, combined with new cognitive processes, possibly giving rise to experiences that we can't now even imagine. Now, this is the very distant future; I'm thinking 100-plus years, who knows.
Lucas Perry: You have long AGI timelines.
David Chalmers: This all does interact with AGI. I'm on record as 70% chance of AGI within a century. Maybe that's sped up a bit.
Lucas Perry: You have shorter timelines.
David Chalmers: As far as this interacts with AI, I'm maybe at 50 years mean expected value for years until AGI. Once you get to AGI, all this stuff ought to happen pretty fast. Maybe there's a case for saying that within a century is conservative.
Lucas Perry: For uploads?
David Chalmers: Yeah, for uploads. I think once you go to AGIs, uploads are presumably-
Lucas Perry: Around the corner?
David Chalmers: ... uploads are around the corner. At least if you believe, like me, that once you get to AGI, then you'll have AGI-plus, and then AGI-plus-plus, superintelligence. And the AGI-plus-plus is not going to have too much trouble with uploading technology and the like.
Lucas Perry: How does consciousness fit into all this?
David Chalmers: One very important question for uploading is whether uploads will even be conscious. This is also very relevant to thinking about the simulation hypothesis. Because if computer simulations of brains are not conscious, then it looks like we can rule out the simulation hypothesis, because we know we are conscious.
If simulations couldn't be conscious, then we're not simulations. At least the version of the simulation hypothesis, where we are part of the simulation could then be ruled out.
Now as it happens, I believe that simulations can be conscious. I believe consciousness is independent of substrate. It doesn't matter whether you're up and running on biology or on silicon, you're probably going to be conscious.
You can run these familiar thought experiments, where you replace say neurons by silicon chips, replace biology by digital technology. I would argue that consciousness will be preserved.
That means at the very least gradual uploading, where you upload bits of your brain, let's say a neuron at a time. I think that's a pretty plausible way to preserve consciousness and preserve identity. But I could be wrong about that, because nobody understands consciousness.
If I'm wrong about that, then uploads will not be conscious and these totally simulated worlds that people produce could end up being worlds of zombies. That's at least something to worry about.
It'd certainly be risky to upload everybody to the cloud, to digital processes. Always keep some people anchored in biology, just in case consciousness does require biology, because it'd be a rather awful future to have a world of superintelligent but unconscious zombies being the only beings that exist.
Lucas Perry: I've heard from people who agree with substrate independence but hold that digital or classical computers can't be conscious. Are you aware of views like that, and do you have a response to people who agree that consciousness is substrate independent but think that classical digital computers can't be conscious?
I'm not sure what their exact view is, but it's something like: the bits don't all know about all the other bits, so there's no integration to create a unified conscious experience.
David Chalmers: The version of this I'm most familiar with comes from Giulio Tononi's Integrated Information Theory. Tononi and Christof Koch have argued that processes running on classical computers, that is, on von Neumann architectures, cannot be conscious.
Roughly because von Neumann architectures have this serial core that everything is run through. They argue that this doesn't have the property that Tononi calls integrated information and therefore is not conscious.
Now I'm very dubious about these arguments. I'm very dubious about a theory that says this serial bottleneck would undermine consciousness. I just think that's all part of the implementation.
You could still have 84 billion simulated neurons interacting with each other. The mere fact that their interactions are mediated by a common CPU, I don't see why that should undermine consciousness.
But if they're right then fine, I'd say they've just discovered something about the functional organization that is required for consciousness. It needs to be a certain parallel organization as opposed to this serial organization.
But if so, it's still perfectly substrate independent. As long as we upload ourselves not to a von Neumann simulation, but to a parallel simulation, which is obviously going to be the most powerful and efficient way to do this anyway, then uploading ought to be possible.
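Chalmers's point that a single serial CPU can still mediate a functionally parallel network can be illustrated with a toy sketch. Everything here is invented for illustration (the tiny network, the random weights, the threshold rule): each neuron's next state depends on the whole current state, and all updates are committed together, even though the loop executes one neuron at a time through a serial bottleneck.

```python
# Sketch: a serial von Neumann machine stepping a "parallel" network.
# The network, weights, and update rule are made up for illustration.
import random

random.seed(0)
N = 8  # stand-in for the brain's tens of billions of neurons
weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
state = [random.choice([0.0, 1.0]) for _ in range(N)]

def step(state):
    """Compute every neuron's next state from the current global state,
    then commit them all at once -- functionally a parallel update,
    even though the loop below runs serially on one CPU core."""
    new_state = []
    for i in range(N):  # the serial bottleneck: one neuron at a time
        total = sum(weights[i][j] * state[j] for j in range(N))
        new_state.append(1.0 if total > 0 else 0.0)
    return new_state

for _ in range(5):
    state = step(state)

print(state)
```

The serialization here is pure implementation detail: the resulting state trajectory is identical to what a genuinely parallel machine would compute, which is the sense in which the CPU's serial core "mediates" rather than changes the interactions.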
I guess another view is that consciousness could turn out to rely on quantum computation in some essential way. A mere classical computer might not be conscious, whereas quantum computers could be.
If so, that's very interesting, but I would still imagine that all of that would also be substrate independent, and for uploading, then, we'd just need to upload ourselves to the right quantum computer. I think those points, while interesting, don't really provide fundamental obstacles to uploading with consciousness here.
Lucas Perry: How do you see problems in the philosophy of identity fitting in here into virtual reality? For example with Derek Parfit's thought experiments.
David Chalmers: Parfit had these famous thought experiments about the teletransporter from Star Trek, where you duplicate your body. Is that still me at the other end? The uploading cases are very similar to that in certain respects.
With the teletransporter, you've got so many different cases. Is the original still around when you create the copy? What if you create two copies? All these come up in the uploading case too.
There's destructive uploading where we destroy the original, but create an upload. There's non-destructive uploading, where we keep the original around, but also make an upload. There's multiple copy uploading and so on.
In certain respects, these are very much analogous to the teleporter case. The change is that we don't duplicate the being biologically. We end up with a silicon isomorph, rather than a biological duplicate.
But aside from that, they're very similar. If you think that a silicon isomorph can be just as conscious as a biological being, maybe the two things roughly go together.
The same puzzle cases very much arise. Just say the first uploads are non-destructive: we stay around and we create uploaded copies. Then the tendency is going to be to regard the uploads as very different people from the original.
If the first uploads are destructive, you make copies while destroying the original. Maybe there's going to be much more of a tendency to regard the uploads as being the same person as the original.
If we could make multiple uploads all the time, then there'll be maybe a tendency to regard uploads as second class citizens and so on. The thought experiments here are complex and wonderful.
I tend myself to be somewhat sympathetic with Parfit's deflationary views of these things, which is that there may not be very much absolute continuity of people over time. Perhaps the very concept of personal identity is one of these Edenic concepts, the idea that we actually persist through time as absolute subjects.
Maybe all there are are just different people at different times that stand in psychological and memory and other continuity relations to each other. Maybe that's all there is to say.
This gets closer now to Buddhist-style no-self views, at least with no Edenic, capital-S "Self," but I'm very unsure about all of these matters of identity.
Lucas Perry: How would you upload yourself?
David Chalmers: I think the safest way to do it would be gradually. Replace my neurons one at a time by digital circuits. If I did it all at once, destroyed the original and created an uploaded copy, I'd worry that I'd be gone. I don't know that. I just worry about it a bit more.
To remove that worry, do it gradually and then I'm much less worried that I'd be gone. If I can do it a bit at a time I'm still here. I'm still here. I'm still here. To do it with maximum safety, maybe I could be conscious throughout, with a continuous stream of consciousness throughout this process.
I'm here watching the operation. They change my neurons over, and in that case, then it really seems to me as if there's a continuous stream of consciousness. A continuous stream of consciousness, I don't know if it guarantees identity over time, but it seems pretty close to what we have in ordinary reality.
Having a continuous stream of consciousness over time seems to be the thing that goes along with what we usually think of as identity over time. It's not required, because we can fall asleep, arguably lose consciousness, and wake up.
Most people would say we're still the same person, but still, being continuously conscious for a period seems about as good a guarantee as you're going to get of being the same person. Maybe this would be the philosophically safest way to upload.
Lucas Perry: Is sleeping not an example that breaks that?
David Chalmers: I'm not saying it's a necessary condition for personal identity, just a sufficient condition: plausibly, continuous consciousness is sufficient for identity over time, insofar as there is identity over time. Yes, it's probably too strong a condition.
Maybe you can get identity from much weaker relations, but in order to be as safe as possible, I'm going to go with the strongest sufficient condition.
Lucas Perry: One neuron at a time.
David Chalmers: Maybe 10 neurons at a time. Maybe even a few columns at a time. I don't know.
Lucas Perry: Do you think Buddhists who realize no-self would be more willing to upload?
David Chalmers: I would think so and I would hope so. I haven't done systematic polls on this. Now I'm thinking I've got to get the data from the last PhilPapers survey and find views on uploading, which we asked about. We didn't ask whether you're a Buddhist, but we did ask whether you, for example, specialize in Asian philosophy.
I wonder if there could at least be a correlation between specialization in Asian philosophy and certain views about uploading. Although it'll be complicated by the fact that this will also include Hindu philosophers who very much believe in an absolute self, and Chinese philosophers who have all kinds of very different views. Maybe it would require some more fine-grained survey analysis.
Lucas Perry: I love that you do these surveys. They're very cool. Everyone should check them out. It's a really cool way to see what philosophers are thinking. If you weren't doing them, we wouldn't know.
David Chalmers: Go to philsurvey.org. In this later survey in 2020, we surveyed about 2,000-odd philosophers from around the world on 100 different philosophical questions: God, theism or atheism; mind, physicalism or non-physicalism; and so on.
We actually got data about what professional philosophers tend to believe. You can look at correlations between questions, correlations with area, with gender, with age, and so on. It's quite fascinating. Go to philsurvey.org and you'll find the results.
Lucas Perry: Descartes plays a major role in your book, both due to his skepticism about the external world, and whether we can know anything about it. Then there's also the mind-body problem, which you explore. Since we're talking about consciousness and the self, I'm curious if you could explain how the mind-body problem fits into all this?
David Chalmers: In a number of ways. Questions about the mind are not front and center in this book, but they come up along the way in many different contexts. In the end, actually part five of the book has three chapters on different questions about the mind.
One of them is the question we've just been raising. Could AI systems be conscious? Could uploading lead to a conscious being and so on? That's one chapter of the book. But another one just thinks about mind, body relations in more ordinary virtual realities.
One really interesting fact about existing VR systems is that they actually realize that Cartesian picture. Descartes thought there's a physical world that the mind interacts with, and the mind is outside the physical world but somehow interacts with it.
Look at a virtual world: virtual worlds often have their own physics and their own algorithmic processes that govern the physical processes in the virtual world. But then there's this other category of things, users, players, people who are using VR, and they are running on processes totally outside the virtual world.
When I enter VR, the VR has its own physics, but I am not subject to that physics. I've got this mind which is operating totally outside the virtual world. You can imagine if somebody grew up in a virtual world like this.
If Descartes grew up in a virtual world, we've got an illustration where Descartes grows up inside Minecraft and gets into an argument with Princess Elizabeth about whether the mind is outside this physical world, interacting with it.
Most people think that the actual Descartes was wrong, but if we grew up in VR, Descartes would be right. He'd say, yeah, the mind is actually something outside. He'd look at the world around him and say, "This is subject to its physics, and so the mind is just not part of that. It's outside all that. It exists in another realm and interacts with it."
There's a perspective from the broader realm, from which all this looks physical and continuous. But at least from the perspective of the virtual world, it's as if Descartes was right.
That's an interesting illustration of Cartesian interactionist dualism, where the mental and the physical are distinct. It shows a way in which something like that could turn out to be true under certain versions of the simulation hypothesis, say with brains interacting with simulations.
Maybe something isomorphic to it is even true in ordinary virtual realities. At least it's interesting in making sense of this mind-body interaction, which is often viewed as an unscientific or non-naturalistic idea. But here's a perfectly naturalistic version of mind-body dualism.
Lucas Perry: I love this part and also found it surprising for that reason, because Cartesian dualism, it always feels supernatural, but here's a natural explanation.
David Chalmers: One general theme in this book is that there's a lot of stuff that feels supernatural, but once you look at it through the lens of VR, needn't be quite so supernatural, looks a lot more naturalistic. Of course, the other example is God. If your creator is somebody, a programmer in the next universe up, suddenly God doesn't look quite so supernatural.
Lucas Perry: Magic is like using the console in the next reality up to run scripts on the simulated world, or something like that.
David Chalmers: This is naturalistic magic. Magic has to obey its own principles too. They're just different principles in the next universe up.
Lucas Perry: Clearly it seems your view is that consciousness is the foundation of all value. Is that right?
David Chalmers: Pretty much. Pretty much. Without consciousness no value. I don't want to say consciousness is all there is to value. There might be other things that matter as well, but I think you probably have to have consciousness to have value in your life.
Then, for example, relations between conscious beings, relations between consciousness and the world, can matter for value. Nozick's experience machine tends to suggest that consciousness alone is not quite enough.
There have maybe got to be things like actually achieving your goals and so on that matter as well. But I think consciousness is at the very core of what matters and of value.
Lucas Perry: We have virtual worlds and people don't like them, because they want to interact with whatever's natural, or they want to be a certain kind of person, or they want the people in them to be implemented in real space, things like that.
Part of what makes being in Nozick's experience machine unsatisfactory, is knowing that some of these things aren't being satisfied. But what if you didn't know that those things weren't being satisfied? You thought that they were.
David Chalmers: I guess my intuition is that's still bad. There's this famous case that people have raised: just say your partner is unfaithful to you, but it's really important to you that your relationship be monogamous. However, your partner, although professing monogamy, has gone off and had relationships with all these other people.
You never know, you're very happy, and you go to your death without ever knowing. I think most people's intuition is that that is bad. That life is not as good as one where the life was the way this person wanted it to be, with a monogamous partner.
That brings out that having your goals or your desires satisfied, the world being the way you want it to be, matters to how good and meaningful a life is. I'd say the experience machine is a more extreme example of that.
We really want to be doing these things. If I was to find out 100 years later that hey, any success I'd had in philosophy wasn't because I wrote good books. It's just because there was a script that said there'd be certain amounts of success and sales and whatever.
Then, boy, that would render any meaning I'd gotten out of my life perfectly hollow. Likewise, even if I never discovered this, if I had the experience of having the successful life, but it was all merely pre-programmed, then I think that would render my life much less good.
It'd still be meaningful, but just much less good than I thought it had been. That brings out that the goodness, or the value, of one's life depends on more than just how one experiences things to be.
Lucas Perry: I'm pushing on consequentialist or utilitarian sensibilities here; someone with those might bite the bullet and say that if you didn't know any of those things, then those worlds are still okay. One thing that you mention in your book is that your belief that virtual reality is good is independent of the moral theory that one has. Could you unpack that a bit?
David Chalmers: I don't know if it's totally independent, but I certainly think that my view here is totally consistent with consequentialism and utilitarianism that says, what matters in moral decision making is maximizing good consequences, or maximizing utility.
Now, if you go on to identify the relevant good consequences with conscious states, like maximizing pleasure, or if you say all there is to utility is the amount of pleasure, then you would take a different view of the experience machine.
If you thought that all there is to utility is pleasure, you'd say that in the experience machine, I have the right amount of pleasure, so that's good enough. But I think that's going well beyond consequentialism, or even utilitarianism.
That's adding a very specific view of utility, the one that the founders of utilitarianism, like Bentham and Mill, had. I would just advocate a broader view of consequentialism, or utilitarianism, where there are values that go beyond value deriving from pleasure, or from conscious experience.
For example, one source of value is having your desires satisfied or achieving your goals. I think that's perfectly consistent with utilitarianism, but maybe more consistent with some forms than others.
Lucas Perry: Is having your values satisfied, or your preferences satisfied, not just another conscious state?
David Chalmers: I don't think so, because you could have two people who go through exactly the same series of conscious states, but for one of whom their desires are satisfied, and for the other one, their desires are not satisfied. Maybe they both think their desires are satisfied, but one of them is wrong. They both want their partners to be monogamous. One partner is monogamous and the other one is not. They might have exactly the same conscious states, but for one of them the world is the way they want it to be, and for the other the world is not the way they want it to be.
This is what Nozick and others have argued in light of the experience machine: there's maybe a value in desire satisfaction that goes beyond the value of consciousness per se. I should say both of these views, even the pleasure-centric view, are totally consistent with my general view of VR. If someone says, "All that matters is experiences," well, in a certain sense, great. That makes it even easier to lead a good life in VR. But I think the dialectic runs the other way around too, even if someone rejects that view ... I tend to believe there's more that matters than just consciousness. Even if you say that, you can still have a good life in a virtual world.
I mean, there'll be some moral views where you can't. Just say you've got a biocentric view of what makes a life good: you've got to have, somehow, the right interactions with real biology. I don't know, then maybe certain virtual worlds won't count as having the right kind of biology, and then they won't count as valuable. So I wouldn't say these issues are totally independent of each other, but I do think plausible moral theories are very much going to be consistent with being able to have a good life in virtual worlds.
Lucas Perry: What does a really good life in a virtual world look like to you?
David Chalmers: Oh boy. What does a really good life look like to me? I mean, different people have different values, but I would say I get value partly from personal relationships, from getting to know people, by having close relationships with my family, with partners, with friends, with colleagues. I get a lot of value from understanding things, from knowledge and understanding. I get some value from having new experiences and so on. And I guess I'd be inclined to think that in a virtual world, the same things would apply. I'd still get value from relationships with people, I'd still get value from knowledge and understanding, I'd still get value from new kinds of experience.
Now, there may be ways in which VR might allow this to go beyond what was possible outside VR. Maybe, for example, there'll be wholly new forms of experience that go way beyond what was possible inside physical reality, and maybe that would allow for a life which is better in some respects. Maybe it'll be possible to have who knows what kind of telepathic experiences with other people that give you even closer relationships that are somehow amazing. Maybe it'll allow immortality, where you can go on having these wonderful experiences for an indefinite amount of time, and that could be better.
I guess in the short term, the kind of good experiences I'll have in VR are very much continuous with the good experiences I'll have elsewhere. I meet friends sometimes in VR, interact with them, talk with them, sometimes play games, sometimes communicate, maybe occasionally have a philosophy lecture or a conference there. So right now, what's good about VR is pretty much continuous with a lot of what's good about physical reality. But in the long term, there may be ways for it to go beyond.
Lucas Perry: What's been your favorite VR experience so far?
David Chalmers: Oh boy, everything is fairly primitive for now. I enjoy a bunch of VR games, and I enjoy hanging out with friends. One enjoyable experience was when I gave a little lecture about VR, in VR, to a group of philosopher friends. And we were trying to figure out the physics of VR, of the particular virtual world we were in, which was in an app called Bigscreen.
Lucas Perry: Yeah.
David Chalmers: And yeah, you do things in Bigscreen, like throw tomatoes, and they behave in weird ways. They kind of obey laws of physics, but they kind of don't, and the avatars have their own ways of moving. So we were trying to figure out the basic laws of Bigscreen, and we didn't get all that far, but we figured out a few things. We were doing science inside a virtual world. And presumably, if we'd kept going, we could have gotten a whole lot further into the depths of what the algorithms really are that generate this virtual world, though that might have required a scientific revolution or two. So I guess that was a little instance of doing a bit of science inside a virtual world and trying to come to some kind of understanding, and it was at least a very engaging experience.
Lucas Perry: Have you ever played any horror games?
David Chalmers: Not really, no. I'm not much of a gamer, to be honest. I play some simple games like Beat Saber or, what is it? SUPERHOT. But that's not really a horror game, even though assassins come after you. What's your favorite horror game?
Lucas Perry: I was just thinking of my favorite experience, and it was probably ... Well, I played Killing Floor once when I first got the VR headset, and it was probably the most frightening experience of my life. The first time you turn around and there's an embodied thing that feels like it's right in your face, very interesting. In terms of consciousness and ethics and value, we can explore things like moral patiency and moral agency. So what is your view on the moral status of simulated people?
David Chalmers: My own view is that the main thing, the biggest thing that matters for moral status is consciousness. So as long as simulated beings are conscious as we are, then they matter. Now maybe current non-player characters of the kind you find in video games and so on are basically run by very simple algorithms, and most people would think that those beings are not conscious, in which case their lives don't matter, in which case it's okay to shoot these current NPCs in video games.
I mean, maybe we're wrong about that, and maybe they have some degree of consciousness and we have to worry. But at least the orthodox view here would be that they're not, and even on a view that ascribes some consciousness, it's probably a very simple form of consciousness. But if we look now to a long-term future where there are simulations of brains and simulated AGIs inside these simulated worlds with capacities equivalent to our own, I'd be inclined to think that these beings are going to be conscious like us. And if they're conscious like us, then I think they matter morally, the way that we do, in which case one should certainly not be indiscriminately killing simulated beings just because it's convenient, or indiscriminately creating them and turning them off. So I guess if we do get to the point where ...
I mean, this applies inside and outside simulations. If we have robot-style AGIs that are conscious, they have moral status like ours; if we have simulation-style AGIs inhabiting simulations, they also have moral status much like ours. Now, I'm sure there's going to be a long and complicated path to actually seeing that play out in social and legal contexts, and there may be all kinds of resistance to granting simulations legal rights, social status, and so on. But philosophically, I guess I think that if they're conscious like us, they have a moral status like ours.
Lucas Perry: Do you think that there will be simulated agents with moral status that are not conscious, for example? They could at least be moral agents and not be conscious, but in a society and culture of simulated things, do you think that there would be cases where things that are sufficiently complex, yet not conscious, would still be treated with moral patiency?
David Chalmers: It's interesting. I'm inclined to think that any system that has human-level behavior is likely to be conscious. I'm not sure that there are going to be cases of zombies that lack consciousness entirely but behave in extremely sophisticated ways just like us. But I might be wrong. Just say Tononi and Koch are right and no being running on a von Neumann architecture is conscious. Then it might be smart to develop those systems, because they won't have moral status, but they'll still be able to do a lot of useful things. But would they still then be moral agents?
Well, yeah, presumably these behaviorally equivalent systems could do things that look a lot like making moral decisions, even though they're not conscious. Would they be genuine agents if they're not conscious? That may be partly a verbal matter, but they would do things that at least look a lot like agency and making moral decisions. So they'd at least be moral quasi-agents. Then it's an interesting question whether they should be moral patients, too. If you've got a super-zombie system making moral decisions, does it deserve some moral respect? I don't know. I mean, I'm not convinced that consciousness is the only thing that matters morally. Maybe, for example, intelligence or planning or reasoning carries some weight independent of consciousness.
If that's the case, then maybe these beings that are not conscious could still have some moral status as moral patients, that is, deserving to be treated well, not just as moral agents performing moral actions. Maybe it would be a second-class moral patiency. Certainly, if the choice was between, say, killing a being like that and killing an equivalent conscious being, I'd say kill the unconscious one. But that's not to say they have no moral status at all.
Lucas Perry: So one of your theses that I'd like to hit on here as well was that we can never know that we're not in a simulation. Could you unpack this a bit?
David Chalmers: Yeah. Well, this is very closely connected to these traditional questions in epistemology. Can you know you're not dreaming now? Can you know that you're not being fooled by an evil demon now? The modern tech version is, "Can you know you're not in a simulation?" Could you ever prove you're not in a simulation? And there are various things people might say: "Oh, I am not in a simulation." I mean, naively, "This can't be a simulation, because look at my wonderful kitten here. That could never be simulated. It's so amazing." But presumably there could be simulated kittens. So that's not a decisive argument.
More generally, I'm inclined to think that for any evidence anyone could come up with that's allegedly proof that we're not in a simulation, that evidence could be simulated, and the same experience could be generated inside a simulated world. It starts to look like there's no piece of evidence that could ever decisively prove we're not in a simulation. And the basic point is just that a perfect simulation would be indistinguishable from the world it's a simulation of. If that's the case, it's awfully hard to see how we could prove that we're not in a simulation.
Maybe we could get evidence that we are in a simulation. Maybe the simulators could reveal themselves to us and show us the source code. I don't know. Maybe we could stress-test the simulation by suddenly running a really intense computer process, more advanced than anything before, and maybe it stresses out the simulation and leads to a bug or something. Maybe there are ways we could get evidence.
Lucas Perry: Maybe we don't want to do that.
David Chalmers: Okay. Maybe that will shut us down.
Lucas Perry: That'll be an x-risk.
David Chalmers: Yeah. Okay. Yeah. Maybe not a good idea. So there are various ways we could get evidence that we are in a simulation, at least in an imperfect simulation. But I don't think we can ever get the evidence in the negative that fully proves that we're not in a simulation. We can try and test for various imperfect simulation hypotheses, but if we get just the ordinary expected results, then it's always going to be consistent with both. And there are various philosophers who have tried to say, "Ah, there are things we could do to refute this idea." Maybe it's meaningless. Maybe we could rule it out by the non-simulation hypothesis being the simpler hypothesis, and so on.
So in the book, I try and argue that none of those things work either. And furthermore, once you think about the Bostrom-style simulation argument, which says it may actually be quite likely that we're in a simulation, because it seems pretty reasonable to think that most intelligent populations will develop simulation technology, then I think it becomes even harder to refute the simulation hypothesis. Basically, these simulation-style hypotheses used to be science fiction cases, very distant from anything we have direct reason to believe in.
But as the technology is developing, these simulation-style hypotheses become realistic hypotheses, ones which there's actually very good reason to think are likely to be developed both in our world and in other worlds. And I think that's had the effect of making these Cartesian scenarios move from the status of science fiction to being live hypotheses, and I think that makes them even harder to refute. I mean, you can make the abstract point that we can never prove it even without the modern technology. But once this technology actually exists, it becomes all the harder to dismiss epistemologically.
Lucas Perry: You give some credence in your book for whether or not we live in a simulation. Could you offer those now?
David Chalmers: Yeah. I mean, of course, anything like this is extremely speculative. But basically, in the book, I argue that if there are many conscious human-like simulations, and we are probably simulations ourself, and then the question is, "Is it likely that there are many conscious human-like simulations?" And there's a couple of ways that could fail. First, it could turn out that simulating beings like us or universes like ours is not even possible. Maybe the physics is uncomputable. Maybe consciousness is uncomputable. So maybe conscious human-like simulations like ours could be impossible. That's one way this could fail to happen. That's what I call a sim blocker. These are things that would block these simulations from existing. So one class of sim blockers is, yeah, simulations like this are impossible. But I don't think that's more than 50% likely. I'm actually more than 50% confident that simulations like this are possible.
The other class of sim blockers is, well, maybe simulations like this are possible, but for various reasons they'll never be developed, or not many of them will be developed. And this class of sim blockers includes the ones that Bostrom focuses on. I think there's two of them: either we'll go extinct before we get to the level of technology where we can create simulations, or we'll get there but choose never to create them, or intelligent civilizations will choose never to create them. So that's the other way this can go wrong: yeah, these things are possible, but not many of them will ever be created. And I basically say, "Well, if these are possible, and if they're possible many of them will be created, then many of them will be created, and we get a high probability that we're in a simulation."
But then I think, "Okay, so what are the probabilities of each of those two premises?" That conscious human-like simulations are possible? Yeah, I think that's at least 50%. Furthermore, if they're possible, will many of them be created? I don't know what the numbers are here, but I guess my subjective credence is over 50% in that too, given that it just requires some civilizations to eventually create a whole lot of them.
Okay, so 50% chance of premise one, 50% chance of premise two. Let's assume they're roughly independent of each other. That gives us a 25% chance they're both true. If they're both true, then most beings are simulations, and if most beings are simulations, we're probably simulations. Putting all that together, I get roughly, at least, a 25% chance that we're in a simulation. Now there's a lot of room for the numbers to go wrong. But yeah, to me, that's at least very good reason, A, to take the hypothesis seriously, and B, it suggests that if it's at 25%, we certainly cannot rule it out. So that gives a quasi-numerical argument that we can never know that we're not in a simulation.
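[The quasi-numerical argument Chalmers sketches here can be written out in a few lines. This is just a sketch of the arithmetic from the conversation, nothing more; the variable names are illustrative.]

```python
# Chalmers's rough credences, as stated above, treated as roughly independent.
p_possible = 0.5  # premise one: conscious human-like simulations are possible
p_created = 0.5   # premise two: if possible, many of them will be created

# If both premises hold, most beings are sims, so we are probably sims.
p_sim = p_possible * p_created

print(p_sim)  # 0.25 -- at least a 25% chance we're in a simulation
```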
Lucas Perry: Well, one interesting part that seems to feed into the simulation argument is modern work on quantum physics. So we had Joscha Bach on who talked some about this, and I don't know very much about it, but there is this debate over whether the universe is implemented in continuous or discrete quantities. And if the quantities were continuous, then the universe wouldn't be computable. Is that right?
David Chalmers: I'm not quite sure which debate you have in mind, but it certainly is right that if the universe really is doing a real-valued computation, then real-valued computations can only be approximated on finite computers.
Lucas Perry: Right.
David Chalmers: On digital computers.
Lucas Perry: Right. So could you explain how this inquiry into how our fundamental physics works informs whether or not our simulation would be computable?
David Chalmers: I mean, there's many aspects to that question. One thing that some people have actually looked into is whether our world might involve some approximations, some shortcuts. So Zohreh Davoudi and some other physicists have tried to look at the math and say, "Okay, say there was a simulation that took certain shortcuts. How would that show up empirically?" So, okay, that gives an empirical test for whether there are shortcuts in the way our physics is implemented.
I don't think anyone's actually found that evidence yet, but in principle there's some evidence we could get of that. But there is the question of whether our world is fundamentally analog or digital, and if our world is fundamentally analog, with perfectly precise, continuous, real values making a difference to how the universe evolves, yeah, then it can never be perfectly simulated on a finite digital computer. I would still say it can be approximated. And as far as we know, we could be living in a finite approximation to one of those continuous worlds, but yeah, maybe there could eventually be some empirical evidence of that.
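[A toy illustration of the point about approximation, not anything from the book: in a chaotic real-valued system, a finite-precision simulation tracks the true dynamics for a while and then visibly departs, because chaos amplifies rounding error. Here the logistic map stands in for "continuous physics," and rounding each state to six decimal places stands in for a finite digital computer; all names are illustrative.]

```python
def logistic(x: float) -> float:
    # The chaotic logistic map x -> 4x(1 - x), a simple real-valued dynamics.
    return 4.0 * x * (1.0 - x)

exact = approx = 0.3  # same starting state for both trajectories
max_gap = 0.0
for step in range(80):
    exact = logistic(exact)                   # higher-precision reference
    approx = round(logistic(approx), 6)       # "finite computer": 6 decimals
    max_gap = max(max_gap, abs(exact - approx))

# The tiny per-step rounding error roughly doubles each iteration,
# so within a few dozen steps the gap grows to order 1.
print(max_gap)
```

The moral matches what Chalmers says: a finite digital approximation of analog dynamics can be empirically indistinguishable for a long stretch, even though it is not a perfect simulation.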
Of course, the other possibility is we're just running on an analog computer. If our physics is continuous and the physics of the next world up is continuous, maybe there will be analog computers developed with genuinely continuous quantities, and we could be running on an analog computer like that. So I think even if the physics of our world turns out to be perfectly analog and continuous, that doesn't rule out the simulation hypothesis. It just means we're running on an analog computer in the next universe up.
Lucas Perry: Okay. I'm way above my pay grade here. I'm just recalling how Joscha was saying that continuous numbers aren't computable, right? So you would need an analog computer. I don't know anything about analog computers, but it seems to me like they-
David Chalmers: It's hard to program analog computers because they require infinite precision, and finite beings like us are not good at building things with infinite precision. But we could always just set a few starting values randomly and let the analog computation go from there. And as far as I can tell, there's no evidence that we're not living in a simulation that's running on an analog computer like that.
Lucas Perry: I see. So if we discover our fundamental physics to be digital or analog, it seems like that wouldn't tell us a lot about the simulation, just that the thing that's simulating us might be digital or analog.
David Chalmers: In general, discovering things about our ... I mean, the relationship between the physics of our world and the physics of the simulating world is fairly weak, especially if you believe in universal computation: any classical algorithm can be implemented in a vast variety of computers running on a vast variety of physics. But there might be some limits. For example, if our world has a perfectly analog physics, that cannot be simulated on a finite digital computer. It could be simulated on an infinite digital computer, since you can simulate analog quantities with infinite strings of bits, but not on a finite digital computer.
So yeah, discovering that our physics is digital would be consistent with the next universe up being digital, but also consistent with it being analog. Analog worlds can still run digital computers. I mean, it'd be very suggestive if we did actually discover digital physics in our world. I'm sure that would then get a lot of people thinking, "Hey, this is just the kind of thing you'd expect if people are running a digital computer in the next universe up." That might incline people to take the simulation hypothesis more seriously, but it wouldn't really be any kind of demonstration.
Yeah. If we somehow discovered that our physics is perfectly analog, and I don't really know exactly how we could discover that, because at any given point we'll only have a finite amount of evidence, which will always be consistent with this just being a very close approximation, but just say we could discover that our world runs analog physics. Yeah, then that would be inconsistent with this just being a digital simulation in the next universe up, but still quite consistent with it being a simulation running on an analog computer in the next universe up. I don't know how that connects to Joscha's way of thinking about this.
Lucas Perry: Yeah. I'm not sure. I'd love to see you guys talk-
David Chalmers: I hope it's at least consistent.
Lucas Perry: ... about this.
David Chalmers: Has he written about this somewhere?
Lucas Perry: I'm not sure. Lots of podcasts have been talking about it, though.
David Chalmers: Okay, cool.
Lucas Perry: Yeah. So we've gone over a lot here, and it leaves me not really trusting my common sense experience of the world. So pivoting a little bit here, back into the Edenic view of things ... Sorry if I get the word that you used wrong, but it seems like you walk away from that with a view of imperfect realism. Is that right?
David Chalmers: Yeah. Imperfect realism is the perfect term for it. Capital S "Solidity" doesn't exist, but the lower case thing, small s "solidity," does exist. An imperfect analog of what we initially believed in.
Lucas Perry: So how do you see the world now? Any differently? What is the world like to David Chalmers after having written this book? What is a person to you?
David Chalmers: I don't know. I mean, I think there's your everyday attitude towards the world and your theoretical attitude towards the world. And I find my everyday attitude towards the world isn't affected that much by discoveries in philosophy, or in science for that matter. We mostly live in the manifest image. Maybe we even treat it a little bit like the Garden of Eden, and that's fine. But then there's this knowledge of what underlies it or what could underlie it. And that's, yeah, once you start thinking philosophically, that gets mind boggling.
I mean, you don't need to go to the simulation hypothesis or the virtual world to get that reaction. Quantum mechanics is quite enough. Oh my God, we live in this world of the quantum wave function where nothing actually has definite positions, and possibly the wave function is collapsing, or possibly there are many worlds. I mean, boy, it's just mind boggling. It's rather hard to integrate ordinary life with that picture of reality. So most of us just kind of go on living in the manifest image. Yeah, so once I start thinking, "Could we be in a simulation?" it's got a similar kind of separateness, I guess.
Mostly I go on living in the manifest image and don't factor this in. But I mean, it does open up all kinds of possibilities once you start thinking that there is maybe this reality-plus of all these different levels of reality. Like, could it be that someday it might be possible to escape this particular virtual world? Or when we die, does our code sometimes get uploaded by simulators to go hang out back in other levels of reality? Maybe there are naturalized versions of reincarnation or life after death. And I don't want to say this is why I'm thinking about this stuff; it's not for these quasi-religious reasons. But suddenly, possibilities that had seemed very far out to me, like life after death, at least come to seem a little bit closer and more like open possibilities than they'd seemed before. So that's at least interesting.
Lucas Perry: One thing you bring up a bit in your exploration here is God. And all these things that you're mentioning, they seem like science and philosophy coming back to traditionally religious ideas, but through a naturalistic exploration, which is quite interesting. So do you have any different thoughts on God after having written this book?
David Chalmers: It's interesting. I'm not remotely religious, myself. I've always thought of myself as an atheist, but yeah, after writing this book, I'm at least ... There is a version of God that I could at least take seriously. This is the simulator. They did, after all, create the world, this world. They may have a lot of power and a lot of knowledge of this world, as gods are meant to have. On the other hand, they're quite unlike traditional gods. In some ways, the simulator needn't be all good, needn't be particularly wise. Oh, and it also didn't create all of reality; it just created a little bit of reality. Maybe it's a bit like what's sometimes called a demiurge, the local god, the under-boss god who created this world, but wasn't the one in charge of the whole thing.
So yeah, maybe simulators are a bit more like demiurges. More importantly, I don't think I'd be inclined to erect a religion around the simulation idea. Religions come with ethical practices and really changing your way of life. I don't think there's any particular reason to orient our ethics to a simulation. I mean, maybe you can imagine there'd be some practices that if we really believed we were in a simulation, or there's a good chance of that, we should at least start doing some things differently. Maybe some people might want to try and attract the attention of the simulators. I don't know. That's all very speculative. So I don't find myself ...
I think the one moral of all this for me is that ethics and meaning and so on ... actually, you don't get your ethics or your meaning from who created you, or from whether that's a God or a simulator. Ethics and meaning come from within. They come from ourselves, our consciousness, and our interactions.
Lucas Perry: Do you take a line that's similar to Peter Singer in thinking that that is like an objective rational space? Are you a moral realist or anti-realist about those things?
David Chalmers: I tend towards moral anti-realism, but I'm not sure. I find those issues very difficult. Yeah, I can get in the mood where, "Pain is bad," just seems like an absolute fact.
Lucas Perry: Yeah.
David Chalmers: That's just an objective fact. Pain is objectively bad. And then I get at least to some kind of value realism, if not moral realism. In some moods I'll go that way. In other moods, it's just, yeah, it's all a matter of our attitude towards it. Ultimately, it's a matter of what we value. If somebody valued pain, it would be good for them. If they didn't, it wouldn't be. And I can go back and forth. I don't have a fixed view of these matters.
Lucas Perry: Are there any questions that I haven't asked you that you would've liked me to ask you?
David Chalmers: Not especially. You asked a lot of great questions, and there are a million others. But actually, one interesting thing with this book coming out has been getting to do a few of these conversations and seeing all the different questions and different aspects of the book that different people focus on. So, no. I think we've covered a lot of territory here, and these are a lot of cool things to think about.
Lucas Perry: All right. Well, I'm mindful of the time here, David. Thank you so much for all of your time. If people want to check you out, follow you, and get your new book, where are the best places to do that?
David Chalmers: Probably my website, which is consc.net. Consc, the first five letters of consciousness, or just do a search for my name. And then yeah, I've got a page for the book on my website, consc.net/reality, or just search for the name of the book, Reality+. I'm not on Twitter or Instagram or any of those things, unfortunately. Maybe I should be one of these days, but for now, I'm not. But yeah, the book will be available January 25th, I guess. All good book sellers. So I hope some of your listeners might be interested to check it out.
Lucas Perry: All right. We'll include links to all of those places in the description of wherever you might be listening or watching. Thank you so much, David. It's always a pleasure speaking with you. I love hearing about your ideas, and it's really a great book at an important time. I think just before all this VR stuff is about to really kick off, and with the launch of the metaverse. It's really well timed.
David Chalmers: Oh, thanks, Lucas. This was all, yeah, a lot of fun to talk about this stuff with you.