
Bart Selman on the Promises and Perils of Artificial Intelligence

Published
20 May, 2021

  • Negative and positive outcomes from AI in the short, medium, and long-terms
  • The perils and promises of AGI and superintelligence
  • AI alignment and AI existential risk
  • Lethal autonomous weapons
  • AI governance and racing to powerful AI systems
  • AI consciousness

 

Transcript

Lucas Perry: Welcome to the Future of Life Institute Podcast. I'm Lucas Perry. Today's episode is with Bart Selman and explores his views on possible negative and positive futures with AI, the importance of AI alignment and safety research in computer science, facets of national and international AI governance, lethal autonomous weapons, AI alignment and safety at the Association for the Advancement of Artificial Intelligence, and a little bit on AI consciousness.

Bart Selman is a Professor of Computer Science at Cornell University, and previously worked at AT&T Bell Laboratories. He is a co-founder of the Center for Human-Compatible AI, and is currently the President of the Association for the Advancement of Artificial Intelligence. He is the author of over 90 publications with a special focus on computational and representational issues. Professor Selman has worked on tractable inference, knowledge representation, stochastic search methods, theory approximation, knowledge compilation, planning, default reasoning, and the connections between computer science and statistical physics.

And so without further ado, let's get into our conversation with Bart Selman.

So to start things off here, I'm curious if you can share with us an example of a future, or a few futures, that you're really excited about, and an example of one or a few that you're quite nervous about or fear most.

Bart Selman: Okay. Yeah. Thank you. Thank you for having me. Let me start with an example of a future, in the context of AI, that I'm excited about: the new capabilities that AI brings have the potential to make life for everyone much easier and much more pleasant. I see AI as complementing our cognitive capabilities. So I can envision household robots or smart robots that assist people in their houses, letting them live independently longer, including doing kinds of work that are monotonous and not that exciting for humans to do. So, AI has the potential to complement our capabilities and to hugely assist us in many ways, including in areas you might not have thought of, like, for example, policymaking and governance. AI systems are very good at thinking in high-dimensional terms, about trade-offs between many different factors.

For humans, it's hard to actually think through a multi-dimensional trade-off. We tend to boil things down to one or two central points and argue about a trade-off in one or two dimensions. Most policy decisions involve 10 or 20 different criteria that may conflict or be somewhat contradictory, and in exploring that space, AI can assist us, I mean, in finding better policy solutions and better governance for everybody. So I think AI has this tremendous potential to improve life for all of us, provided that we learn to share these capabilities and that we have policies and mechanisms in place to make this a positive experience for humans. And to draw a parallel with human labor: machines have freed us from heavy-duty physical labor. AI systems can help us with the sort of monotonous cognitive labor, or, as I mentioned, provide household robots and other tools that will make our lives much better. So that's the positive side. Should I continue with the negative?

Lucas Perry: So before we get into the negative, I'm curious if you could explain a little more specifically what these possible positive futures look like on different timescales. You explained AI assisting with cognitive capabilities and with monotonous jobs, and over the coming decades it will begin to occupy more of these roles. But there's also the medium term, the long term, and the deep future in which the positive fruits of AI may come to bear.

Bart Selman: Yeah. So, that's an excellent point. One thing about any transition is that, as I say, these new cognitive capabilities that will help us live better lives will also disrupt the labor force and the workforce. And this is a process that I can see playing out over the next five, 10, maybe 15 years, a significant change in the workforce. And I am somewhat concerned about how that will be managed, because basically I feel we are moving to a future where people will have more free time. We'd have more time to be creative, to travel, and to live independently. But of course, everybody needs to have the resources to do that. So there is an important governance issue of making sure that, in this transition to a world with more leisure time, we find ways of having everybody benefit from this new future.

And this is really, I think, a 5-, 10-, 15-year process that we're facing now, and it's important that it is done right. Further out in the future, my own view of AI is that machines will excel at certain specific tasks, as we've seen very much with AlphaGo and AlphaZero. They are very good at specific tasks, and those systems will come first: self-driving cars, specialized robots for assisting humans. So we'll first get these specialized capabilities. Those are not yet general AI capabilities. That's not AGI. The AGI future, I think, is more like 20, 25 years away.

So we first have to find ways of incorporating these specialized capabilities, which are going to be exciting. As a scientist, I already see AI transforming the way we approach science and do scientific discovery, really complementing our own ways of working. I hope people get excited in areas like creativity, for example, where computers or AI systems bring a new dimension to these types of human activities, one that will actually be exciting for people to be part of. And that's an aspect that has started to emerge, but people are not fully aware of it yet.

Lucas Perry: So we have AI increasingly moving its way into specialized, kind of narrow domains. And as it begins to proliferate into more and more of these areas, it's displacing the traditional human solutions for these areas, which basically all just involve human labor. So there's an increase in human leisure time. And then what really caught my attention was that you said AGI is maybe 20, 25 years away. Is that your sense of the timeline where you start to see real generality?

Bart Selman: Yeah. That's, in my mind, a reasonable sense of the timeline, but we cannot be absolutely certain about that. For AI researchers it is a very interesting time. The hardest thing at this point in the history of AI is to predict what AI can and cannot do. I've learned as a professor never to say that deep learning can't do something, because every time, it surprises me and it can do it a few years later. So we have a certain sense that, oh, the field is moving so fast that everything can be done. On the other hand, in some of my research, I look at some of these advances, and I can give you a specific example. My own research is partly in planning, which is the process of how humans plan out activities.

They have certain goals, and then they plan: what steps should I take to achieve those goals? And those can be very long sequences of actions to achieve complicated goals. So we worked on sort of a puzzle-style domain called Sokoban. Most people will not be familiar with it, but it's a kind of game modeled after workers in a warehouse who have to move around boxes. There is a little grid world, and you push around the boxes to get them from a certain initial state to goal states somewhere else on the grid. And there are walls, and there are corners, and all kinds of things you have to avoid. What's amazing about the planning task is that for traditional planning, this was really a very challenging domain. We picked it because traditional planners could do maybe a hundred steps, a hundred pushes as we call them, but that was about it.

There were puzzles available on the web that required 1500 to 2000 steps. So it was way beyond any automated program, and AI researchers had worked on this problem for decades. We of course used reinforcement learning, RL, with some clever curriculum training, some clever forms of training. And suddenly we could solve these 1500- to 2000-step Sokoban puzzles. We were, and still are, very excited about that capability. And then we started looking at what the deep net actually knew about the problem. Our biggest surprise there was that although the system had learned very subtle things, things that are beyond human capabilities, it was also totally ignorant about other things that are trivial for humans. In the Sokoban puzzle, you don't want to push your box into a corner, because once it's in a corner, you can't get it out. This is something that a human player discovers in the first, I would say, the first minute of pushing some boxes around.

We realized that, I guess, the deep learning network never conceptualized the notion of a corner. So it would only learn about corners if it had seen something being pushed into a particular corner. And if it had never seen that corner being used or encountered, it would not realize it shouldn't push the box in there. So we realized that this deep net had a capability that is definitely superhuman, in terms of being able to solve these puzzles, but also holes in its knowledge of the world that were very surprising to us. And that's, I think, part of what makes AI at this time very difficult to predict. Will these holes be filled in as we develop AI systems, so that they also get these obvious things right?

Or will AI be at this amazing level of performance, but do things in ways that are, to us, quite odd? So I think there are hard challenges that we don't quite know how to fill in, but because of the speed with which things are developing, it's very hard to predict whether they will be solved in the next two years or will take another 20 years. But I do want to stress, there are surprising things about what I call "the ignorance of the learned models" that surprise us humans. Yeah.
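To make the corner example concrete, here is a minimal, hypothetical sketch in Python. It is not code from the Sokoban research discussed above, and the grid encoding is an assumption made purely for illustration, but it shows how compactly the human insight "never push a box into a non-goal corner" can be written down, in contrast to the learned policy that apparently never formed the concept.

```python
# Hypothetical sketch of the "corner deadlock" rule a human Sokoban player
# internalizes within a minute of play. Grid encoding (an assumption for
# this illustration): '#' wall, '.' floor, 'G' goal square.

WALL, GOAL = "#", "G"

def is_corner_deadlock(grid, row, col):
    """Return True if placing a box at (row, col) creates an unrecoverable
    deadlock: two perpendicular adjacent walls and no goal on that square."""
    if grid[row][col] == GOAL:
        return False  # a box parked on a goal square is fine
    up    = grid[row - 1][col] == WALL
    down  = grid[row + 1][col] == WALL
    left  = grid[row][col - 1] == WALL
    right = grid[row][col + 1] == WALL
    # A corner is any pair of perpendicular walls around the cell.
    return (up or down) and (left or right)

if __name__ == "__main__":
    grid = [
        "#####",
        "#...#",
        "#.G.#",
        "#...#",
        "#####",
    ]
    print(is_corner_deadlock(grid, 1, 1))  # True: floor square in a corner
    print(is_corner_deadlock(grid, 2, 2))  # False: it's a goal square
    print(is_corner_deadlock(grid, 2, 1))  # False: only one adjacent wall
```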

Lucas Perry: Right. There are ways in which models fail to integrate really rudimentary parts of the world into their understanding that lead to failure modes that even children don't encounter.

Bart Selman: Yeah. So the problem is, when we as humans interact with AI systems or think about AI systems, we anthropomorphize. We think that they think in a way similar to the way we do things, because that's sort of how we look at complex systems; even animals are anthropomorphized. So we assume that things have to be done in a way similar to our own thinking, but we're discovering that they can do things very differently and leave out pieces of knowledge that are trivial to us.

I have discussions with my students where I point that out, and they're always somewhat skeptical of my claim. They say, "well, it should know that somewhere." And then we actually do experiments, and no: if it has never seen a box go into that corner, it will just put it in the corner the next time. So they actually have to see it to believe it, because it sounds implausible: how can you be the world's best Sokoban solver and not know what a human learns in the first minute? But that's the surprise. That also makes the field exciting, but it makes the challenges of superintelligence and general intelligence, and the impact on AI safety, a particularly challenging topic.

Lucas Perry: Right. So, predicting an actual timeline seems very difficult, but if we don't go extinct, then do you see the creation of AGI and superintelligence as inevitable?

Bart Selman: I do believe so. Yes. I do believe so. The path I see is that we will develop these specialized capabilities in more and more areas, in almost all areas, and then they start merging together into systems that do two or three or four, and then a thousand, specialized tasks. And so generality will emerge almost inevitably. My only hesitation is: what could go wrong? Why might it not happen? Perhaps if there is some aspect of cognition that is really beyond our capabilities of modeling. But I think that is unlikely. I think one of the surprises in the deep net world and the neural network world is that, before the deep learning revolution, if you can call it that, before it happened, a lot of people looked at artificial neural networks as being too simplistic compared to real neurons.

So there was this sense that, yeah, these little artificial neural networks are nice models, but they're way too simplistic to capture what goes on in the human brain. The big surprise is that apparently that level of simplification is okay, that you can get the functionality of a much more complex, real neural network. You get that level of performance and complexity using much simpler units. So that convinced me that yes, with the digital approximations we make, the simplifications we make, as long as we connect things in sufficiently complex networks, we get emergent properties that match our human brain capabilities. So that makes me think that at some point we will reach AGI. It's just a little hard to say exactly when, and I think it may not matter that much exactly when, because we'll have challenges in terms of AI safety and value alignment that are already occurring today, before we have AGI. So we have to deal with challenges right from the start; we don't have to wait for AGI.

Lucas Perry: So in this future where we've realized AGI, do you see superintelligence as coming weeks, months, or years after the invention of AGI? And what is beautiful to you about these futures in which we have realized AGI and superintelligence?

Bart Selman: Yeah. So, what's exciting about these possible futures? I mean, there are certain risks, that the superintelligence would go against humans, but I don't think that is inevitable. I think these systems will show us aspects of intelligence that to us will look surprising, but will also be exciting. In some of my other work, we look at mathematical theorem proving, and we look at AI systems for proving new open conjectures in mathematics. The systems clearly do a very different kind of mathematics than humans do, with very different kinds of proofs, but it's exciting to see a system that can check a billion-step proof in a few seconds and generate a billion-step proof in an hour, and to realize that we can prove something to be true mathematically.

So we can find a mathematical truth that is beyond the human brain. But since we've designed the program, and we know how it works, and we use the technology, it's actually a fun way to complement our own mathematical thinking. That's what I see as the positive sense in which superintelligence will actually be of interest to humans to have around, as a complement to us, assuming it will not turn on us. But I think that's manageable. Yeah.

Lucas Perry: So how long after the invention of AGI do you see superintelligence arising, even though it's all kind of vague and fuzzy?

Bart Selman: Yeah. How long... So I think when I think of superintelligence, I think of it more as superintelligent in certain domains. So I assume you are referring to superintelligence as superseding AGI.

Lucas Perry: What I mean is like vastly more intelligent than the sum of humanity.

Bart Selman: I think that's a super interesting question. As we've discussed, I can see capabilities that are vastly more intelligent in areas like mathematical discovery, scientific discovery, thinking about problems with multiple conflicting criteria that have to be weighed against each other. So on particular tasks, I can see superintelligence being vastly more powerful than our own intelligence. On the other hand, there is also the question of in what sense superintelligence would manifest itself. If I had to draw an analogy: if you meet somebody who is way smarter than you are, and everybody meets such a person now and then, I've met a few in my life, these people will impress you about certain things and give you insights, and you think, "Oh, this is exciting." But when you go to have dinner with them and have a good meal, they're just like regular people.

So, superintelligence doesn't necessarily manifest itself in all aspects. It will be surprising in certain kinds of areas and tasks and insights, but I do not believe it will dominate everywhere. If I draw another analogy, and this is a bit unfair to dogs: if you go for dinner with a dog, you will fully dominate all the conversation and be sort of superintelligent compared to the dog.

It's not clear to me that there is an entity that will dominate our intelligence in all aspects. There will be lots of activities, lots of conversations, lots of things we can have with a superintelligent being that are quite understandable, quite accessible to us. So the idea that there will be an entity that dominates our intelligence uniformly, of that I'm not convinced. And that goes back to the question of what human intelligence is, and human intelligence is actually quite general. So there's an interesting question: what is meant by superintelligence? How would we recognize it? How would it manifest itself?

Lucas Perry: When I think of superintelligence, I think of like a general intelligence that is more intelligent than the sum of humanity. And so part of that generality is its capability to run an emulation of like maybe 10,000 human minds within its own general intelligence. And so the human mind becomes a subset of the intelligence of the superintelligence. So in that way, it seems like it would dominate human intelligence in all domains.

Bart Selman: Yeah, what I'm trying to say is I can see that. If you would play a game of chess with such a superintelligence, it would beat you. It would not be fun. If it would do some mathematics with you and show you a proof of Fermat's Last Theorem, the proof would be trivial for the superintelligence. So I can see a lot of specific tasks and domains where the superintelligence would indeed run circles around you and around any human. But how would it manifest itself beyond those individual questions? What I struggle with a little bit is that you have to have the right questions to pose to the superintelligence.

For example, take the question: what should we do about income inequality? A practical problem in the United States. Would a superintelligence necessarily have something superintelligent to say about that? That's not so clear to me, because it's a tough problem, and it may just be as tough for the superintelligence as it is for any human. Would a superintelligent politician suddenly have solutions to all our problems, would it win every debate? Interestingly, I think the answer is probably no. Superintelligence manifests itself on tasks that require a high level of intelligence, like problem-solving tasks, mathematical domains, scientific domains, games. But daily life and governance? That's a little less clear to me. And that's what I mean by going to have dinner with a superintelligence: would you just be sitting there, thinking, "I can't say anything useful about income inequality, because the superintelligence will say much better things about it"? I'm not so sure.

Lucas Perry: Maybe you've both had a glass of wine or two, and you ask the superintelligence, you know, why is there something rather than nothing? Or, what is the nature of moral value? And they're just like...

Bart Selman: What's the purpose of life? I'm not sure the superintelligence is going to give me a better answer to that. So, yeah.

Lucas Perry: And this is where philosophy and ethics and metaethics merge with computer science, right? Because it seems like you're saying there are domains in which AI will become superintelligent, and many of the domains that you listed sounded very quantitative, ones which involve the scientific method and empiricism. Not that these things are necessarily disconnected from ethics and philosophy, but if you're just working with numbers toward a given objective, then there's no philosophy that really needs to be done, if the objective is given. But if you ask about how we deal with income inequality, then the objective is not given. And so you do philosophy about what is the nature of right and wrong, what is good, what is valuable, what is the nature of identity, and all of these kinds of things and how they relate to building a good world. So I'm curious, do you think that there are true or false answers to moral questions?

Bart Selman: Yeah, I think there are clearly wrong answers here. Moral issues are a spectrum to me, and we can probably, as humans, agree on certain basic moral values; it's also a very human kind of topic. So I think we can agree on basic moral values, but the hard part is that we also see, among people and among different cultures, incredibly different views of moral value. So saying which one is right and which one is wrong may actually be much harder than we would like it to be. This comes back to the value alignment problem and the discussions around it. It's a very good research field and a very important research field. But the question always is: whose values? And we now realize that even within a country, people have very different values that are actually hard to understand between different groups of people.

So there is a challenge there that might be uniquely human. It feels like there should be universal truths in morality, think of equality, for example, but I'm a little hesitant, because I'm surprised at how much disagreement I see about these, what I would think are universal truths, that somehow are not universal truths for all people. So that's another complication. And again, if you tie that back to superintelligence: a superintelligence is going to have some position on it, yes or no, but it may not agree with everybody, and there's no uniquely superintelligent position on it in my mind. So that's a whole area of AI and value alignment that is very challenging.

Lucas Perry: Right. So it sounds like you have some intuition that there are universal moral truths, but that intuition is in tension with why there is so much disagreement across different people. So I guess I'm curious about two things. The first is about one thing that you're excited about for the future and about positive outcomes from AGI: is it worlds in which AGI and superintelligence can help assist with moral and philosophical issues, like how to resolve income inequality and truth around moral questions? And the second part of the question is: if superintelligences are created by other species across the universe, do you think that they would naturally converge on certain ethics, whether those ethics are universal truths or relative, game-theoretic expressions of how intelligence can propagate in the universe?

Bart Selman: Yeah, so two very good questions. As for the first one, I am quite excited about the idea that a superhuman level of intelligence, or an extreme level of intelligence, will help us better understand moral judgments and decisions and issues of ethics. I almost feel that humans are a little stuck in this debate. And a lot of that has to do, I think, with an inability to explain clearly to each other why certain values matter and other values should be viewed differently; it's often even a matter of, can we explain to each other what are good moral judgments and good moral positions? So I have some hope that AI systems, smart AI systems, would be better at actually sorting out some of these questions, and then convincing everybody, because in the end, we have to agree on these things. And perhaps these systems will help us find more common ground.

That's a hope I have for AI systems that truly understand our world and are truly capable of understanding, because part of the power of a super smart AI would be understanding many different positions. Maybe something that limits humans in reaching agreement on ethical questions is that we actually have trouble understanding the perspective of another person who has a conflicting position. So superintelligence might be a way of modeling everybody's mind, and then being able to bring about a consensus. I have an optimistic view that there may be some real possibilities there for superintelligence. Your second question, of whether some alien form of superintelligence would come to the same basic ethical values as we may come to? That's possible. I think it's very hard to say, yeah.

Lucas Perry: Yeah, sorry, whether those are ultimate truths, as in facts, or whether they're just relative game theoretic expressions of how agents compete and cooperate in a universe of limited resources.

Bart Selman: Yes, yes. From a human perspective, you would hope there is some universally shared ethical perspective, or ethical view of the world. I'm really on the fence, I guess. I could also see that, in the end, very different forms of life, forms we would hardly even recognize, would basically interact with us via a sort of game-theoretic competition mode, and that because they're so different from us, we would have trouble finding shared values. So I see possibilities for both outcomes. If other life forms share some commonality with our life form, I'm hopeful for a common ground. But that seems like a big assumption, because they could be so totally different that we cannot connect at a more fundamental level.

Lucas Perry: Taking these short and long term perspectives, what is really compelling and exciting for you about good futures from AI? Is it the short to medium term benefits? Are you excited and compelled by the longer term outcomes, the possibility of superintelligence allowing us to spread for millions or billions of years into the cosmos? What's really compelling to you about this picture of what AI can offer?

Bart Selman: Yeah. I'm optimistic about the opportunities, both short term and longer term. I think it's fairly clear that humanity is actually struggling with an incredible range of problems right now: sustainability, global warming, political conflicts. You could almost be quite pessimistic about the human future. I'm not, but these are real challenges. So I'm hopeful that AI will help humanity in finding a better path forward. As I mentioned briefly, even in terms of policy and governance, AI systems may really help us there. So far this has never been done; AI systems haven't been sufficiently sophisticated for that, but in the next five to 10 years, I could see systems starting to help human governance. That's the short term. I actually think AI can have a significant positive impact in resolving some of our biggest challenges.

In the longer term, it's harder to anticipate what the world will look like. But of course, spreading out as a superintelligence and living on, in some sense, spreading out across the universe and over many different timescales, having AI continue the human adventure, is actually sort of interesting: we wouldn't be confined to our little planet. We would go everywhere. We'd go out there and grow. So that could actually be an exciting future that might happen. It's harder to imagine exactly what it would be, but it could be quite a human achievement. In the end, whatever happens with AI, it is, of course, a human invention. Science and technology are human inventions, and that's almost what we can be most proud of, in some ways, the things that we actually did figure out how to do well, aside from creating a lot of other problems on the planet. So we could be proud of that.

Lucas Perry: Is there anything else here in terms of the economic, political and social situations of positive futures from AI that you'd like to touch on, before we move on to the negative outcomes?

Bart Selman: Yeah. I guess the main thing, I'm hoping that the general public and politicians will become more aware, and will be better educated about the positive aspects of AI, and the positive potential it has. The range of opportunities to transform education, to transform health care, to deal with sustainability questions, to deal with global warming, scientific discovery, the opportunities are incredible.

What I would hope is that those aspects of AI will receive more attention from the broader public, and from politicians and journalists. It's so easy to go after the negative aspects. Those negative aspects and the risks have received disproportionate attention compared to the positive aspects. So that's my hope.

As part of the AAAI organization, the professional organization for artificial intelligence, part of our mission is to inform Washington politicians of these positive opportunities, because we shouldn't miss out on those. That's an important mission for us, to make that clear, that there's something to be missed out on, if we don't take these opportunities.

Lucas Perry: Yeah. Right. There's the sense that all of our problems are basically subject to intelligence. As we begin to solve intelligence, and what it means to be wise and knowing, there's nothing in the laws of physics preventing us from solving any problem that is, in principle, solvable within the laws of physics. It's like intelligence is the key to anything that is literally possible to do.

Bart Selman: Yeah. Underlying that is rational thought: our abilities to analyze things, to predict the future, to understand complex systems. That rationality underlies the scientific thought process, humans have excelled at it, and AI can boost it further. That's an opportunity we have to grab, and I hope people recognize that more.

Lucas Perry: I guess, two questions here, then. Do you think existential risk from AI is a legitimate threat?

Bart Selman: I think it's something that we should be aware of, that it could develop as a threat, yeah. The timescale is a little unclear to me, how near that existential threat is, but we should be aware that there is a risk of runaway intelligent systems that are not properly controlled. Now, I think the problems will emerge much more concretely and earlier in, for example, cybersecurity, and AI systems that break into computer networks, which are hard to deal with. So there will be very practical threats to us that will take most of our attention. But the overall existential threat, I think, is indeed also there.

Lucas Perry: Do you think that the AI alignment problem is a legitimate, real problem, and how would you characterize it, assuming you think it's a problem?

Bart Selman: I do think it's a problem. What I like about the term is that it makes it crisp: if we train a system for a particular objective, then it will learn how to be good at that objective, but in learning how to do that, it may violate basic human principles, basic human values. I think that, as a general paradigm statement, we should think about what happens with systems that we train to optimize a certain objective, that they need to achieve it in a way that aligns with human values, and that is a very fundamental research question and a very valid question. In that sense, I'm a big supporter, in the research community, of taking the value alignment problem very seriously.

As I said before, there is some hesitation about how to approach the problem. I think, sometimes, the value alignment folks gloss over this issue of what the common values are, and whether there are any common values. Solving value alignment assumes, "Okay, well, when we get the right values in, we're all done." What worries me a little bit in that context is that these common values are possibly not as common as we think they are. But that's the issue of how to deal with the problem. The problem itself, as a research domain, is very valid. As I said early on with the little Sokoban example, it is an absolutely surprising aspect of the AI systems we train, how they can achieve incredible performance, but do it while not knowing certain things that are obvious to us, in some very nonhuman ways. That's clearly coming out in a lot of AI systems, and it's related to the value alignment problem. The fact that we can achieve a super high level of performance, even when we train carefully with human-generated training data and things like that, and the system still can find ways of doing things that are very nonhuman, and potentially very non-value-aligned, makes it even more important to study the topic.

Lucas Perry: Do you think that you can translate the Sokoban example, the pushing of boxes into corners, into an expression of the alignment problem, like imagining that pushing boxes into corners was morally abhorrent to humans?

Bart Selman: Yes. Yeah, that's an interesting way of putting it. It's a domain, a toy domain of course, where there are certain truths that are obvious to us. In that case, pushing a box into a corner is not a moral issue, but it's definitely something that is obvious to us. If you replace it with some moral truth that is obvious to us, it is an illustration of the problem. It's an illustration that when we train a system, and even if you think of, let's say, bringing up a child or a human learner, you have a model of what that system will learn, what that human learns, and how the human will make decisions. The Sokoban example is sort of a warning that an AI system will learn the performance, the test, so it will pass the final test, but it may do so in ways that you would never have expected it to.

With the corner example, it's almost a little strange to me to realize that, oh, you can solve this very hard Sokoban problem without ever knowing what a corner is. And it literally doesn't. It's the surprise of getting to human-level performance, and missing, and not quite understanding, how that's done. Another, for me, very good example is machine translation systems. We see incredible performance from machine translation systems, where they basically map strings in one language to strings in another, English to Chinese, or English to French, having discovered a very complex transformation function in the deep net, trained on hundreds of thousands of sentences, but doing it without actually understanding. So it can translate an English text into a French text or a Chinese text at a reasonable level, without having any understanding of what the text is about. Again, to me, it's that nonhuman aspect. Now, researchers might push back and say, "Well, the network has to understand something about the texts, deep in the network."

I actually think that we'll find out that the network understands next to nothing about the text. It has just found a very clever transformation that, when we started working on natural language translation, we initially didn't think would exist. But I guess it exists, and you can find it with a gradient descent deep network. Again, it's an example of a human-level cognitive ability achieved in a way that is very different from the way we think of intelligence. That means, when we start using these systems, people in general are not aware that their machine translation app has no idea what they're talking about.
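As a small illustration of the point, here is a hedged sketch, an assumption for this discussion rather than a description of the specific systems mentioned above, using the open-source Hugging Face transformers library and one of its pretrained English-to-French models. The system is, in effect, a learned string-to-string transformation: it returns a fluent translation, but there is no separate, inspectable representation of what the text is about that you could query.

```python
# Illustrative sketch only: a pretrained translation model as a learned
# string-to-string mapping. Assumes the Hugging Face "transformers" library
# (and its dependencies) is installed; "t5-small" is one publicly available
# model that supports English-to-French translation.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")

text = "The box is stuck in the corner and cannot be moved."
result = translator(text)
print(result[0]["translation_text"])
# A fluent French sentence comes out, produced by a transformation learned
# from large numbers of sentence pairs -- not by any queryable model of
# what the sentence is about.
```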

Lucas Perry: So, do you think that there's an important distinction here to be made between achieving an objective, and having knowledge of that particular domain?

Bart Selman: Yes, yes. I think that's a very good point, yeah. By boiling tasks in AI down too much to an objective, in machine learning the objective is to do well on the test set, by boiling things down too much to a single measurable objective, we are losing something. We're losing the underlying knowledge, the way in which the system actually achieves it.

We're losing an understanding, and we're losing attention to that aspect of the system. That's why interpretability of deep nets has definitely become a hot area.

It's trying to get back to some of that issue: what's actually being learned here? What's actually in these systems? But if you focus just on the objective and you get your papers published, you're actually not encouraged to think about that.

Lucas Perry: Right. And there's the sense, then, also, that human beings have many, many different objectives and values that all exist simultaneously. So when you optimize for one in a kind of unconstrained way, it will naturally exploit the freedom in the other areas of things that you care about, in order to maximize that particular objective. That's when you begin to create lots of problems for everything else that you value and care about.

Bart Selman: Yeah, yeah. No, exactly. That's the single-objective problem. Actually, you lay out a potential path, which is saying, "Okay, I should not focus on a single-objective task. I actually have to focus on multiple objectives."

And I would say, go one step further. Once you start achieving objectives, or sets of objectives, and your system performs well, you actually should understand, to some extent, at least, what knowledge is underlying, what is the system doing, and what knowledge is it extracting or relying on, to achieve those objectives? So that's a useful path.
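To make the contrast concrete, here is a minimal, hypothetical sketch; the objective names and weights are invented purely for illustration and are not from the conversation. It contrasts scoring a system on a single measurable objective with a crude multi-objective score that folds in other things we care about, so that exploiting the unmeasured dimensions is no longer free. Selman's further point, that we should also understand what knowledge the system relies on, goes beyond anything a scoring function alone can capture.

```python
# Hypothetical illustration of single- versus multi-objective scoring.
# The objective names and weights below are invented for this sketch.

def single_objective_score(outcome):
    # The classic setup: optimize one number; everything else is invisible
    # to the optimizer.
    return outcome["engagement"]

def multi_objective_score(outcome, weights=None):
    # One crude remedy: fold several things we care about into the score,
    # so behavior that harms the unmeasured dimensions is penalized.
    weights = weights or {"engagement": 1.0, "misinformation": -2.0, "wellbeing": 0.5}
    return sum(w * outcome[k] for k, w in weights.items())

if __name__ == "__main__":
    outcome = {"engagement": 10.0, "misinformation": 3.0, "wellbeing": -1.0}
    print(single_objective_score(outcome))  # 10.0 -- looks great
    print(multi_objective_score(outcome))   # 3.5 -- same behavior, penalized
```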

Lucas Perry: Given this risk of existential threat from AI, and also, the AI alignment problem as its own kind of issue, which, in the worst of all possible cases, leads to existential risk. What is your perspective on futures that you fear, or futures that have quite negative outcomes from AI, in particular, in the light of the risk of existential threat, and then, also, the reality of the alignment problem?

Bart Selman: Yeah, I think the risk is that we continue on a path of designing systems with a single objective in mind, just measuring the achievement there, and ignoring the alignment problem. People are starting to pay attention to it, but paying attention to it and actually really solving it are two different things. There is a risk that these systems just become so good and so useful, and so commercially valuable, that the alignment problem gets pushed to the background as not so relevant, as something we don't have to worry about.

So I think that's the risk that AI is struggling with, and it's amplified a little by commercial interests. A clear example is the whole social network world, and how that has spread fake news and gotten different groups of people to think totally different things and to believe totally different facts. In that, I see a little warning sign for AI. Those networks are driven by tremendous commercial interests, and it's actually hard for society to say there's something wrong about these things, and maybe we should not do it this way. So that's the risk: it works too well to actually push back and say, "We have to take a step back and figure out how to do this well."

Lucas Perry: Right. So you have these commercial interests, which are aligned with profit incentives, and attention becomes the variable that is being captured for profit maximization. So attention becomes this kind of single objective that these large tech companies are training their massive neural nets and algorithms to capture as much of as possible from people. You mentioned issues with information.

And so people are more and more becoming aware of the fact that if you have these algorithms that are just trying to capture as much attention as possible, then things like fake news, or extremist news and advertising, are quite attention-capturing. I'm curious if you could explain more of your perspective on how the problem of social media algorithms attempting to capture, and also commodify, human attention, as a kind of single objective that commercial entities are interested in, represents the alignment problem?

Bart Selman: Yeah, so I think it's a very nice analogy. First, I would say that, to some extent, the algorithms that try to maximize the time spent online, basically capturing the most attention, are not particularly sophisticated. They're actually very basic: you can sample little TikTok videos, see how often they are watched by some subgroup, and if they're watched a lot, you give them out more; if they're not watched, you start giving them out less. So the algorithms are actually not particularly sophisticated, but they do represent an example of what can go wrong with this single-objective optimization.
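The kind of simple feedback loop described here can be sketched in a few lines; the following is a hypothetical illustration, not any platform's actual algorithm, and the video names and numbers are invented. It just serves more of whatever gets watched, which is exactly the single-objective dynamic under discussion.

```python
import random

# Hypothetical sketch of a simple engagement-maximizing loop: sample videos,
# observe how often a group watches them, and shift future exposure toward
# whatever got watched. Not any platform's actual algorithm.

def update_exposure(exposure, watch_counts, serve_counts, boost=1.1, decay=0.9):
    """Boost items with above-average watch rate, decay the rest."""
    rates = {v: watch_counts[v] / max(serve_counts[v], 1) for v in exposure}
    avg = sum(rates.values()) / len(rates)
    return {v: exposure[v] * (boost if rates[v] > avg else decay) for v in exposure}

if __name__ == "__main__":
    videos = ["cat_clip", "outrage_clip", "news_clip"]
    exposure = {v: 1.0 for v in videos}               # start with equal exposure
    watch_prob = {"cat_clip": 0.3, "outrage_clip": 0.6, "news_clip": 0.2}

    for _ in range(20):                               # 20 rounds of feedback
        served = {v: 100 for v in videos}
        watched = {v: sum(random.random() < watch_prob[v] for _ in range(100))
                   for v in videos}
        exposure = update_exposure(exposure, watched, served)

    # The single objective wins: whatever is watched most, however lurid,
    # ends up dominating the feed.
    print(sorted(exposure.items(), key=lambda kv: -kv[1]))
```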

What I find intriguing about it is that it's not that easy to fix, I think. Because the companies' business model, of course, is user engagement, is advertising, so you would have to tell the companies not to make as much money as they could. If there were an easy solution, it would have happened already. I think we're actually in the middle of trying to figure out whether there is a balance between making profits from a particular objective and societal interests, and how we can align those. It's a value alignment problem between society and the companies that profit from these systems. Now, I should stress, and this is, I think, what makes the problem intriguing: there are incredible positive aspects to social networks, to people exchanging stories and interacting. That, I think, is what makes it complex. It's not that it's only negative, it's not. There are tremendous positive sides to having interesting social networks and exchanges between people. People, in principle, could learn more from each other.

Of course, what we've seen is, strangely, that people seem to listen less to each other. Maybe it's too easy to find people who think the same way as you do, and the algorithms encourage that. In many ways, the problems with the social networks and the single-objective optimization are a good example of a value alignment challenge. It shows that finding a solution to that will probably require way more than just technology. It will require society, government, and companies to come together and find a way to manage these challenges. It will not be an AI researcher in an office who finds a better algorithm. So it is a good illustration of what can go wrong. To me, it's a good illustration in part because people didn't expect this, actually. They saw the positive sides of these networks bringing people closer together, and no one had actually thought of fake news, I think. It's something that emerged, and it shows how technology can surprise you. That's, of course, in terms of AI, one of the things we have to watch out for: the unexpected things that we did not think would happen, yeah.

Lucas Perry: Yeah, so it sounds like the algorithms that are being used are simpler than I might have thought, but I guess maybe that seems like it accounts for the difficulty of the problem, if really simple algorithms are creating complete chaos for most of humanity.

Bart Selman: Yeah. No, no, exactly. I think that that's an excellent point. So yeah, you don't have to create very complicated ... You might think, "Oh, this is some deep net doing reinforcement learning."

Lucas Perry: It might be closer to statistics that gets labeled AI.

Bart Selman: Yeah. Yeah, it gets labeled AI, yeah. So it's actually just plain old simple algorithms, that now do some statistical sampling, and then amplify it. But you're right, that maybe the simplicity of the algorithm makes it so hard to say, "Don't do that."

It's like, if you run a social network, you would have to say, "Let's not do that. Let's spread the posts that don't get many likes." That's almost against your interests. But it is an example where the power comes partly, of course, from the scale on which these things happen.

With the social networks, what I find interesting is why it took a while before people became aware of this phenomenon: because everybody had their own personalized content. There was no shared single news channel, or something like that, where there's one news channel, everybody watches it, and then you see what's on it.

I have no idea what's in the newsfeed of the person who's sitting next to me. So there were also moments like, "Ah, I didn't know you got all your news articles with a certain slant."

So not knowing what other people would see, and having a huge level of personalization, was another factor in letting this phenomenon go unnoticed for quite a while. But luckily, people are now at least aware of the problem. We haven't solved it yet.

Lucas Perry: I think two questions come up for me. One thing that I liked that Yuval Noah Harari has said is, he's highlighted the importance of knowledge and awareness and understanding in the 21st century, because, as you said, this isn't going to be solved by someone in Big Tech creating an algorithm that will perfectly ... capture the collective values of all of the United States or planet Earth, and how the content should be ethically distributed to everyone. It requires some governance, as you said, but then also some degree of self-awareness about how the technology works, and how your information is being biased and constrained, and for what reasons. The first question is, I'm curious how you see the need for collective education on technology and AI issues in the 21st century, so that we're able to navigate it as people become increasingly displaced from their jobs and it begins to really take over. Let's just start there.

Bart Selman: So, I think that's a very important challenge that we're facing, and I think education of everyone is a key issue there. AI, or these technologies, should not be presented as magic boxes. It's much better for people to get some understanding of these technologies, and I think that's possible in our educational system. It has to start fairly early, so that people get some idea of how AI technologies work. And most importantly, perhaps, people need to start understanding better what we can and cannot do, and what AI technologies are about. A good example to me is something like the data privacy initiative in Europe, which I think is a very good initiative.

But, for example, there's a detail in it where you have a right, I'm not sure whether it's part of the law, but there are definitely discussions about how you have a right to get an explanation of a decision by an AI system. So there's a right to an explanation. What I find interesting about it is that it sounds like a very good thing to have, until you've worked with AI systems and machine learning systems and you realize you can make up pseudo-explanations pretty easily. You can actually ask your system to explain a decision without using the word gender or race, and it will come up with a good-sounding explanation.

So the idea that a machine learning algorithm has a crisp explanation that is the true explanation of the decision is actually far from trivial, and such a requirement can easily be circumvented. It's an example, to me, of policymakers coming up with regulations that sound like they're making progress, but that miss something about what AI systems can and cannot do. That's another reason why I think people need much better education and insight into AI technologies, and should at least hear from different perspectives about what's possible and what's not possible.
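To illustrate how such an "explanation" can sidestep the attribute that actually drives a decision, here is a small hedged sketch on synthetic data; the scenario, feature names, and numbers are invented, and it assumes NumPy and scikit-learn. The protected attribute is never given to the model, so a feature-weight "explanation" cannot mention it, yet the decisions still track it through a correlated proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch on synthetic data: the model never sees the protected
# attribute, so its feature-weight "explanation" cannot mention it, but a
# correlated proxy feature carries the bias anyway.

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)             # protected group label (withheld)
proxy = protected + rng.normal(0, 0.3, n)     # strongly correlated proxy
income = rng.normal(0, 1, n)                  # a legitimate feature

# Historical decisions that in fact depended on the protected attribute.
y = (0.5 * income - 1.5 * protected + rng.normal(0, 0.5, n) > -0.75).astype(int)

X = np.column_stack([income, proxy])          # protected attribute not included
model = LogisticRegression().fit(X, y)

print("'explanation' (weights): income=%.2f, proxy=%.2f"
      % (model.coef_[0][0], model.coef_[0][1]))

# The disparity survives even though the explanation never mentions the
# protected attribute.
pred = model.predict(X)
for g in (0, 1):
    print("group", g, "approval rate:", round(pred[protected == g].mean(), 2))
```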

Lucas Perry: So, given the risk of AI and algorithms increasingly playing a role in society, doing this single-objective optimization, and humanity having to collectively face the negative outcomes and negative externalities from widely deployed algorithms that are single-objective maximizing: in light of this, what are the futures you most fear in the short term, 5, 10, 15, 20 years from now, where we've really failed at AI alignment and at working on these ethics issues?

Bart Selman: Yeah, so one thing that I do fear is increased income inequality, and it's as simple as that: the companies that are the best at AI, that have the most data, will get such an advantage over other organizations that the benefits will be highly concentrated in a small group of people. And that, I think, is real, because AI technology in some sense amplifies your ability to do things. It's like in finance: if you have a good AI trading program that can mine text and a hundred or a thousand different indicators, you can build a very powerful financial trading firm, and of course trading firms are working very hard on that, but it concentrates a lot of the benefits in the hands of a small group of people. That, I actually think, is in my mind the biggest short-term risk of AI technology.

It's a risk any technology has, but I think AI amplifies it. So that has to be managed, and that comes back to what I mentioned fairly early on about the benefits of AI: it has to be ensured that it will benefit everyone, maybe not all to the same extent, but at least everyone should benefit to some extent, and that's not automatically going to happen. So that's a risk I see in the development of AI. And then there are more dramatic risks. In the short term, cybersecurity issues and smart attacks on our infrastructure by AI programs could be quite dangerous, and sophisticated deepfakes too. There are some specific risks that we have to worry about because they are going to be accelerated by AI technology. And then there's, of course, the military autonomous weapons risk.

There's an enormous pressure, since it's a competitive world, to develop systems that use as much automation as possible. So it's not so easy to tell a military or a country not to develop autonomous weapon systems. I'm really hoping that people start to realize, and this is again partly an educational issue for people, the voters basically, that there is a real risk there, just like nuclear weapons were a real risk, and we had to get together to make agreements about at least the management of nuclear weapons. So we have to have agreements, global agreements, about autonomous weapons and smart weapons, about what can be developed or what should at least be controlled somehow, and that will benefit all the players. That's one of the short-term risks I see.

Lucas Perry: So if we imagine in the short term that there's just all of these algorithms, proliferating that are single objective maximizing, that are aligned with whatever corporation that is using them, there is a lack of international agreement on autonomous weapons systems. Income inequality is far higher due to the concentration of power in particular individuals who control vast amounts of AI. So, if you have the wealth to accumulate AI, you begin to accumulate most of the intelligence on earth, and you can use that to create robots or use robotics so that you're no longer dependent on human labor. So there's increase in income and power inequality, and lack of international governance and regulation. Is that as bad as the world gets in the short term? Or is there anything else that makes it even worse?

Bart Selman: No, I think that's about as bad as it gets. And I assume there would be a very strong reaction in almost every country from the regular person, the voter, the person in the street. There would be a strong reaction to that. And it's real.

Lucas Perry: So, is that reaction though, possible to be effective in any way if lethal autonomous weapons have proliferated?

Bart Selman: Well, lethal autonomous weapons, yeah. So there are two different aspects. One aspect is what happens within a country: do the people accept extreme levels of inequality, income inequality, and power distributions? I think people will push back, and there will be a backlash against that. For lethal autonomous weapons, when they start proliferating, I just have some hope that countries will realize that that is in nobody's interest, that countries are able to manage risks that are unacceptable to everyone. So I'm hopeful that in the area of lethal autonomous weapons, we will see a movement by countries to say, "Hey, this is not going to be good for any one of us."

Now, I'm being a little optimistic here, but with nuclear weapons, we did see, and it's always a struggle and remains a struggle today, that so far, countries have managed these risks reasonably well. It's not easy, but it can be done. And I think it's partly done because everybody realizes nobody will be better off if we don't manage these risks. With lethal autonomous weapons, I think there first has to be a better understanding that these are real risks, and that if you let it get out of hand, like letting small groups develop their own autonomous weapons, for example, that could be very risky to the global system. I'm hoping that countries will realize this and start developing a strategy to manage it, but it's a real risk. Yeah.

Lucas Perry: So should things like this come to pass, or at least some of them, in the medium to long-term, what are futures that you fear in the time range of fifty to a hundred years or even longer?

Bart Selman: Yeah, so the lethal autonomous weapons risk could be just as bad as nuclear weapons being used at some point, so that could wipe out humanity. That's the worst-case scenario, that we would go down in flames. There are some other scenarios, and this is more about the inequality issue, where a relatively small group of people grabs most of the resources, enabled to do so by AI technology, and the rest can live reasonable lives but are limited by their resources.

That's, I think, a somewhat dark scenario that I could see happen if we don't pay attention to it right now. That could play out in 20, 30 years. Again, one thing that's a little difficult to predict is how fast the technology will grow, and you have to combine it with advances in biology and medicine. I'm always a little optimistic. We could be living in a very different and very positive world too, and that's what I'm hoping we'll choose. So I am staying away a little bit from too dark a scenario.

Lucas Perry: So, a little bit about AI alignment in particular. I'm curious, it seems like you've been thinking about this since at least 2008, or perhaps even earlier, you can let us know. How have your views shifted and evolved? I mean, it's been, what, about 13 years?

Bart Selman: Yeah. No, very good question. So in 2008, Eric Horvitz and I co-chaired a AAAI presidential panel on the risks of AI. It's very interesting, because at that time, this was before the real deep learning revolution, people saw some concerns, but, and this was a group of about 30 or 40 AI researchers, a very, very good group of people, there was a general consensus that it was almost too early to worry about value alignment and the risks of AI. And I think it was true that AI was still a very academic discipline, and talking about, oh, what if this AI system starts to work, and then people start using it, and what's going to happen, seemed premature and was premature at the time. But it was good, I think, for people to get together and at least discuss the issue of what could happen. Now, that changed really dramatically over the last 10 years, and particularly the last five years.

And that's in part thanks to people like Stuart Russell and Max Tegmark, who basically brought these concerns about AI systems to the forefront, combined with the fact that we see the systems starting to work. Now we see these incredible investments and companies really going after AI capabilities, and suddenly these questions that were quite academic early on are now very real, and we have to deal with them and think about them. And the good thing is, if I look at, for example, NSF and the funding in the United States, and around the world, actually also in Europe and in China, people are starting to fund AI safety, AI ethics, and work on value alignment. You see it in conferences, and people are starting to look at those questions. So I think that's the positive side.

So I'm actually quite encouraged by how much was achieved in a fairly short time. You know, FLI played a crucial role in that too, in bringing awareness to the AI safety issues. And now, among most AI researchers, maybe not all, but most, these are viewed as legitimate topics for study and legitimate challenges that we have to address. So I feel good about that aspect. Of course, the questions remain urgent and the challenges are real, but at least the research community is paying attention. And in Washington, I was actually quite pleased: if I look at the significant investments being planned for AI and the development of AI R&D in the United States, safety, fairness, and a whole range of issues that touch on how AI will affect society are getting serious attention, so they are being funded. And that happened in the last five years, I would say. So that's a very positive development in this context.

Lucas Perry: So given all this perspective on the evolution of the alignment problem and the situation in which we find ourselves today, what are your plans or intentions as the president of AAAI?

Bart Selman: Yeah, so as part of AAAI, we've definitely stepped up our involvement with the Washington policymaking process to try to inform policymakers better about the issues. We actually did a roadmap for AI research in the United States, which also laid out topics 20 years ahead. And a key component we proposed there was to build a national AI infrastructure, as we called it: an infrastructure for AI research and development that would be shared among institutions and be accessible to almost every organization. The reason is that we don't want AI research and development to be concentrated in just a few big private companies. We would actually like to make it accessible to many more stakeholders and many more groups in society.

And to do that, you need an AI infrastructure where you have capabilities to store and curate large data sets, and large cloud computing facilities, to give other groups in society access to build AI tools that are good for them and useful for them. So as AAAI, we are pushing to make AI R&D generally available and to boost the level of funding, keeping in mind these issues of fairness and value alignment as valid research topics that should be part of anybody's research proposal. People who write research proposals should have a component where they consider whether their work is relevant in that context and, if it is, what contributions they can make. So that's what our society is doing, and this is, of course, a good time to be doing it because Washington is actually paying attention, because not just the US but every country is developing AI R&D initiatives. Our goal is to provide input and to steer it in a positive way. And that's actually a good process to be part of.

Lucas Perry: So you mentioned alignment considerations being explicitly covered, at least, in the publication of papers, is that right?

Bart Selman: So there are papers purely on the alignment problem, but also, if I look at the reinforcement learning world, people are aware that value alignment is an issue. And to me, it feels closely related to interpretability and understanding. We talked about it a little bit before: you are not just optimizing for a single quantitative objective, you're actually understanding the bounds of your system, safety bounds, for example. In the work on cyber-physical systems and self-driving cars, of course, a key issue is how do I guarantee that whatever policy has been learned, that policy is safe? So it's getting more attention now. The pure value alignment problem, when it gets to ethics, is different. We talked about values, and there's a whole issue of how you define values and what the basic core values are.

And these are partly ethical questions, where I think there is still room for growth. But I also see, for example at Cornell, that people in the philosophy department who think about ethics are starting to look at this problem again and at the directions AI is going. So I'm partly encouraged by an increase in collaborations between disciplines that traditionally have not collaborated much, and by the fact that ethics is now seen as relevant to computer science students. Five years ago, nobody even thought of mentioning that. Now I think most departments realize, yes, we actually should tell our students about ethical issues and educate them about algorithmic bias. Value alignment is a more challenging thing because you have to know a little bit more about AI, but most AI courses will definitely cover that now.

So, I think there's great progress, and I'm hoping that we keep continuing to make those connections and make it clear that when we train students to be the next generation of AI engineers, they should be very aware of these ethical components. And that, I think, might even be somewhat unique in engineering. I don't think engineering normally touches on ethics too much, but I think AI is forcing us to do so.

Lucas Perry: So you see understanding of, and a sense of taking seriously, the AI alignment problem at, for example, AAAI as increasing.

Bart Selman: Yes, yes, definitely it's increasing. It takes time for people to become familiar with the terminology, but people are much more familiar with the questions, and we've even had job candidates talk about AI alignment. And so then the department has to learn what that means. So it's partly an educational mission: you actually have to understand how reinforcement learning, optimization, and decision-making work, you have to understand a little bit of how things work. But I think we're starting to educate people, and definitely people are much more aware of these problems, so that's good.

Lucas Perry: Yeah. Does global catastrophic or existential risk from AI fit into AAAI?

Bart Selman: I would say that at this point, it's hard to say, because I think we have something like 10,000 submissions, and I think there's room at AAAI for those kinds of papers. I just personally haven't seen them. But as president of AAAI, I would definitely encourage us to branch out, and if somebody has an interesting paper, it could be a position paper or one of the other types of papers we now have, that says, okay, let's have a serious paper on existential risks, there is room for it. It just hasn't happened much so far, I think, but it fits into our mission. So I would encourage that.

Lucas Perry: So you mentioned that one of the things AAAI was focusing on was collaborating with government on policymaking decisions, offering comments on documents and suggestions or proposals. Do you have any particular policy recommendations for existing AI systems or the existing AI ecosystem that you might want to share?

Bart Selman: Yeah, my sense there is more of a meta-level comment. For people designing systems with a significant AI component, at the big tech companies, for example, our main input there is that we want people to pay serious attention to things like bias, fairness, AI safety, these kinds of criteria. So I wouldn't have a particular recommendation for any particular system. But with AAAI submissions, we now ask for a sort of impact statement. What we're asking from researchers is that when you do research that touches on something like value alignment or AI safety, you should actually think about the societal component and the possible impact of the work. So we're definitely asking people to do that.

For companies, I would say it's more that we encourage companies to have those discussions and make their engineers aware of these issues. And there's one organization, the Global Partnership on AI, that's now also very actively trying to do this on an international scale. So it's a process, and it's partly, as was mentioned earlier, an educational process where people have to learn about these problems and start incorporating them into their daily work.

Lucas Perry: I'm curious what you think of AI governance and the relationship needed between industry and government. One facet of this, for example: we've had Andrew Critch on the podcast, and he makes quite an interesting point that some number of subproblems in the overall alignment problem will be naturally solved via industry incentives, whereas some of them won't be. The ones that will naturally be solved by industry are those which align with whatever industry incentives are, so profit maximization. I'm curious about your view on the need for AI governance, and how it is that we might cover those areas of the alignment problem that won't naturally be solved by industry.

Bart Selman: That's a good question. I think not all of these problems will be solved by industry; their objectives are sometimes a little too narrow to cover the broad range of objectives. So I really think it has to occur in a discussion, a dialogue between policymakers, government, and public and private organizations. And it may require regulation, or at least some form of self-regulation, just to level the playing field. Earlier we talked about social networks spreading fake news. You might actually need regulations to tell people not to do certain things, because it will be profitable for them to do it, and so you have to have regulations to limit that.

On the other hand, I do think a lot of things will happen through self-regulation. Self-driving cars is a very circumscribed area, and there's a clear interest of all the participants, all the companies working on self-driving cars, to make them very safe. So for some kinds of AI systems, the objectives are sort of self-reinforcing: you need safety, otherwise people will not accept them. In other areas, I'm thinking for example of the finance industry, it's a big issue, because the competitive advantage is often in proprietary systems and it's actually hard to know what these systems do. I don't have a good solution for that. One of my worries is that financial companies develop technologies that they will not want to share, because that would be detrimental to the business, but that actually create risks we don't even know of.

So society actually has to come to grips with, are risks being created by AI systems that we don't know of? So it has to be a dialogue and interaction between public and private organizations.

Lucas Perry: So in the current AI ecosystem, how do you view and think about narratives around an international race towards more and more powerful AI systems, particularly between the United States and China?

Bart Selman: Yeah, I think that's a bit of an unfortunate situation right now. In some sense, the competition between China and the US, and also Europe, is good from an AI perspective in terms of investments in AI R&D, which also addresses some of the AI safety and alignment issues. So in some sense that's a benefit of these extra investments. The competition aspect is less positive. As AI scientists, we interact with AI scientists in China, we enjoy those interactions, and a lot of good work comes out of that. When things become proprietary, when people have data sets that other people and other organizations don't have, and some countries have them and others don't, I think the competition is not as positive. And, again, my hope is that we bring out the potentially positive aspects of AI much more strongly. To me, for example, AI can transform the healthcare system.

It can make it much more efficient and much more widely available, with remote healthcare delivery and things like that, and better diagnosis systems. So there's an enormous upside to developing AI for healthcare. I've actually interacted with people in China who work on AI for healthcare. Whether it gets developed in China or it gets developed here actually doesn't matter; it would benefit both countries. So I really hope that we can keep these channels open instead of having totally separate developments in the two countries. There is a bit of a risk because the situation has become so competitive, but, again, I'm hoping people see that improving healthcare in both countries is probably the right way to do it, and that we shouldn't be too isolationist in this regard.

Lucas Perry: How do you feel this sense of countries competing towards more and more powerful AI systems affects the chances of successful value alignment?

Bart Selman: Yeah, so that could be an issue. If countries really stop sharing their technology and potential advances, it is harder, I think, to keep value alignment and AI safety issues under control. I think we should be open about the risk of countries going at it by themselves, because the more researchers look at different AI systems from different angles, the better. A somewhat odd example: I always thought it would be nice if AlphaZero were available to the AI research community, so we could probe the brain of AlphaZero, but it's not. So there are already systems in industry that would benefit from study by a much broader group of researchers, and there's a risk there.

Lucas Perry: Do you think there's also a risk with sharing? It would seem that you would accelerate AGI timelines by sharing the most state-of-the-art systems with anyone, right? And then you can't guarantee that those people will use it in value-aligned ways.

Bart Selman: Yeah, that's the flip side. It's good you brought that up. There is a flip side to sharing even the latest deep learning code or something like that: malicious actors could use it. In general, though, I think openness is better in terms of keeping an eye on what gets developed. Openness allows different researchers to develop common standards and common safeguards. So I see that risk of sharing, but I do think overall the international research community can set standards. We see that in synthetic biology and other areas, where openness in general leads to better management of risks. But you're right, there is the effect that it accelerates progress. Still, the countries are big enough that even if China and the US completely separated their AI developments, both countries would do very well in their development of the technology.

Lucas Perry: So I'm curious, do you think that AI is a zero-sum game? And how do you view an understanding of AI alignment and existential risk at the highest levels of the Chinese and US governments affecting the extent to which there is international cooperation for the beneficial development of AI? There's this sense of racing because we need to capture the resources and power, but there's the trade-off with the risks of alignment and existential risk.

Bart Selman: I firmly believe that it's not a zero-sum game. Absolutely not. I give the example of the healthcare system. Both China and the US have an interest in more accessible, more available, and lower-cost healthcare. The objectives are very similar there, and AI can make an incredible difference for both countries. Similarly in education: you can improve education with AI-assisted education, adult education, continuous learning. So there are incredible opportunities and both countries would benefit. Definitely AI is not a zero-sum game, and I hope countries realize that. When China declared they want to be a leading AI nation by 2030, well, I think there's room for several leading nations.

So I don't think one nation being better at AI is the best outcome. The better outcome is if AI gets developed, used, and shared by many nations. I hope that politicians and governments see that shared interest. As part of that shared interest, they may also realize that the existential risk of bad actors, and that can be a small group of people, a company, or an organization, using AI for negative goals is a global risk that, again, should be managed by countries collaborating. So I'm hoping there is actually some understanding that the benefits are global and it's not a zero-sum game, that we all can gain, and that the risk is a global risk we should actually have a dialogue about. The one component that is always tricky, I think, is the military component. But even there, as I mentioned before, the risk of lethal autonomous weapons is, again, something that affects every nation. So I can see countries realizing it's better to collaborate and cooperate in these areas than to treat it as pure competition.

Lucas Perry: So you said it's not a zero-sum game and that we can all benefit. How would you view the perspective that the relative benefits for me personally of racing are still higher, even if it's not a zero-sum game, therefore I'm going to race anyway?

Bart Selman: There may be some of that, except that I look at it a little differently. I can see a race where we still share technology. The race is one of these strange things: it's almost like we're competing with each other but we're trying to get better all together. You can have a race and that can still be beneficial for progress, as long as you don't want to keep everything to yourself. And what's interesting, and that's the story of scientific discovery and the way scientists operate, is that in some sense scientists compete with each other because we all want to discover the next big thing in science. So there's some competition. There is also a sense that we have to share, because if I don't share, I don't get the latest from what my colleague is doing. So there's a mutual understanding that yes, we should share, because it actually helps me, even individually. So that's how I see it.

Lucas Perry: So how do you convince people to share the thing which is like the final invention? Do you know what I mean? If I need to share because otherwise I won't get the next thing that my colleague will make, but I've just made the last invention, that means I will never have to look to my colleague again for another invention.

Bart Selman: Yeah, that's a good one. But in science, we don't think there's an endpoint. There will always be something novel.

Lucas Perry: Yeah, of course there's always something novel, but you've made the thing that will discover every other new novel thing more quickly than any other agent on the planet. How do you get someone to share that?

Bart Selman: Well, I think part of the story is still that even if one person or one country becomes that dominant, there is still the question: is that actually beneficial even for that country? There are many different capabilities that we have; there are still nuclear weapons and things like that. So you might get the best AI, and somebody might say, "Okay, I think it's time to terminate you." There are a lot of different forces. So I think it's a sufficiently complex interaction game that thinking of it as a single-dimension issue is probably not quite the way the world will work. And I hope politicians are aware of that. I think they are.

Lucas Perry: Okay. So in the home stretch here, we've brought up lethal autonomous weapons a few times. What is your position on the international and national governance of lethal autonomous weapons? Do you think a red line should be drawn, such that life or death decisions are never delegated to machine systems?

Bart Selman: That's a reasonable goal. I do think there are practical issues in specifying exactly in what sense and how the system should work. Decisions that have to be made very quickly, how are you going to make those if there's no time for a human to be in the loop? So I like it as an objective that there should always be a human in the loop, but the actual implementation for a given system, I think, needs further work. It might even come down to looking at actual systems and saying, "Okay, this one has sufficient safeguards, and this one doesn't," because there's this issue of how quickly we have to react and whether that can be done.

And of course, part of that is that a defensive system may have to make a very quick decision, which could endanger the lives of, I don't know, incoming pilots, for example. So there are some issues, but I like it as a principle that lethal autonomous systems should not be developed and that there should always be human decision-making as part of it, though that probably has to be figured out for each individual system.

Lucas Perry: So would you be in favor of, for example, international cooperation in limiting autonomous weapons, I guess having treaties and governance around them?

Bart Selman: Oh, definitely. Yeah, definitely. People are sometimes skeptical or wonder whether it's possible, but I actually think it's one of those things that is probably possible, because the real tricky part is when militaries start to develop those systems: once these systems are being developed or start being sold, they can end up in the hands of any group. So I think countries actually have an interest in treaties and agreements on regulating or limiting any kind of development of such systems. I'm a little hopeful that people will see it would be in nobody's interest to have countries competing on developing the most deadly lethal autonomous weapon. That would actually be a bad idea, and I'm hopeful that people will realize that. It's partly, again, an educational thing, so people should be more aware of it and directly ask their governments to get agreements.

Lucas Perry: Do you see the governance of lethal autonomous weapons as a deeply important issue in the international regulation and governance of AI, kind of like a first key issue as we begin to approach AGI and superintelligence? Is our ability to regulate and come up with beneficial standards for autonomous weapons really important for long-term beneficial outcomes from things like AGI and superintelligence?

Bart Selman: Yeah, I think it would be a good exercise, in some sense, of seeing what kind of agreements you can put in place. Lethal autonomous weapons, I think, is a useful starting place because it's fairly clear, although there are some complications. You can say, "Oh, we'd never do this," but what if you have to decide in a fraction of a second what to do? So there are things that have to be worked out. But in principle, I think countries can agree that it needs collaboration between them, and then that same kind of discussion, the same kind of channels, because these things take time, forming the right channels and the right groups of people to discuss these issues, could then be put towards other risks that AI may pose. So I think it's a good starting point.

Lucas Perry: All right. A final question here, and this one is just a bit more fun. At Beneficial AGI 2019, I think you were on a panel about whether we want machines to be conscious. On that panel, you mentioned that you thought AI consciousness was both inevitable and adaptive. I'm curious if you think about the science and philosophy of consciousness, and if you have a particular view that you subscribe to.

Bart Selman: No, it's a fun topic. When I thought more about consciousness and whether it will emerge, I went back to an area of AI I've known for a long time, generally called knowledge representation and reasoning, which is about how knowledge is represented in an AI system and how an AI system can reason with it. One big sub-area there was the notion of self-reflection, including in multi-agent systems. Self-reflection means you not only know certain things, you also know what you know, and you know what you don't know. Similarly, in multi-agent systems, you have to know not only what you know, but also have some idea of what others may know, and of what other agents don't know, and that is to facilitate interactions with other agents.

So this whole notion of reflection on your own knowledge and other agents' knowledge, in my mind, is somewhat connected to consciousness of yourself and, of course, your environment. That led to my comment that if you build sufficiently complex systems that behave intelligently, they will have to develop those capabilities. They have to know what they know, what they don't know, and what others know and don't know. And knowing what others might know about you can go on to arbitrary levels of interaction. So I think it's going to be a necessary part of developing intelligent systems, and that's why my sense is that some notion of consciousness will emerge in such systems, because it's part of this reflection mechanism.

And what I think is exciting about it is that in consciousness research there's also a lot of work now on the neurological basis for consciousness, some basis in the brain that points at consciousness. We can now work on that: we see how deep reinforcement learning interacts with neuroscience, and we're looking for analogies between deep reinforcement learning approaches in AI and what insights they give about actual brains, actual biological neurological systems. So perhaps when we see things like reflection and consciousness emerge in AI systems, we will get new insights into what potentially happens in the brain. So there's a very interesting potential there.
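To make the notion of reflection Selman describes a bit more concrete, here is a minimal illustrative sketch in Python. It is not drawn from the conversation or from any particular system; it simply shows an agent that tracks what it knows, what it knows it does not know, and what it believes other agents know. All class and method names are hypothetical.

# Toy "reflective" agent: tracks its own knowledge, its known gaps,
# and its beliefs about what other agents know.
class ReflectiveAgent:
    def __init__(self, name):
        self.name = name
        self.known = set()            # facts this agent knows
        self.known_unknowns = set()   # questions it knows it cannot answer yet
        self.beliefs_about = {}       # other agent's name -> facts we believe they know

    def learn(self, fact):
        # Learning a fact removes it from the known unknowns.
        self.known.add(fact)
        self.known_unknowns.discard(fact)

    def note_unknown(self, question):
        # Self-reflection: explicitly record a gap in our own knowledge.
        if question not in self.known:
            self.known_unknowns.add(question)

    def observe_other(self, other, fact):
        # Multi-agent reflection: model what another agent appears to know.
        self.beliefs_about.setdefault(other, set()).add(fact)

    def knows(self, fact):
        return fact in self.known

    def knows_that_other_knows(self, other, fact):
        return fact in self.beliefs_about.get(other, set())


a = ReflectiveAgent("A")
a.learn("the door is locked")
a.note_unknown("where the key is")
a.observe_other("B", "where the key is")
print(a.knows("where the key is"))                        # False
print("where the key is" in a.known_unknowns)             # True: A knows that it does not know
print(a.knows_that_other_knows("B", "where the key is"))  # True: A believes B knows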

Lucas Perry: My sense of it is that it may be possible to disentangle constructing a self model, a model of both what I am and what I know and don't know, and also my world model, from consciousness, even though these things seem to be correlated with consciousness, with the phenomenal experience of being alive. It seems to me they could come apart, just because it seems conceivable that I could be a sentient being with conscious awareness that doesn't have a self model or a world model. You can imagine just awareness of a wall that's the color green, with no sense of duality there between self and object. So it's a bit different the way philosophers and computer scientists come at the problem. There's the computational aspect, of course, the modeling that's happening, but it seems like the consciousness part can perhaps become disentangled from the modeling. And so I'm curious if you have any perspective or opinion on that, and how we could ever know if an AI was conscious, given that they may come apart.

Bart Selman: No, you raise an interesting possibility, that maybe they can come apart. And then the question is, can we investigate that? Can we study that? That's a question in itself. I was coming at it more from the sense that when the system gets complex enough and starts having these reflections, it will be hard not to have it be conscious. But you're right, it probably could still come apart, although I would be a little surprised. So my point in part is that the deep reinforcement learning approach, or whatever deep learning framework we use to get these reflective capabilities, I'm hoping it might give us new insights into how to look at it from the brain perspective and a neural perspective, because these things might carry over. And is consciousness a computational phenomenon? My guess is it is, of course, but that still needs to be demonstrated.

Lucas Perry: Yeah. I would also be surprised if sophisticated self and world modeling didn't, most of the time or all of the time, carry conscious awareness along with it. But even prior to that, as we have domain-specific systems, and it's a little bit sci-fi to think about, there's the risk of proliferating machine suffering if we don't understand consciousness and we're running all of these kinds of machine learning algorithms that don't have sophisticated self models or world models, but for which the phenomenal experience of suffering still exists. We had factory farming of animals, and then maybe later in the century we have the running of painful deep learning algorithms.

Bart Selman: No, that's indeed a possibility. It sort of argues that we actually have to dig deeper into the questions of consciousness, and so far, I think, most AI researchers have not studied it. I'm just starting to see some possibility of studying it again, of starting to study it as AI researchers. And it brought me back a little bit to this notion of reflection. Topics go in and out of fashion, but that used to be quite seriously studied, including with philosophers, about what it means to know what you know, and what it means to know what you don't know, for example. And then there are the things you don't know that you don't know. So we thought about some of these issues, and now consciousness brings in a new dimension, and you're quite right: it could be quite separate, but it could also be related.

Lucas Perry: So as we wrap up here, is there a final comment you'd like to make or anything that you feel like is left unsaid or just a parting word for the audience about alignment and AI?

Bart Selman: So, my comment to the audience is that the alignment question, value alignment, and AI safety are key topics for AI researchers, and there are many research challenges there that are far from solved. In terms of the development of AI, there are tremendous positive opportunities if things get done right. One concern I have as an AI researcher is that we get overwhelmed by the concerns and the risks and decide not to develop positive capabilities for AI. So we should keep in mind that AI can really benefit society if it is done well, and we should take that as our primary challenge and manage the risks while doing so.

Lucas Perry: All right, Bart, thank you very much.

Bart Selman: Okay. Thanks so much. It was fun.
