Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program.
Topics discussed in this episode include:
- The reason Future of Life Institute is offering AI Existential Safety Grants
- How receiving a grant changed Max’s career early on
- Details on the fellowships and future grant priorities
1:08 What inspired you to start this grants program?
4:16 Where would you rate AI technology in terms of its potential impact and power?
6:16 What kind of impact would you like the new FLI grants program to have on the development and outcomes of artificial intelligence?
8:25 How does your personal experience with grants inform this grants process at the Future of Life Institute?
13:41 Do you have any inspiring futures that speak to your heart that you’d be interested in sharing?
15:59 Do you have any final words for anyone who might be listening that’s considering applying to this grants program but isn’t quite sure?
17:29 Could you tell us a little bit more about what the grants program is?
18:29 What are the details of the fellowships?
19:56 Is there a total amount that is on offer between these two programs?
21:20 What are FLI’s other grants-related priorities?
We hope that you will continue to join in the conversations by following us or subscribing to our podcasts on YouTube, Spotify, SoundCloud, iTunes, Google Play, Stitcher, iHeartRadio, or your preferred podcast site/application.
Lucas Perry: Welcome to the Future of Life Institute podcast. I’m Lucas Perry. This is a special episode with FLI president Max Tegmark, as well as with our grants team, Andrea Berman and Daniel Filan. We’re excited to announce a twenty-five million dollar multi-year grants program. The goal is to tip the balance towards the flourishing of life and away from extinction. This was made possible by the generosity of cryptocurrency pioneer, Vitalik Buterin, and the Shiba Inu cryptocurrency community. You can find more information at futureoflife.org/grant-programs or by tuning into the rest of this episode. And with that, I’m happy to have Max Tegmark introduce the meaning and purpose of the grants program and Andrea Berman and Daniel Filan will give you more of the details.
Thanks so much for coming on the podcast Max, I’m really excited to be getting your perspective and story on the FLI grants program that just launched. To start things off here, I’m curious if you could explain what inspired you to start this grants program?
Max Tegmark: Although there is a ton of money going into artificial intelligence research, almost all of it is going into making AI more powerful and almost none of it is going into making sure that we keep it safe and beneficial. And I think people often have this misguided perspective where they think of AI as either good or evil and like to quibble about that, when it’s pretty clear that artificial intelligence is a tool, just like a knife or a fire. The question isn’t whether it’s good or evil; it’s morally neutral, and whether it’s good or bad depends on how we use it. And I think we have the potential to create a really, really inspiring future for life.
That depends on whether we win this wisdom race between the growing power of AI and the growing wisdom with which we manage it. So far, I think we’re not doing such a great job: we’re seeing AI increasingly manipulate people via social media, we’re seeing AI going into all sorts of increasingly sketchy uses, and there’s still quite the free-for-all on the legal side. And even on the technical side: so far, if your Roomba accidentally has a bug in it and falls down the stairs, no big deal, but we’re putting AI in charge of ever more infrastructure and decisions that affect people’s lives, from courtrooms to electrical grids to things that involve life and death in hospitals, right?
So it’s crucial that we don’t just leave it to people outside of the technical fields to worry about this, but that those of us who are really geeky, nerdy AI researchers like myself also work hard on these technical questions. How can we build AI systems that actually do what we want them to do? And how can we make them actually trustworthy? Not because the sales representative said we should trust them, but because we understand enough about them that we actually can trust them. How can we make sure that, when they make decisions that involve human lives, they have been taught the appropriate human values or goals? These are very difficult technical questions, even aside from the moral and ethical ramifications, and we need your help, if you are an aspiring AI researcher, to help solve them.
Lucas Perry: The technological tools of the 21st century are going to have major impacts on human society as well as the future of humanity and life. Where would you rate AI technology in terms of its potential impact and power?
Max Tegmark: I would rate it as the unchallenged number one. All the other technologies were invented using intelligence, using human intelligence. So it’s a no-brainer that if we can amplify human intelligence greatly with artificial intelligence, this is going to enable us to develop all these other technologies, which would otherwise have taken way, way longer, much faster. And particularly if AI succeeds in its initial goal, which was not just to make robotic vacuum cleaners, but to ultimately do everything that the human mind can do, then the most intelligent entities on this planet are going to be machines. And it would be incredibly naive to think that we can just kind of bumble into this future without any planning and that things are somehow magically going to go well.
The default is just disaster, and I think most likely just human extinction. That’s why this program we’re doing is focused specifically on AI existential safety: safety of systems that are so powerful, so smart in the future, that there’s not just a risk that they’re going to fall down the stairs or something, but that they could lead to the end of human existence. When I was a kid, a lot of people thought that might be thousands of years away. Now, recent surveys show that most AI researchers think it’s decades away. So it’s high time to really turbocharge this research.
Lucas Perry: Given that this is likely to be one of the most impactful technologies of the 21st century, if not the most impactful technology, what kind of impact would you like the new FLI grants program to have on the development and outcomes of artificial intelligence?
Max Tegmark: The goal of this grants program is not to answer all these crucial technical questions we need answered, but rather to grow the talent pipeline, to bring a lot of talented, idealistic people into this field, working on these technical issues. If you just zoom out a little bit and think of this beautiful little blue spinning ball in space that we all live on, almost 8 billion of us, with all these opportunities and all these challenges, I find it quite ridiculous how few people are actually working on what is arguably the most important challenge we face, right? There are way, way more people working on AI just to optimize how you can get kids to spend more time watching ads, or how you can get more girls to become anorexic by watching unrealistic role models, and so on. There are more people working on those things than on these incredibly fascinating foundational questions of how you make powerful AI systems actually safe and beneficial, and that’s got to change.
And I would like to see a future where there’s at least as much talent going into working on AI safety as there is into medical safety, cancer research and issues like this. We’re nowhere near there. So what we want to try to do is turbocharge this by creating a series of grants. We’re starting with grants that can attract talented undergrads to go and do their PhD in computer science, and likewise grants that let people who are finishing their PhD in computer science go and do a really nice, well-paid post-doc in AI safety, in the hope that these people will soon become professors and work at companies where they can in turn mentor and supervise a whole new round of talent, so that this field can rapidly grow to the size that it needs to be.
Lucas Perry: So as a lifelong scientist and someone passionate about the mysteries of the universe, you’ve long been exposed to and experienced talent pipelines, both before and during your time as a professor, so you have quite a lot of experience with grants. I’m curious if you could describe some experience in your life where you received a grant that was really crucial and helpful for working on the problems you found most exciting and important, and how your experience with grants in general informs this grants process at the Future of Life Institute.
Max Tegmark: Yeah. It’s amazing how much difference grants can make, and how much difference they made, in fact, in my life. I remember when I was a grad student and a post-doc, I had pretty eclectic interests, though there were some things I felt were just really, really important to work on that many of my peers, especially most of my senior peers, thought were just BS. And then I got this amazing grant; this particular one was from the Packard Foundation, and it’s called the Packard Fellowship. What was so amazing about it was that it let me do exactly the research I was passionate about for five years. It had more impact on my career than any other funding ever, and it enabled me to just focus on doing what my heart was on fire about. And I happen to believe that not only is it more fun and fulfilling to work on something you really believe in. Hey, if we get this one shot to live on this planet, we should make it count and follow our heart, right?
But it is also the case that we do much better work when we work on what we’re passionate about. And my message to you, if you are someone watching this who loves AI and computer science, who wants to work on it but is concerned that it’s not going to be safe or beneficial and wants to make a difference: you can really make a career out of this. This is not a case where you have to choose between a well-paid, successful career on one hand and your heart on the other. You can really have both. Thanks to various other funders like the Open Philanthropy Project, there’s already a lot of grant money available for this kind of AI safety work if you’re a professor. The problem is there are almost no professors who do this. That’s what this is trying to change.
If you go into this field now and become a world-leading expert on AI existential safety, there will be an amazing career ahead of you, and you can help mentor others to continue realizing the parts of the vision that you won’t have time to finish all by yourself. So it’s a great career move, and it’s incredibly rewarding, because when you do this you know that you are actually working on what I believe is the single most crucial fork in the road that humanity has ever faced. We’ve spent billions of years on this planet being basically subject to the whims of nature: now there’s a drought, now there’s a hurricane, and there’s nothing we can do about any of it. Now we have become so empowered by our technology that we can either use it to ruin our planet (finish chopping down the rest of the rainforest, mess up our climate, massacre other people and other species), or we can use it to create a future where life flourishes like never before.
It’s so obvious that technology has this innate ability to enable flourishing. Why is it that life expectancy now is not 30 years anymore? Because of technology. Why is it that most of you watching this are not worried about starving to death or dying of pneumonia? Because of technology, right? And technology today is very limited by our own intelligence as humans, by our ability to invent the cure for cancer and many other things. With artificial intelligence, it’s quite clear that if we get this right, we’re not going to be limited by our own abilities anymore. We’re eventually going to be limited just by the laws of nature, because artificial intelligence that’s beneficial and aligned with our values will enable us to get through all these roadblocks and enable a truly inspiring future, not just for the next election cycle, but for billions of years, and maybe not just on Earth either, but throughout much of this amazing cosmos. This grants program is basically a portal you can go through to help bring about this inspiring future.
Lucas Perry: Do you have any inspiring futures that speak to your heart that you’d be interested in sharing?
Max Tegmark: I try to be very humble about the question of exactly how the future should be, and I would very much not like to micromanage future generations, but I would very much like to give those future generations the opportunity to exist in the first place. We have been so reckless with our tech so far that we’ve almost obliterated Earth with an accidental nuclear war a bunch of times. And if we build artificial general intelligence without solving these crucial technical problems, it’s overwhelmingly likely that some small clique of humans is going to just use it to take power over the whole rest of the planet. If anyone watching this isn’t worried about that, I would encourage you to just take 30 seconds and visualize the face of your least favorite leader on this planet. You don’t have to tell anyone who it is. Now just imagine that they control everything.
If that doesn’t make you feel great, then I think you’re on board with this vision: this great power that AI can unleash should not be given to that person, it should be given to humanity. We should figure out a way of using this technology to empower everybody to create a good future, and we do not have the answers for how to do that yet. To really answer your question, Lucas, about how our society should be organized, whether we should have a very pluralistic world where different people in different corners of the world can do things their own way and experiment, as long as they don’t go kill everybody else and respect the pluralism: on how this all plays out, I want to be humble and defer to others to help work it out. But a prerequisite to even being able to have that conversation is that we can control the technology itself, have it be safe and beneficial, and start thinking through these hard questions about how you can even make AI that can understand human values, learn them, and retain them.
Lucas Perry: So as we wrap up here, do you have any final words for anyone who might be listening that’s considering applying to this grants program but isn’t quite sure, or who works on AI but isn’t sure that AI alignment or AI existential risk research is really the right path? What is it that you might say or share with someone like that?
Max Tegmark: If you’re considering any kind of career and you’re not sure, you should go find people who are in that career already and just talk to them and see what it’s like. And we’ve created an AI existential safety community page, which will be linked from this video, where you can see a bunch of friendly faces of professors around the world who are working on this. Maybe one of them can be your mentor for your PhD or your post-doc. Reach out to people like that and talk to them: ask them what they do, ask them why they’re excited about it, ask them if they’re taking on students or post-docs, ask them if they would like to have you as a free post-doc or a free grad student, because that’s what it’s going to be like for them if you come with our fellowship.
Lucas Perry: Alright. Excellent, Max. I think you did a really wonderful job of conveying how there are very few issues which measure up to the impact and scale of this one, so thank you for that.
Max Tegmark: Thank you, Lucas.
Lucas Perry: And with that, I’m happy to introduce Daniel and Andrea who will give you more details about the grants. Welcome to the podcast Andrea and Daniel, it’s great to have you here. Andrea, could you tell us a little bit about what the grants program is?
Andrea Berman: We were very excited to receive a twenty-five million dollar donation to build a grants program, specifically looking at existential risk and ways to reduce it. We are especially focused on supporting collaboration amongst people thinking about these topics, and we are excited to collaborate with everyone. We are also excited about addressing the talent pipeline and supporting people early in their careers to get into studying existential risk. We’re looking at supporting policy and advocacy, behavioral science, and the AI existential safety program, which has already launched and which Daniel is leading.
Daniel Filan: And particularly in the AI existential safety aspect, we’re really interested in work that’s about analyzing ways in which AI could cause some kind of existential catastrophe for humanity, agendas for research that could reduce this existential risk, and of course people who are actually doing it. Reducing existential risk, that is, not causing it; we will not fund that.
Lucas Perry: Exactly.
Daniel Filan: Yeah, so we have two specific fellowships on offer right now. The first is a PhD fellowship. This is, as we mentioned, for people working on technical aspects of AI existential safety. We are in particular targeting it at people who are just starting their PhD in 2022, so applying this season to start next year. And we are also interested in funding people who are already in their PhD and want to be working on AI existential safety, but perhaps don’t have the funding to work on that particular topic. I think we’re being somewhat generous: we have a stipend of $40,000 for people in the US, UK or Canada. Also, if people are shortlisted, we are going to pay for some of their application fees to universities, and we’ll invite them to an information session about which places might be good to work at. The deadline for the application is October 29th, 2021, including the day of October 29th, and letters of recommendation have to come in by November 5th. So that’s the PhD fellowship. We also have a postdoctoral fellowship. If you just graduated from a PhD and want to do a postdoc, or maybe you’re moving in from industry or a different field, I think this could be a pretty good option for you. In this case, it’s a higher stipend: $80,000 for people in the USA or Canada. And the deadline for that one is November 5th.
Lucas Perry: Is there a total amount that is on offer between these two programs?
Daniel Filan: Well, I guess we shouldn’t spend more than $25 million, but I actually don’t have a particular budget. I think we’re pretty excited to support as many people as it makes sense to. We’ll see how it goes.
Lucas Perry: So if your application is excellent, then you should apply anyway.
Daniel Filan: Yeah. I don’t think we expect to run out of money for really good applicants.
Lucas Perry: Okay, great. It sounded like there’s also an AI existential risk community that’s being developed. Could you tell me a little bit about that?
Daniel Filan: One aspect of making work happen in this space is publicizing who’s interested in work particularly focused on reducing existential risk. I think people have a good sense of who’s interested in natural language processing or who’s interested in reinforcement learning in AI, and you might have some sense of who’s interested in working on safety in general, but it can be a little bit less clear which professors are really interested in supervising work specifically on existential safety. So what we’ve done is we have this form that professors can fill out to tell us why they’re interested, and we’ll feature some people on our website who we think are interested in supervising work in this field and who we think would be good supervisors, just so that, for one, students can find them, and for two, other professors can know who’s in this space.
Lucas Perry: Yeah, I really love the idea of this community in terms of increasing the transparency of who’s working on what. I would have loved that in undergrad, because it seems like a lot of what you need here is to basically already be in the community, or pretty adjacent to it, to get that transparency into who’s working on what. So I really love the idea of this community. Here we’ve discussed the AI existential risk grant program, but as we mentioned, there will be future grant programs that are also focused on reducing existential risk. So Andrea, could you tell me a little bit about these future programs that will be created with the generous donation from Vitalik Buterin and the Shiba Inu cryptocurrency community?
Andrea Berman: We at FLI have a policy team which is currently in the midst of developing its new policy priorities, and those will inform what our grant-making priorities will be as well. We hope to announce both our internal and grant-making priorities in early 2022, at which point we anticipate making a range of grants, both research grants and fellowships, to address the talent pipeline. We also anticipate grants addressing behavioral science, again most likely research grants as well as fellowship grants. And we also plan to announce soon some grants related to the Future of Life Awards, which will be a great opportunity to support not only the big award winners, but also the people who may have helped support their work along the way, or who may have highlighted their work to us, so that we can celebrate them.
So we are excited about putting this large donation to good use and thinking about innovative and creative ways that we can make grants. And I think one of the threads that runs through all our grants is that, as I mentioned earlier, we really want to collaborate with others. We’ve already been talking with other funders about ways that we can collaborate with them on supporting individuals and organizations in this space. Just forming better connections with all of the people that are working in this space, or want to be working in this space, is a great way to feed the ecosystem and expand it. So that is an exciting thing.
Lucas Perry: So speaking of exciting things, looking back over all of these different things that are on offer, what are you both most excited about and hopeful for with these grants going into the future? Daniel, would you like to start off?
Daniel Filan: Yeah. If I had to pick one thing, I think it’s the idea of somebody who’s heard of this idea that maybe you can do technical work to reduce the chance that humanity goes extinct because of AI. They go to FLI’s website, they see, oh, here are some professors who are interested in working on this topic too. They get an FLI fellowship and they go on to do amazing work. I think that really might happen; I think we’ve got a good shot at making it happen, and if it did, that would be fantastic. So that’s probably the number one exciting thing for me.
Andrea Berman: I am just a lover of learning new things, and already in the last couple of months I’ve learned a lot about existential risk. I’ve been able to connect with a lot of applicants and prospective applicants to help them with their applications and with thinking through how they’re going to answer all the questions that we have. And we really do, like Daniel said, believe there are a lot of potentially great people out there, and we want to be able to support them. We want to be as accessible and helpful throughout the process as we can be.
Lucas Perry: If listeners are interested in applying or getting more information or checking out when the deadlines are, where can they do that?
Andrea Berman: They can visit our website at grants.futureoflife.org. There’s all the information about our current grant opportunities there. And it will be updated as the other opportunities I mentioned are rolled out. They can also always email us at firstname.lastname@example.org with any questions they have.
Lucas Perry: Awesome. Well, thank you so much, Andrea and Daniel for coming on. If you have any last parting words of encouragement for the listeners, maybe to help motivate them to apply?
Andrea Berman: You never know, you could save the world.