
Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

Published September 29, 2017

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group 80,000 Hours tries to answer.

To learn more, I spoke with Rob Wiblin and Brenton Mayer of 80,000 Hours. The following are highlights of the interview, but you can listen to the full podcast or read the full transcript below.

To learn more, visit 80000hours.org and subscribe to Rob’s new podcast.

Transcript

Ariel: I'm Ariel Conn with the Future of Life Institute. The world is full of problems but each of us has only so much time available to make it better. If you wanted to improve the world as much as possible, what would you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group 80,000 Hours tries to answer. They try to figure out how talented graduates in their 20s can set themselves up to help as many people as possible in as big a way as possible.

To learn more about their research, I'm happy to have Rob Wiblin and Brenton Mayer of 80,000 Hours joining me today. Before taking on the role of director of research for 80,000 Hours, Rob was the research director and executive director of the Center for Effective Altruism. He also recently launched a new podcast for 80,000 Hours, which I highly recommend checking out and which we'll link to on the site.

Brenton's background is in clinical medicine and he has cofounded two nonprofits, including Effective Altruism Australia. At 80,000 Hours, he's a career coach, giving people personalized advice about how to have these high-impact careers. Rob and Brenton, thank you so much for being here.

Rob: Thanks so much, Ariel. It's a pleasure to be on the show.

Brenton: Thanks for having us.

Ariel: First, can you give us just a bit more background about what 80,000 Hours is and how it started?

Rob: So, 80,000 Hours has been around for about five or six years now. It started when two colleagues of mine, Benjamin Todd and Will MacAskill, who were the founders, were finishing their undergraduate degrees at Oxford and Cambridge respectively. They wanted to figure out how they could do as much good as possible. Both of them had been studying philosophy and had a real interest in ethics, in thinking about what is valuable in the world and how we can contribute to making the world a more moral place. But when they looked into it, they couldn't really find much research to guide them on how … if you wanted to help a lot of people in a really big way, if you wanted to raise the welfare of humans, and I guess animals as well, what would you actually do?

They started doing some investigation of this, looking into things like: what are the odds of becoming an MP in the UK if you try to do that? If you became a doctor, how many lives would you save? If you went into different professions, how much good would you do, and what are the most important problems you could focus your career on? And pretty quickly, they were learning things with just a couple of months' work that really no one else had written up, because this whole research area had barely been investigated at all, or where it had been investigated, it was only done very indirectly, and no one was pulling it together into an actual guide to how you can do good with your career.

Having realized that they could make a whole lot of progress on these questions quite quickly, they decided to actually start an organization, 80,000 Hours, which would conduct this research in a more systematic way and then share it with people who wanted to do more good with their career. 80,000 Hours has also ultimately become part of the Effective Altruism community. And Effective Altruism, as many of our listeners would know, is a social movement that's about using reason, and evidence, and analysis to figure out how you can do as much good as possible.

There are different groups taking different angles on this. There are people looking at how you can donate your money in ways that will do the greatest good; there are other research groups looking at things like what kind of policy changes you could push for in government that would be most valuable. We're taking the angle of: if you're a talented graduate in your 20s and you wanted to help as many people as possible in as big a way as possible with your career, what kind of professions would you go into, what strategies would you adopt and what kind of problems would you be working to solve?

Ariel: And, so, real quick, 80,000 Hours is roughly how much you estimate the average person will spend in a lifetime on their careers, right?

Rob: That's it. 80,000 Hours is roughly the average number of hours that you'd work in a full-time professional career. I think it's 40 years, times 40 hours a week, times 50 weeks a year. On the one hand, that's an awful lot of time that you're potentially going to spend over the next 40 or 50 years of your career, so it pays off to spend quite a while thinking about what you're actually going to do with all of that time when you're in your late teens or early 20s.
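A quick check of that arithmetic, as a minimal sketch using the figures Rob quotes:

```python
# The back-of-the-envelope figure behind the name "80,000 Hours":
# a 40-year full-time career at 40 hours a week, 50 weeks a year.
years = 40
hours_per_week = 40
weeks_per_year = 50

total_hours = years * hours_per_week * weeks_per_year
print(total_hours)  # 80000
```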

On the other hand, 80,000 Hours is just not that long relative to the scale of the problems that the world faces. That would suggest that you have to be quite careful about where you're going to spend your time, because you can't tackle everything. You've only got one career to spend, so you should be quite judicious about what problems you try to solve and how you go about solving them.

Ariel: How do you actually try to help people have more of an impact with their careers?

Brenton: We break this up into a couple of things. The main one is just the website: people can find us at 80000hours.org. The second one is that sometimes we do individualized coaching for people. On the first, the main product and the main thing that people read on the website is a career guide, which is a set of 12 articles that raise the considerations that are pretty important to have in mind when you're thinking about how to have a high-impact career.

The career guide goes over a bunch of ideas that people probably need in order to have high-impact careers. We'll talk about how to have a career which is satisfying, how to have a career which is working on one of the world's most important problems, and how to set yourself up early in your career so that later on you can have a really large impact as well. Then we also look into a bunch of different specific careers, so we write career reviews on things like practicing as a doctor or trying to become a politician.

Then we look at problem profiles as well: say the problem of climate change, or the problem of artificial intelligence. The second part of what we do is career coaching, where we try to apply the advice to individuals. People can apply for this through a link that hopefully we'll be able to put into the show notes of this podcast. With this, we help people apply these considerations to their career specifically and try to point them in the best direction that we can for how they can have a really impactful career.

Ariel: Can you walk us through what the process is for someone who comes to you either through the website or directly to you for help choosing a career path?

Brenton: If someone came through career coaching, for example, the first thing I try to do is figure out where they're coming from: what causes they're interested in, what their background is, what they could become good at. Then I'll try to think about what's standing between them and having a really high-impact career. Sometimes this could be choice of cause area. Sometimes it could be a specific job opportunity that I'm aware of, and sometimes it's a few small considerations which people probably don't think about.

An example of something like this is our advice that you want to do a bunch of exploration early on in your career, and that it's better to do that exploration before you do postgraduate study rather than after. Then hopefully, I give them some resources and some connections to people working in the relevant fields, and just try to leave them with a few actionable steps.

Ariel: You mentioned doing exploration before versus after postgraduate study. How does your advice change if someone comes to you, say, early in their undergraduate career versus at the end, or if they're considering going to graduate school, or as a recent graduate?

Rob: Right, so it varies quite a lot whether you're advising someone who's 20 years old versus 60 years old. One of the big-picture things that changes when you're 20 is that you don't yet really know what you're good at. It turns out that people are actually very bad at anticipating what they're going to enjoy and where their strengths are. Also, you have so much time ahead of you in your career that if you manage to gain any skills, then almost all of the benefit of gaining those skills comes far in the future.

You have 30 or 40 years ahead of you during which you can benefit from learning to write better or become a better public speaker or do better analysis. The things we tend to emphasize early on are: exploration ... so trying to collect information, firsthand information from having a wide range of different experiences to figure out what you flourish at and what you enjoy doing, and also building up career capital.

Career capital is this broader concept of anything that puts you in a better position to make a difference in the future. That includes skills; it includes your network, who you know and what kind of jobs you can get; it includes credibility, things like having a degree and the ability to be taken seriously; and it also includes just having some money in the bank to finance a change in what you're doing in your career, so that you don't get stuck because you're living paycheck to paycheck.

Now, if someone comes to us and they're 60, by that stage there's not that much time for them to benefit from gaining new skills or from exploring completely different areas. By that stage, we focus mostly on how they can use what they already have to have a really large impact right away. We'll be thinking: what jobs can they go into right now, given the skills that they have and the network they have, where they can just immediately have a lot of impact?

Of course, in the middle of your career when you're 40, it's somewhere between these two things. You can still potentially go into other areas. You can specialize in different problems, and especially if you've developed transferable skills, like being a good writer or a good speaker, or just being generally well-informed, then potentially you can apply those transferable generic skills to different kinds of problems. But at the same time, at that stage you do want to be starting to think: how can I actually have an impact now? How can I actually help people directly, so that I don't just run out of time by the time I'm retiring?

Ariel: I actually did go through and try some of the stuff you have on the website, and one of the things I did was fill out the career quiz, which I recommend because I think it was possibly the shortest questionnaire on your site. That was nice. Conveniently, the feedback I got was that I should pursue work at an effective nonprofit.

Brenton: Sounds like you're doing well.

Ariel: Yeah, thanks. I know there are other cases where it makes more sense for you to encourage people to earn to give. I was wondering if you could talk about the difference between, say, working directly for a nonprofit, or earning to give, or if there are other options as well for people.

Rob: Earning to give, for those who don't know, is the career approach where you try to make a lot of money and then give it to organizations that can use it to have a really large positive impact on the world. It's one of the ideas that 80,000 Hours had relatively early on, and at the time it was quite uncommon. It was also somewhat controversial, because it sounded like we were saying maybe the most moral thing you could do is to go into finance, make a ton of money and then give it away. Some people found this idea really captivating and interesting, and other people found it bizarre and counterintuitive.

Either way, it got a lot of media attention. There's a significant number of people who are out there making money and then donating it and that's the main way that they have impact. Of course if we have people out there who are making money and donating it, there have to be people who are receiving that money as salaries and then doing really useful things. We can’t just have people making money and we can’t just have people doing direct work. You need both people and money in most cases in order to make a nonprofit function.

We think about this in terms of your comparative advantage and also how funding-constrained an area is. I'll just go through those in turn. There are some people who are much better placed to make money and give it away than they are to have a direct impact. For example, I know some people who've chosen to go earning to give who are extremely good at maths. They are very good at solving mathematical puzzles and they have a massive personal passion for working in finance.

For them, they can potentially be making millions of dollars a year, doing the thing that they love, and donating most of that money to effective nonprofits, supporting five, 10, 15, possibly even 20 people to do direct work in their place. On the other hand, there are other people, and perhaps you're an example of this, who are much better placed to do directly useful work, spreading really important ideas about how we can guide the future of humanity in a positive direction, than they are to make a whole lot of money.

I don't think that I could really make six figures with the skills I've built. I'm probably much better placed to be doing directly useful research and promoting ideas than I am to make money and support other people to do that in my place. If you're someone who can make seven figures and donate more than a million dollars a year, then probably you should be seriously thinking about earning to give as a way of making a difference.

The other element here is, depending on what problem you want to solve, there are some problems in the world that are already flooded with money, where there are lots of donors who want to support people to work on those problems but almost no one they can find to hire. Then there are other problems where there's almost no money but lots of people who want to work in the area.

I think an example of the latter might be animal welfare, where there are a lot of people who'd really like to do direct work trying to improve the welfare of farmed animals and, I guess, pets as well, but there are relatively few wealthy funders backing them. You end up with this problem of having lots of nonprofessional volunteers working in the area and, at least in the past, not really enough money to support professionals to take it to the next level.

On the other hand, there's other areas and I think artificial intelligence is a little bit like this where there's a lot of really wealthy people who have realized that there's significant risks out there from artificial intelligence, especially superhuman artificial intelligence which might come in the future, but they're struggling to find people who have the necessary technical abilities to actually solve the problem.

If you're someone who has a background in machine learning and is potentially able to have really valuable insights that no one else can have about how to make artificial intelligence safe then probably we need your expertise more than we need extra money because there's just not that much that money can buy right now.

Ariel: I definitely want to come back to all of this, and I'm going to here in just a minute, but before we go too far into some of the different areas that people can go into: Brenton, I know you have a background in medicine. I know one of the examples that 80,000 Hours gives of people misunderstanding how they can do the most good is how many people choose to become doctors, though there are other paths that they could take that might actually help people more. I was hoping you could talk a little bit about the paradox there, since obviously we do need doctors. We don't want to discourage everyone from becoming a doctor.

Brenton: I suppose what's going on here is that we need a bunch more of lots of different things. We do need more doctors in the developed world, but we also need lots of people working on lots of problems, and the question is where you want the next additional talented and altruistic person to go. We actually looked into how much good we expect a doctor in the developed world will do. The answer is that they produce something like five to 10 years of quality life per year that they work. When I say quality life, I mean a measure that takes into account the amount you extend people's lives as well as the amount that you improve their lives by improving their health in some way.

Ariel: Sorry, is that 10 years per person that they see or just 10 years of life in general?

Brenton: 10 years per person that they see would be amazing. It's about 10 years over the course of a year of working in a hospital in the rich world.
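To make the measure concrete, here is a minimal sketch of the quality-adjusted life year (QALY) idea Brenton is describing. The numbers in the example are illustrative assumptions, not estimates from the interview:

```python
def qalys(extra_years: float, quality_weight: float) -> float:
    """Quality-adjusted life years: one QALY is one year lived in full
    health (weight 1.0); a year at half health counts as 0.5 QALYs."""
    return extra_years * quality_weight

# Illustrative only: extending a patient's life by 4 years at a
# health-related quality weight of 0.75 yields 3 QALYs.
print(qalys(4, 0.75))  # 3.0
```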

Ariel: Okay.

Brenton: This is pretty good. I mean, most professions can't beat this. This is much better than most people will be able to do. But that said, you can probably do better. As a doctor who's giving away 10% of their income, you can probably do several times better than this, if you make sure that you give it to the best-evidenced organizations out there, like those evaluated by GiveWell. Then on top of this, you could work in other areas. So if I had a choice between a talented altruistic person going into medicine or a talented altruistic person working on some area to influence the long-run future, such as reducing existential risk, then I would prefer to have them working on the latter, making it so that more academic papers are released each year on reducing the risk of human extinction than on dung beetles.

Ariel: I assume, though, that if someone came to you and said, "But what I really want to do is help people directly," would you still try to encourage them to go into existential risk research, or would you say, "Okay. Then you're probably a better fit to be a doctor"?

Brenton: I mean probably not. Some people just feel like they only want to be a doctor and this is the only thing that they're going to do. I went to uni with a bunch of these people. For them, it very possibly is the best thing to do to become a doctor. I mean I would encourage them to make the best use out of these qualifications and they could do this through things like working in the developing world.

Say in Rwanda, there are something like six doctors per 100,000 people, and it would be better to work there than in Australia, where I work, where there are something like 270 doctors per 100,000 people. Or they could use the money that they earn to give, or they could work in public health, where I expect the impact per doctor is significantly better. I suppose the general point is that if someone is really passionate about something, and it's the only thing they could see themselves doing in order to be happy, then they should probably just do that thing.

Ariel: You've also been doing research into what the most pressing problems facing the world are, presumably like existential risks. Can you talk about what those issues are and why 80,000 Hours has taken an interest in doing this research yourselves?

Rob: Sure. We've done some of this ourselves, but the reality is we've been drawing on a lot of work that's been done by other groups in the Effective Altruism community, including GiveWell and the Open Philanthropy Project, and the Future of Humanity Institute at Oxford. I can talk a little bit about the journey that our ideas have taken over the last five or six years.

Ariel: Yeah, please.

Rob: One of the first things that we realized, as Brenton mentioned, is that if you're just trying to help people alive today, your money can go an awful lot further if you spend it in the developing world rather than the developed world, because there's just so much less money being spent per person in those countries to solve the problems they have. Also, the issues that you find in rural Kenya are by and large issues that have been partially or almost completely solved in developed countries.

The issues are neglected and we also know that they're solvable. We basically just need to scale up solutions to basic health problems and economic issues that have been resolved elsewhere in the world. You already have a pretty good blueprint. I think that's basically just correct: if you focus on the developing world rather than the developed world, you would increase your impact maybe 10 or 100 fold, unless you're doing something unusually valuable in a rich country.

Then moving beyond that, we thought: what about looking at other groups in the world that are extremely neglected, other than just people living in rural areas in poorer countries? If we look at that question, then factory farmed animals really stand out. There are billions of animals in factory farms that are killed every year, and they're treated really overwhelmingly horrifically. There's a small handful of farmed animals in the world that are raised humanely, or at least somewhat humanely, but 99 point something percent are raised in horrific conditions where they're extremely confined. They can't engage in natural activities, and they have body parts cut off just for the convenience of the farmers.

They're treated in ways that, if you treated your pet like that, would send you to jail. They're very numerous as well. Potentially, you could have an even larger impact by working to improve the welfare of animals and human attitudes towards animals. That issue is also extremely neglected. There's actually very little funding, and there are very few charities globally that are focused on improving farm animal welfare.

The next big idea that we had was thinking: of all of the people that we could help, of all of the groups that we could help, what fraction of them are actually alive today? We think it's basically only a small fraction. There's every reason to think that humanity could live for another 10 generations or 100 generations on Earth, and possibly even have our descendants alive on other planets. There's a long time period in the future in which humans and animals could have good lives.

There could just be so many of them at any point in time. That gets you to thinking about what you can do that will have very persistent impacts, that don't just help this generation but also help our children, our grandchildren and potentially generations in thousands of years' time. That brings us to the kind of things that the Future of Life Institute works on. We worry a lot about existential risks and ways that civilization could go off track and never be able to recover.

I think actually that broad class of things, thinking about the long-term future of humanity, is where just a lot of our attention goes these days and where I think that people can have the largest impact with their career. If their main goal is to have a large impact with their career then I think thinking about long-term impacts is the way to have the largest impact basically.

Ariel: How does that get measured? If you're trying to think that far out into the future, how do you figure out if your career is actually that impactful versus, say, a doctor being able to say, "I've saved X number of lives this year"?

Rob: The outcome obviously depends on what problem you're working on. If you're focused on developing world health, then you can talk about lives saved. If you're thinking about farm animals, then you can think about how many horrible lives you prevented from existing. When it comes to the long-term future, that's the hardest thing to measure, because you're not going to be around to see whether your actions actually prevented a disaster, and a disaster was unlikely anyway.

If you try to do things to reduce the risk of nuclear war and the next year there isn't a nuclear war, I mean that's only a very weak signal that you succeeded. It's basically no signal at all. I don't think we have a really snappy answer to this. Basically, we try to have people who are experts in these different problems so people who understand or think that they have a good understanding of what kinds of things could precipitate a nuclear war. They've spoken to the actors that are involved so they know what the Chinese or what the Russians are thinking and under what circumstances they might start a nuclear war.

Then they try to change the conditions on the ground today, to make everyone a bit less skittish or to reduce the number of nuclear weapons that are on hair-trigger alert, and hope that that is going to lower the risk of a global catastrophe. This is one of the ways in which this problem area is less promising than others: it's quite hard to measure success, and it's hard to know whether the things that you're doing are really helping and by how much. That makes it harder to do a really good job.

Ariel: I'd like to stay on this topic for a little bit, not the measurements but these issues, because obviously they're very important to the Future of Life Institute. I think most of our listeners are aware that we're very concerned about artificial intelligence safety. We're very concerned about nuclear weapons. We also worry about biotechnology and climate change. I was hoping you could take each of those areas individually and consider different ways that people could pursue either careers or earning-to-give options for these various fields?

Rob: Sure. There's just so much to say about each of these, and that's one reason why I've started our podcast, where we have conversations of between one and three hours with experts in these areas, to really pick their brains about all the different ways you could try to make a difference, compare the pros and cons, and offer really concrete advice to people. If this is a topic that's interesting to you, if you're considering pursuing a career to reduce the risk of a global catastrophe and ensure the future of humanity is good, then I definitely recommend subscribing to the 80,000 Hours podcast.

We've got a lot of really long episodes coming out about these topics. That's my pitch, but I'll try to give you a taste of the different options within these. On artificial intelligence safety, we've had a couple of interviews about this one and we've got quite a lot of articles on the site. Broadly speaking, there are two different classes of careers that you can take here. I suppose actually three. One would be to specialize in machine learning or some other kind of technical artificial intelligence work, and then use those skills to figure out not so much how we can make artificial intelligence more powerful, but how we can make artificial intelligence aligned with human interests.

I had a conversation with Dr. Dario Amodei at OpenAI. He's a really top machine learning expert and he spends his time working on machine learning basically in a similar way to how other AI experts do but his angle is how do we make the AI do what we want and not things that we don't intend? I think that's one of the most valuable things that really anyone can do at the moment.

Then there's the policy and strategy side. I had a conversation with Miles Brundage, a researcher at the Future of Humanity Institute at Oxford, about this class of careers. This is basically trying to answer questions like: how do we prevent an arms race between different governments or different organizations, where they all want to be the first one to develop a really intelligent artificial intelligence, so they scrimp on safety? They scrimp on making it do exactly what we want so that they can be the first one to get there. How would we prevent that from happening?

Do we want artificial intelligence running military robots, or is that a bad idea? I guess my view is that it's probably a bad idea, though there are some people who don't agree with that. Do we want the government to be more involved in regulating artificial intelligence or less involved? Do we want it to be regulated in some different way? All of these kinds of policy and strategy questions ... It's helpful to have a good understanding of the machine learning technical side if you're doing that.

You can also approach this if you just have a good understanding of politics, and of policy, and political economy and economics, and law, and that kind of thing. That's the second class of careers, where you can potentially work in government or the military or think tanks or nonprofits, that kind of thing. Then, as you said, the third class is earning to give: trying to make as much money as you can and then supporting other people to do these other kinds of work.

I think at the moment that wouldn't be my top suggestion for people who wanted to work on artificial intelligence because there's already quite a lot of large earners who are interested in supporting this kind of work, but there are some niches in artificial intelligence safety work that don’t have a whole lot of money in them yet, so you could potentially make a difference by earning to give there.

Actually, I forgot. There is a fourth class, which is doing supporting roles for everyone else: things like communications, and marketing, and organization, and project management, and fundraising operations. All of those kinds of roles can actually be quite hard to find skilled, reliable people for. If you have one of those other kinds of skills, possibly even web design, then you can just find an organization that's making a big difference and throw your skills behind them.

Ariel: I mean, that's definitely something that I personally have found a need for: more people who can help communicate, especially visually.

Rob: Yeah, definitely. Our web designer left a couple of years ago, and then we spent about six months trying to find someone who was suitable to do that well, and we didn't find them. Fortunately, the original web designer came back. It can be surprisingly hard to find people who have really good comms ability, or can handle media, or can do art and design. If you have one of those skills, you should seriously consider just applying to whatever organizations you admire, because you can potentially be much better than the alternative candidate they'd hire, or potentially they wouldn't hire anyone. Shall we move on to nuclear weapons and biotech and climate change?

Ariel: Yes. Especially nuclear weapons. I'm really interested in what you have to say there, because we are an organization that is attempting to do stuff with nuclear weapons, and even for us it's difficult to have an impact. I'm curious how you suggest individuals can also help.

Rob: Unfortunately, this is probably the one that we know the least about. It's something we want to look at more over the next six months. I've got an interview in a couple of weeks with someone who works on nuclear anti-proliferation at the Nuclear Threat Initiative, and I was hoping to get a lot of information there. I guess, broadly speaking, one option is going into the military if you have a shot at getting into the strategic side of nuclear weapons control. Then there are groups like the Nuclear Threat Initiative and the Skoll Global Threats Fund. It is very tricky.

I guess I would say a lot of people who work on nuclear weapons are, in my view, too focused on preventing terrorists from getting nuclear weapons, and perhaps also on smaller states like North Korea. The thing that I'm most worried about is an accidental all-out nuclear war between the US and Russia, or the US and China, because that has the potential to be much more destructive than a single nuclear weapon going off in a city.

I guess that's not a super strong view, because you have to weigh up which of these things is more likely to happen, but I'm very interested in anything that can promote peace between the United States and Russia, or the United States and China. A war between those powers or an accidental nuclear incident seems like the most likely thing to throw us back to the Stone Age or even pre-Stone Age.

Ariel: Well, in that case, I will give a couple quick plugs for the International Campaign to Abolish Nuclear Weapons (ICAN), which played a huge role in the treaty that was just passed at the UN to ban nuclear weapons. And also, we've done a lot of work with the Union of Concerned Scientists; they're a bigger organization, so they might have opportunities for people. They focus a lot on things like accidental nuclear war and hair-trigger alert.

Rob: Yeah. To be honest, I'm not that convinced by total disarmament as a way of dealing with the threat from nuclear weapons. The problem is, even if you get the US and China and Russia to destroy all of their nuclear weapons today, they would always be within a few months of being able to recreate them. And the threat that another country might rebuild its nuclear arsenal before you do might actually make the situation even more unstable.

The things that I would focus on are ensuring that they don't get false alarms, where other things trigger warnings that they're suffering a nuclear attack, and trying to increase the amount of trust between the countries in general, and the communication lines, so that if there are ever false alarms, they can communicate as quickly as possible and defuse the situation. Actually, the other one is making sure that these countries can always retaliate, even at a delay.

Russia is actually in a tricky situation at the moment, because its nuclear technology is quite antiquated, and they are at risk of the US basically destroying all of their land-based nuclear weapons. They have a very short period of time between when they are notified about a potential nuclear attack and when the nuclear weapons would hit their own nuclear silos on land. And they don't have many nuclear submarines that they could use to fight back at a delay. Of course, you can hit nuclear silos on the ground and just disable them so they can't retaliate, but it's much harder to disable nuclear submarines; they can basically always retaliate, even weeks or months later.

One interesting thing that would be really helpful would just be to give the Russian Federation nuclear submarines. I mean they'll never accept nuclear submarines taken from the United States but I'd be really happy if they would actually build some because then they would always know that they could offer a full retaliation even weeks later, and so they don't have to always be on hair-trigger alert.

They don't have to retaliate within a few minutes of receiving a warning in order to make sure that they can retaliate at some point. The other thing would be to improve their monitoring ability: to give them more satellites and better radar so that they can see nuclear weapons incoming to Russia sooner, and so they have a longer lead time before they have to decide whether to retaliate. It's interesting. This stuff is not so much about disarmament but about having, in a way, better nuclear technology, and I think that's another direction that you can go. I don't think that the US or China or Russia are going to disarm, and I'm not sure, even if they did, that it would be that helpful. So I would focus on other approaches.

Ariel: Yeah, I would still advocate for decreasing the number of nuclear weapons.

Rob: Yeah, I mean, they should do that anyway, or at least they should decrease the number of land-based nuclear weapons, because it's basically just a waste of money. They have far more than they actually need. As far as I can tell, at least in the United States, it's just a boon to the nuclear industry that wants to build more and more of this nuclear machinery, and it just costs the taxpayer a lot of money without any increase in security.

I certainly agree, it would be fantastic if we could get nuclear weapons taken off of hair-trigger alert. I think China, at least historically, has had far fewer nuclear weapons that are able to respond really quickly. Their approach is that if they're attacked with nuclear weapons they will potentially spend days considering what their response is going to be under a mountain and then retaliate with some delay once they've fully figured out who attacked them, and why, and what they should do. That's a much safer situation than having nuclear weapons that can be fired with a few minutes notice.

Ariel: The Union of Concerned Scientists has come out with some reports though that indicate they think that the Chinese may be changing their policy.

Rob: I have heard that as well, that they're modernizing, which in this case means making things worse. But at least historically there has been another way of dealing with this. Again, I'm not sure about the political practicalities of that. To be honest, this isn't so much my area of expertise. Maybe you should get someone from the Nuclear Threat Initiative on to talk about these careers, but it's a very interesting topic.

Ariel: Yes. It's a really interesting challenge; a depressing challenge but an interesting one.

Rob: Yeah. So then we’ve got biotech and climate change?

Ariel: Yes.

Rob: Okay, so biotech. The risks here are primarily that we would either deliberately or accidentally produce new diseases, using synthetic biology or disease breeding. I had a two-and-a-half hour long conversation with Howie Lempel, who was a project officer working on these kinds of risks at the Open Philanthropy Project, so if you're interested in this, I strongly recommend listening to that episode and then applying for coaching so we can give you more information.

Broadly speaking, I think the best opportunities here are in early surveillance of new diseases. At the moment, if there's a new disease coming out, a new flu for example, it takes us quite a long time to figure out that that's what's happened, because obviously people come into the hospital with flu symptoms all the time. We don't typically take assays from them to figure out whether it's a new strain of flu or an old strain of flu. It takes a long time for enough people to be dying or showing unusual symptoms for us to realize that there's something unusual going on and then start actually testing.

And when it comes to controlling new diseases, time is really of the essence. If you can pick it up within a few days or weeks, then you have a reasonable shot at quarantining the people, following up with everyone they've met, and containing it. And we have successfully done that in a couple of cases. Older people in your audience might remember SARS, Severe Acute Respiratory Syndrome, which spread through Hong Kong and Singapore, I think around 2003, 2004.

The authorities there were pretty on the ball; they caught it early enough, and they managed to follow up with everyone that the people who had the disease had met, and to contain it. Even though it was a very deadly disease, it didn't end up killing that many people. But if it had taken them an extra month to find out that there was this new disease spreading, it might have reached too many people for it to be practical to follow up with all of them and bring them all into hospital, because there wouldn't be enough beds for them.

At that point, really, it's like once a fire gets out of control: it just becomes massively harder to contain. You need to catch a fire when it's only in one part of a room, before it spreads to the whole building. Any technologies that we can invent, or any policies that we can make, that allow us to identify new diseases before they've spread to too many people are going to help with both natural pandemics, where there's significant risk every year that we're just going to have a new strain of flu or other kinds of new diseases that could create big problems, and also any kind of synthetic biology risks, or accidental releases of diseases by biological researchers.

Ariel: Those are the risks, but perhaps lesser known among people is that FLI is also looking at existential hope, and I think biotech offers good opportunities for that as well. Are there career paths you recommend for people who want to do the most good that way, health-wise or anything else?

Rob: Interesting. This isn't as much my area. The suggestions that I've heard there, I guess there's research into longevity, so trying to slow down aging so that people might hope to live significantly longer, potentially hundreds of years. I guess in the very long term, maybe even living for thousands of years.

That is maybe good in itself because people don't want to die. Most people would rather live longer than they're going to. It's also good in that if people expect to live hundreds of years then it will make them more cautious about the future and more concerned about where humanity is going because they might actually benefit themselves from it. So there's some indirect effects there that could be positive in my view.

There are also other things I've heard about, like human enhancement. You could try to use biotechnology to make people more moral, to make them less selfish and less violent and less cruel. I don't know how practical that is, or whether that's something that's going to come anytime soon, but if it were possible to make the next generation more moral than the present generation, that seems like it would be really helpful in terms of guiding humanity in a positive direction in the long term. But of course there are pretty big problems that you can immediately see there. For example, if you're the Chinese government and you can just tweak the knob on the next generation's personalities, then you can just make them very compliant and unwilling to ever rebel against the existing system. There are both potentially big upsides and big downsides.

Ariel: Okay.

Rob: Then climate change?

Ariel: Yeah, climate change.

Rob: I think Brenton wanted to chime in.

Ariel: Yes, please do.

Brenton: Climate change seems like quite a big problem. In this framework where we try to assess which problems are the most important to work on, as Rob alluded to, we try to think about how solvable they are, how large they are in scale, that is, how big a problem it is, and finally how neglected they are.

Of the ones that we've listed here, climate change actually does the worst when you look at the standard case. This is probably because it does badly on neglectedness: there's something like $300 billion or so spent on this problem per year, because it is such a large problem. However, a much more neglected case, and one that we are really worried about, is the extreme risks of climate change. If you look not just at the median outcome but at some of the worst forecasts, you get the situations where most of the damage comes from.

There's a Wagner and Weitzman paper which suggests that there's about a 10% chance of us being headed for warming of more than 4.8 degrees Celsius, or a 3% chance of us being headed for warming of more than 6 degrees Celsius. These are really disastrous outcomes. This assumes about 560 ppm, which it seems like there's a pretty decent chance of us getting to.

I suppose our take on this is that if you're interested in working on climate change, then we're pretty excited about you working on these very bad but non-median scenarios. How do you do this? The first answer is that it's a bit hard. Sensible things to do would be improving our ability to forecast; thinking about the positive feedback loops that might be inherent in Earth's climate; and thinking about how to enhance international cooperation. Then another angle on this is doing research into geoengineering: if it turned out that we were in a disastrous scenario and we were getting warming of something like 5 degrees, what could we do about that?

There are a few options that are pretty scary to think about, like trying to throw a dust of calcium carbonate up into the stratosphere to reduce the amount of sunlight that's getting through to the Earth. But these might be the kind of things that we need to consider, and the kind of things where it would be really good to have good research now, before we're in a situation where we've got a very badly warmed Earth, where we maybe have problems with political systems, and where countries aren't seriously taking into account how bad it could be to do things like geoengineering that could seriously mess with the world's climate even more. Getting that research done now seems sensible.

Ariel: Are there timeline estimates for when these potentially catastrophic temperatures could be hit?

Rob: I'm trying to think. I think we're talking 50 to 150 years here, unless we get extremely unlucky and hit some intense feedback loops really fast. This is more towards the later part of this century. I'll just add some other comments on climate change. It's one that we know a bit less about because, as Brenton said, there's already hundreds of billions of dollars being spent on tackling climate change every year, so it doesn't seem as extremely neglected as some of these other issues.

I worry a bit about geoengineering. I think it could end up being extremely helpful in cases where maybe climate change turns out to be much worse than we thought and we can try to slow it down or contain it a bit. But it also creates serious problems itself. Geoengineering is actually disturbingly cheap, such that any medium-sized country could run an almost global-scale climate engineering project itself. That means that there's a real risk that it will be done too quickly and too recklessly, because South Korea could do it, Australia could do it, and they don't really need anyone else to agree. They can just go ahead and start releasing these chemicals into the atmosphere themselves.

And inasmuch as we develop this technology and it becomes acceptable, I'm not sure whether it reduces the risk from normal climate change more than it increases the risk of a single country somewhat foolishly doing it just because one leader thinks that it's a good idea. If you want to do other stuff that's potentially high impact on climate change but seems a bit less likely to backfire, you could of course do research into the risks of geoengineering, thinking that we might use it at some point in the future anyway, so better to be prepared and to be aware of the downsides.

But it also does just seem like solar power and storage of energy from solar power is the stuff that's going to have the biggest impact on emissions over the next 100 years or at least the next 50 years. It's already having a pretty large impact. It's getting close to cost parity in a lot of cases. Every year the cost of batteries gets cheaper, the cost of solar panels get cheaper and just in more and more places in the world, it becomes more sensible to build solar panels than to build coal plants.

Anything that can speed up that transition, any new technologies you can invent that make it profitable to replace coal plants with solar panels, I think makes a pretty big contribution. And investments in solar R&D in the past look like really good investments today.

Ariel: We've looked a lot at suggested career paths. I'm wondering, especially when you mention things like coal plants, are there career paths that you discourage?

Brenton: Yeah, there are career paths we discourage. We used to have, in a talk we did, this lineup of several careers that we didn't encourage, and it was this really depressing game of career Bingo: you'd go through them and various people would be upset. I suppose the answer is that we discourage quite a lot of careers. We think that people should be trying to help other people with their career, basically because it's a good thing to do, and because there are pretty good reasons to think that it increases how satisfied you are with your career.

A lot of careers just don't help people, and that's almost a good enough reason in and of itself for us not to be excited about them. On top of that, most jobs out there aren't working on any of our priority areas, or they're working on things where people haven't tried to think about how large the problem is that they're trying to tackle; therefore I encourage people to try to work on problems that are quite neglected. Then, on top of this, it seems like there are a few careers that are just dominated by similar options. An example of this is that you could be thinking about earning to give, and I think in this case consulting beats investment banking in almost every case. You might be particularly excited about investment banking, but otherwise the earnings are similar to consulting, and in consulting you learn a bunch of other skills which you can then take to later careers that might be higher impact than consulting is.

Another example like this is going into corporate law for earning to give. Again, it takes quite a long time to get there, and the skills that you get aren't very transferable. This is the case with investment banking too, and consulting just seems to be better. Then, on top of this, there are a bunch of careers that we think just do harm, and we've written an article on this called "What are the 10 Most Harmful Jobs" because we're certainly not above clickbait.

All of these seem pretty bad. Just scanning through it now, we've got: marketing R&D for compulsive behaviors, factory farming, homeopathy, patent trolls, lobbying for rent-seeking, weapons research, fundraising for a charity that achieves nothing or does harm, forest clearing, and tax minimization for the super rich. All seem pretty robustly bad.

Rob: So if any of your listeners were planning on going into tobacco marketing, then probably don't do that.

Brenton: Please hold off.

Rob: But I would imagine they probably have more positive intentions than that.

Ariel: Probably. I want to go back to something you were saying earlier, comparing consulting to these other careers. If you think investment banking sounds interesting, do you then go into consulting for finance in general, or what type of consulting work are you talking about? Because that can be broad.

Brenton: Sure. The case that I've got in mind is someone who's interested in earning to give, looks at a bunch of different careers and how much you can earn at various points, and then concludes that investment banking would be a sensible thing to go into. In this particular case, I think the earnings in consulting and investment banking would look similar, but consulting just strictly dominates on these other things that we care about, like your ability to take the skills somewhere else. The kind of things that I've got in mind are just for-profit strategy consulting or, obviously, for-profit investment banking.

Ariel: I have a science background, but I actually started out doing advertising and marketing; I worked as a writer and I did a lot of creative things. It was honestly a lot of fun. I was wondering what advice you give people who are interested in more creative pursuits, or who … I guess the second part of my question is, I have also found entertainment can be hit or miss. Sometimes it's just a mind-sucking waste of time, but sometimes it actually does help you escape from reality for a little while, or it helps spread ideas that can later help society. I was just curious how you advise people who are interested in more creative fields.

Rob: I have a podcast episode, or at least it was a pre-podcast episode, just an interview I did with one of our freelance artists about a year ago. I think there are a couple of different ways you can think about this. I'd say there are three broad ways that you can try to do good through creative arts. One is to just try to make money and give it away: take the earning to give path. There, it just depends on whether you actually have opportunities to make any significant amount of money. Most artists, of course, are not getting rich, but a few of them, the most talented ones, do have reasonable prospects of making money.

The second one would be to try to entertain people and do good directly. You know, I watch Game of Thrones and I love Game of Thrones. I guess it's good that I have a good time watching Game of Thrones, and that would be another angle: just trying to make people happy. That one, I think we're a bit more skeptical of, because it doesn't seem like it has very long-term impacts on human well-being.

I listen to music all the time: someone makes a great house remix, I listen to it, and I enjoy it. I get immediate happiness in my brain, but it doesn't seem like this is helping future generations. It doesn't seem like it's helping the worst-off people in the world in particular, or anything like that. It doesn't seem very highly leveraged. Then you've got a third angle, where you try to do good through art somewhat indirectly.

You could try to make, for example, documentaries that promote really important ideas and change people's attitudes. You could try to tell stories that open people's eyes to injustice in the world that they weren't previously aware of, or you could produce marketing materials for a really important organization that's doing valuable work, perhaps a nonprofit like the Future of Life Institute. Or you can run a podcast like this one, which has a creative element to it and helps to spread important ideas and draw attention to organizations that are doing good work.

I think there we’re actually quite positive because at least when it comes to the Effective Altruism community, we're fairly short on creative skills. We don't tend to have people who are great at producing music or songs or visual design or making beautiful pieces of art or even just basically beautiful functional websites. I'm always really enthusiastic when I find someone who can do visual design or has a creative streak to them because I think that's a very important complementary skill that almost any campaign or organization is going to need at some point to some extent.

Maria, the freelance artist who I interviewed about this, produces lots of designs for the 80,000 Hours website, and it means that more people are interested in reading our content, more people come to the site, it looks more professional, and it increases our audience. That's all extremely valuable.

Ariel: Excellent. I'm going to completely switch gears here. Rob, I know you're especially interested in these long-term multigenerational indirect effects. Can you talk about, first just what that means?

Rob: Sure. As I was saying earlier, we think that one of the highest leverage opportunities to do good is to think about how we can help future generations. If you're trying to help people and animals hundreds of years, thousands of years in the future, it's not really possible to help them directly because they don't exist yet. You have to help them through a causal chain that involves helping or changing the behavior of someone today and then that’ll help the next generation and then that’ll help the next generation and so on.

I have a long talk on YouTube where I think about this framework of long-term indirect effects and wonder what stuff we could do today that would really change the trajectory of civilization in the long term. It's quite a tricky issue. I can already feel myself slightly tying myself in knots with this answer, trying to make sense of it. But I'll just try to run through some of the thinking that people have about this issue of long-term indirect effects and some of the lessons that have come out of it.

One way that people might try to improve the long-term future of humanity is just to do very broad things that improve human capabilities, like reducing poverty, improving people's health, making schools better, and so on. I think that kind of thing is likely to be very effective if the main threats humanity faces come from nature: diseases, asteroids, supervolcanoes, that kind of thing.

In that case, if we improve our science and technology, and if we improve our education system, then we're better able to tackle those problems as a species. But I actually think we live in a world where most of the threats humanity faces come from humanity itself. We face threats from nuclear weapons. We face threats from climate change, which is caused by us. We face threats from diseases that we might ourselves invent and spread, either deliberately or unintentionally.

And in a world where the more science and technology we develop, the more power we have to harm ourselves, to basically destroy our own civilization, it becomes less clear that broadly improving human capabilities is such a great way to make the future go better. If you improve science and technology, you improve our ability to solve problems, but you also mean we're creating new problems for ourselves more quickly. We're inventing more and more powerful ways of potentially just ending the human story.

For that reason I tend to focus on differential technological development. I think about which technologies we can invent as soon as possible that disproportionately make the world safer rather than more risky. For example, I think it's great to improve the technology to discover new diseases quickly and to produce vaccines for them quickly, but I'm less excited about generically pushing forward all of the life sciences, because I think there are a lot of potential downsides there as well.

I also think a lot about human values, because it's harder to see how those could backfire. I think it's really useful if we can make people care about the welfare of people in other countries, or about animals in factory farms, or about the welfare of future generations, so that we're more likely to act in a prudent and responsible way that shows concern not just for ourselves, our families, and our friends, but for all beings. That seems fairly robustly valuable, and it could improve the long-term future, especially if those values are then passed down to future generations as well.

Ariel: Alright, so that's pretty abstract. How would you suggest that people actually get involved or take action here?

Rob: Yeah, that's fair. It's a pretty abstract talk, and a lot of thinking in this area is pretty abstract. To try to make it more concrete: earlier we were talking about specific careers you might pursue to work on global catastrophic risks and risks from new technologies. We've got the podcast, as I mentioned, which goes into more detail on specific causes you might study, PhDs you might do, and places you might work. Another option here, another way that we can robustly prepare humanity to deal with the long-term future, is just to have better foresight about the problems we're going to face.

There are really good people in psychology, in the intelligence services, and in government trying to figure out how we can get reliable intelligence about what threats we're going to face next year, in five years, and in ten years, what things we might do to tackle them, and whether those things are actually going to work, so that we end up not with silly individual opinions but with actually robustly good answers.

Some people in your audience might have heard of Professor Philip Tetlock, who has spent decades studying people's predictions about the future and figuring out under which circumstances they're accurate. What does it even mean for a prediction to be accurate? How do you measure that? What kinds of errors do people systematically make? He's been funded lately by IARPA, the US intelligence community's R&D funding agency.

They had thousands of people participate online in these prediction contests. Then they studied whose judgment is reliable and on what kinds of questions we can reliably predict the future. We have a profile coming out later this week where we describe how you can pursue careers of this kind: what you'd study at the undergraduate or postgraduate level, and what kinds of labs you'd join to try to improve human decision-making and foresight.

I think that's a very concrete thing you can do that puts humanity in a better position to tackle problems in the future: being able to anticipate those problems well ahead of time, so that we can actually dedicate some resources to averting them.
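For readers curious what "measuring" forecast accuracy looks like in practice: Tetlock's tournaments scored forecasters with the Brier score, the mean squared error between the probabilities a forecaster stated and what actually happened. Here is a minimal Python sketch; the forecasts and outcomes are made-up numbers, purely for illustration.

def brier_score(forecasts, outcomes):
    # Mean squared error between stated probabilities and binary outcomes.
    # Lower is better; answering 50% on every question scores 0.25.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: the probability assigned to each event
# happening, and whether it actually happened (1) or not (0).
forecasts = [0.9, 0.1, 0.2]
outcomes = [1, 0, 0]
print(brier_score(forecasts, outcomes))  # ~0.02: confident and mostly right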

Ariel: Okay, so I'll give another plug. I don't know if you've visited Anthony Aguirre's site Metaculus?

Rob: I have. Only briefly though.

Ariel: I know that's one of the things he's working on with the site. That's definitely a fun one if you want to practice your predictive skills. I'm pretty sure it's just an opportunity to either prove that you're good at predicting things or test yourself, compare yourself to other people, and try to improve the skill.

Rob: Professor Tetlock's version of this is called the Good Judgment Project, and I've signed up to that. You can go into their interface and make predictions about all kinds of things: elections, geopolitical events, economic events. And I'll tell you what, it's a chastening experience to actually go in there and give concrete percentage probabilities for all kinds of different outcomes. It makes you realize just how ignorant you are.

If you don't realize that upfront, then you'll realize it once you start getting the answers to the various things you've predicted. Once you know it's on the record, in a database, and you're going to be told when you were wrong, I think some of the bravado that people naturally have goes out the window.

Ariel: Yeah, I haven’t tried that one but I've opened Metaculus quite a few times and not had the courage to actually make a prediction.

Rob: I'll tell you what, I do that but then I still go down to the pub and offer very strong opinions about everything.

Ariel: Exactly. Is there anything else that either of you want to add?

Brenton: Yeah. I suppose one plug to give is that a whole bunch of the thinking we do is on our website, and you can see lots of it there. As I said, there's our career guide, which goes through a bunch of considerations you need to have in mind when you're thinking about how to have an impactful career. There are the problem profiles, where we think about various problems and how much difference another person or another dollar working on them would make. And we think about specific careers, so a particular career path you might be interested in may well be on there too. It's worth looking at it from that angle.

Rob: We've got an article about almost every question we discussed here, so if you feel like we bungled a lot of our answers and tied ourselves in knots, we've almost certainly written an article where our considered view is expressed properly, which you can find. Check out 80000hours.org. We've got the main career guide, which goes through all of our key content. And if you're still keen to get more personalized advice, say to figure out how you're going to reduce the risk from artificial intelligence by working in policy, that's the kind of question that's really good to deal with one-on-one, once we understand your specific opportunities and skills, so definitely go and apply for coaching.

Of course, I've got lots of interviews on topics very similar to what the Future of Life Institute works on. We've got upcoming episodes on risks from biotechnology, risks from artificial intelligence, nuclear security, and climate change as well. Subscribe to the 80,000 Hours podcast and you can enjoy a couple of hours of my voice every week.

Ariel: We'll definitely add a link to that as well, and Rob, Brenton, thank you so much for joining us today.

Rob: It's been a great pleasure.

Brenton: Thanks for having us.
