The Effective Altruism Movement
Edit: The following article has been updated to include more highlights as well as links to videos of the talks.
How can we more effectively make the world a better place? Over 1,000 concerned altruists converged at the Effective Altruism Global conference this month in Berkeley, CA to address this very question. For two and a half days, participants milled around the Berkeley campus, attending talks, discussions, and workshops to learn more about efforts currently underway to improve our ability to not just do good in the world, but to do the most good.
Those who arrived on the afternoon of Friday, August 5 had the opportunity to mingle with other altruists and attend various workshops geared toward finding the best careers, improving communication, and developing greater self-understanding and self-awareness.
But the conference really kicked off on Saturday, August 6, with talks by Will MacAskill and Toby Ord, who both helped found the modern effective altruism movement. Ord gave the audience a brief overview of the centuries of science and philosophy that provided the base for effective altruism. “Effective altruism is to the pursuit of good as the scientific revolution is to the pursuit of truth,” he explained. Yet, as he pointed out, effective altruism has only been a real “thing” for five years.
MacAskill took the stage after Ord to highlight the movement’s successes over the past year, including coverage by such papers as the New York Times and the Washington Post. And more importantly, he talked about the significant increase in membership they saw this year, as well as in donations to worthwhile causes. But he also reminded the audience that a big part of the movement is the process of effective altruism. He said:
“We don’t know what the best way to do good is. We need to figure that out.”
For the rest of the two days, participants considered past charitable actions that had been most effective, problems and challenges altruists face today, and how the movement can continue to grow. There were too many events to attend them all, but there were many highlights.
Highlights From the Conference
When FLI cofounder Jaan Tallinn was asked why he chose to focus on issues such as artificial intelligence, which may or may not be a problem in the future, rather than mosquito nets, which could save lives today, he compared philanthropy to investing. Higher-risk investments have the potential for a greater payoff later. Similarly, while AI may not seem like much of a threat to many people now, ensuring it remains safe could save billions of lives in the future. Tallinn spoke as part of a discussion on Philanthropy and Technology.
Martin Rees, a member of FLI’s Science Advisory Board, argued that we are in denial about the seriousness of our risks. At the same time, he said that minimizing risks associated with technological advances can only be done “with great difficulty.” He encouraged EA participants to figure out which threats can be dismissed as science fiction and which are legitimate, and he encouraged scientists to become more socially engaged.
As if taking up that call to action, Kevin Esvelt talked about his own attempts to ensure gene drive research in the wild is accepted and welcomed by local communities. Gene drives could be used to eradicate such diseases as malaria, schistosomiasis, Zika, and many others, but fears of genetic modification could slow research efforts. He discussed his focus on keeping his work as open and accessible as possible, engaging with the public to allow anyone who might be affected by his research to have as much input as they want. “Closed door science,” he added, “is more dangerous because we have no way of knowing what other people are doing.” A single misstep with this early research in his field could imperil all future efforts for gene drives.
That same afternoon, Cari Tuna, President of the Open Philanthropy Project, sat down with Will MacAskill for an interview titled “Doing Philosophy Better,” which focused on her work with OPP and Effective Altruism and how she envisions her future as a philanthropist. She highlighted some of the grants she’s most excited about, which include grants to GiveDirectly, the Center for Global Development, and the Alliance for Safety and Justice. When asked about how she thought EA could improve, she emphasized, “We consider ourselves a part of the Effective Altruism community, and we’re excited to help it grow.” But she also said, “I think there is a tendency toward overconfidence in the EA community that sometimes undermines our credibility.” She mentioned that one of the reasons she trusted GiveWell was because of their self-reflection. “They’re always asking, ‘how could we be wrong?'” she explained, and then added, “I would really love to see self-reflection become more of a core value of the effective altruism community.”
The next day, FLI President Max Tegmark highlighted the top nine myths of AI safety, and he discussed how important it is to dispel these myths so researchers can focus on the areas necessary to keep AI beneficial. Some of the most distracting myths include arguments over when artificial general intelligence could be created, whether or not it could be “evil,” and goal-oriented issues. Tegmark also added that the best thing people can do is volunteer for EA groups.
During the discussion about the risks and benefits of advanced artificial intelligence, Dileep George, cofounder of Vicarious, reminded the audience why this work is so important. “The goal of the future is full unemployment so we can all play,” he said. Dario Amodei of OpenAI emphasized that having curiosity and trying to understand how technology is evolving can go a long way toward safety. And though he often mentioned the risks of advanced AI, Toby Ord, a philosopher and research fellow with the Future of Humanity Institute, also added, “I think it’s more likely than not that AI will contribute to a fabulous outcome.” Later in the day, Chris Olah, an AI researcher at Google Brain and one of the lead authors of the paper, Concrete Problems in AI Safety, explained his work as trying to build a bridge to futuristic problems by doing empirical research today.
FLI’s Richard Mallah gave a talk on mapping the landscape of AI safety research threads. He showed how there are many meaningful dimensions along which such research can be organized, how harmonizing the various research agendas into a common space allows us to reason about different kinds of synergies and dependencies, and how consideration of the white space in such representations can help us find both unknown knowns and unknown unknowns about the space.
Tara MacAulay, COO at the Centre for Effective Altruism, spoke during the discussion on “The Past, Present, and Future of EA.” She talked about finding the common values in the movement and coordinating across skill sets rather than splintering into cause areas or picking apart who is and who is not in the movement. She said, “The opposite of effective altruism isn’t ineffective altruism. The opposite of effective altruism is apathy, looking at the world and not caring, not doing anything about it . . . It’s helplessness. . . . throwing up our hands and saying this is all too hard.”
MacAulay also moderated a panel discussion called Aggregating Knowledge, which was significant not only for its thoughtful content about accessing, understanding, and communicating all of the knowledge available today, but also because it was an all-woman panel. The panel included Sarah Constantin, Amanda Askell, Julia Galef, and Heidi McAnnaly, who discussed various questions and problems the EA community faces when trying to assess which actions will be most effective. MacAulay summarized the discussion at the end when she said, “Figuring out what to do is really difficult but we do have a lot of tools available.” She concluded with a challenge to the audience to spend five minutes researching some belief they’ve always had about the world to learn what the evidence actually says about it.
Prominent government leaders also took to the stage to discuss how work with federal agencies can help shape and impact the future. Tom Kalil, Deputy Director for Technology and Innovation, highlighted how much of today’s technology, from cell phones to the Internet, got its start in government labs. Then, Jason Matheny, Director of IARPA, talked about how delays in technology can actually cost millions of lives. He explained that technology can make it less costly to enhance moral development and that, “ensuring that we have a future counts a lot.”
Robin Hanson, author of The Age of Em, talked about his book and what the future will hold if we continue down our current economic path while the ability to create brain emulations is developed. He said that if creating ems becomes cheaper than paying humans to do work, “that would change everything.” Ems would completely take over the job market and humans would be pushed aside. He explained that some people might benefit from this new economy, but it would vary, just as it does today, with many more people suffering from poverty and fewer gaining wealth.
Applying EA to Real Life
Lucas Perry, also with FLI, was especially impressed by the career workshops offered by 80,000 Hours during the conference. He said:
“The 80,000 Hours workshops were just amazing for giving new context and perspective to work. 80,000 Hours gave me the tools and information necessary to reevaluate my current trajectory and see if it really is best of all possible paths for me and the world.
In the end, I walked away from the conference realizing I had been missing out on something so important for most of my life. I found myself wishing that effective altruism, and organizations like 80,000 Hours, had been a part of my fundamental education. I think it would have helped immensely with providing direction and meaning to my life. I’m sure it will do the same for others.”
In total, 150 people spoke over the course of those two and a half days. MacAskill concluded the conference with another call to focus on the process of effective altruism, saying:
“Constant self-reflection, constant learning, that’s how we’re going to be able to do the most good.”