What We Were Up to in 2018
Anthony Aguirre was featured in an Atlantic article titled “Why Earth’s History Appears So Miraculous,” where he discussed existential risks and our lucky evolutionary path. Anthony attended Effective Altruism Global in San Francisco this June.
Jessica Cussins participated in the Global Governance of AI Roundtable in Dubai this February as part of the World Government Summit. In March, she gave a talk about preventing lethal autonomous weapons to a delegation of European policymakers. Jessica participated in the Partnership on AI’s Fair, Transparent, and Accountable AI working group in May, August and November. Also in May, she attended the “Cognitive Revolution Symposium” organized by Swissnex in San Francisco. In June, she attended the Foresight Institute’s “AI Coordination: Great Powers” strategy meeting in San Francisco; the “US-China AI Technology Summit” organized by The AI Alliance in Half Moon Bay; an “AI and Human Decisionmaking” workshop led by Technology for Global Security in Palo Alto; and “Effective Altruism Global” in San Francisco. In August, Jessica gave testimony on behalf of FLI at a State Senate Committee Hearing in Sacramento, California. In December, she spoke on an AI panel at the Foresight Institute’s Vision Weekend, “Toward Futures of Existential Hope”.
Ariel Conn kicked off 2018 speaking on a panel at a side event at the World Economic Forum in Davos, Switzerland, as part of the Global Challenges Foundation’s (GCF) effort to consider new forms of global governance to address catastrophic risks. She later participated in GCF’s New Shape Forum in Stockholm on the same subject. Ariel was also a keynote speaker talking about AI’s impact on society at the GEN Summit in Lisbon. This summer, she gave a statement on the UN floor on behalf of FLI regarding lethal autonomous weapons systems (LAWS), and she spoke at a UN-CCW side event hosted by the Campaign to Stop Killer Robots. Her comments at the UN were later incorporated into a resolution passed by the European Parliament in support of a ban on LAWS. She also organized and helped moderate a two-day meeting in Pretoria, South Africa for local and regional UN delegates, regarding the nuclear ban treaty and LAWS.
Victoria Krakovna kicked off 2018 with an AI strategy retreat co-run by the Future of Humanity Institute and the Center for Effective Altruism. Later in the spring, Victoria attended a workshop on AI safety at the Center for Human-Compatible AI (CHAI) at UC Berkeley. In the fall she ran a session at EA Global London on the machine learning approach to AI safety with Jan Leike, where they explored different research agendas.
Richard Mallah participated in Effective Altruism Global in Boston earlier this year, co-hosting a brief workshop on x-risk-related skills. He also participated in two of the Partnership on AI’s working groups: one on AI, Labor, and the Economy, and another on Safety-Critical AI. Richard attended the Foresight Institute’s “AI Coordination: Great Powers” strategy meeting in San Francisco, and he joined Lucas Perry for an interview with George Gantz before a Long Now Boston audience about FLI and the future of AI alignment. Richard also gave a talk on technical AI safety at the Partnership on AI’s “All Partners Meeting” in November.
Lucas Perry gave a talk at the Cambridge “Don’t Bank on the Bomb” forum in January, where he presented opportunities and strategies for individuals and institutions to divest from nuclear weapons-producing companies. In February, he participated in an Individual Outreach Forum with the Center for Effective Altruism, and in April he participated in Effective Altruism Global in Boston, co-hosting a workshop on x-risk-related skills with Richard Mallah. In October, Lucas joined Richard Mallah for an interview with George Gantz before a Long Now Boston audience about FLI and the future of AI alignment.
Max Tegmark started off 2018 giving talks on AI safety and his book, Life 3.0. Max spoke at the American Museum of Natural History, gave the 2018 Beyond Annual Lecture at Arizona State University, and spoke with Sam Harris and Rebecca Goldstein on the Waking Up podcast about the foundations of human knowledge and morality. Max spoke at Effective Altruism Global about AGI and nuclear disarmament, he debated the future of AI with Andrew Ng and Jim Breyer at the Microsoft CEO Summit, and he gave an invited talk titled “Beneficial Intelligence & Intelligible Intelligence” at the IJCAI/ECAI AI conference in Stockholm, Sweden. Lastly, Max spent much of the fall on a book tour throughout East Asia, giving talks on AI safety in South Korea, Japan and China.
As a nonprofit, FLI relies on donations from people like you to keep doing the awesome things we get to do. If you’d like to help out, please check out our Get Involved page.