The following article and infographic were originally posted on Futurism.
The most destructive device humanity has ever created is the nuclear bomb. It’s a technology capable of unparalleled devastation; it’s a technology that the United Nations classifies as “the most dangerous weapon on Earth.”
One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that’s not enough, it can throw the natural environment into chaos. We know this because we’ve used them before.
The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945, when a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly. Over the coming years, many more would succumb to radiation sickness. All in all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.
How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.
1,000 nuclear weapons are more than enough to deter any nation from nuking the US, but we’re hoarding over 7,000, and a long string of near-misses has highlighted the continuing risk of an accidental nuclear war, which could trigger a nuclear winter and potentially kill most people on Earth. Rather than trimming our excess nukes, we’re planning to spend $4 million per hour for the next 30 years making them more lethal.
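That hourly figure is easy to sanity-check: compounded over 30 years it comes to roughly a trillion dollars, the same order of magnitude as the upgrade cost cited elsewhere.

```python
# Back-of-envelope check of the spending figure quoted above:
# $4 million per hour, around the clock, for 30 years.
dollars_per_hour = 4_000_000
hours = 24 * 365 * 30                     # ignoring leap days
total = dollars_per_hour * hours
print(f"${total:,}")                      # $1,051,200,000,000 -- about $1 trillion
```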
Although I’m used to politicians wasting my tax dollars, I was shocked to realize that I was voluntarily using my money for this nuclear boondoggle by investing in the very companies that are lobbying for and building new nukes: some of the money in my bank account gets loaned to them and my S&P500 mutual fund invests in them. “If you want to slow the nuclear arms race, then put your money where your mouth is and don’t bank on the bomb!”, my physics colleague Stephen Hawking told me. To make it easier for others to follow his sage advice, I made an app for that together with my friends at the Future of Life Institute, and launched this “Brief History of Nukes” that’s 3.14 long in honor of Hawking’s fascination with pi.
Our campaign got off to an amazing start this weekend at an MIT conference, where our Mayor Denise Simmons announced that the Cambridge City Council has unanimously decided to divest their billion-dollar city pension fund from nuclear weapons production. “Not in our name!”, she said, and drew a standing ovation. “It’s my hope that this will inspire other municipalities, companies and individuals to look at their investments and make similar moves.”
“In Europe, over 50 large institutions have already limited their nuclear weapon investments, but this is our first big success in America”, said Susi Snyder, who leads the global nuclear divestment campaign dontbankonthebomb.com. Boston College philosophy major Lucas Perry, who led the effort to persuade Cambridge to divest, hoped that this online analysis tool would create a domino effect: “I want to empower other students opposing the nuclear arms race to persuade their own towns and universities to follow suit.”
Many financial institutions now offer mutual funds that cater to the growing interest in socially responsible investing, including Ariel, Calvert, Domini, Neuberger, Parnassus, Pax World and TIAA-CREF. “We appreciate and share Cambridge’s desire to exclude nuclear weapons production from its pension fund. Pension funds are meant to serve the long-term needs of retirees, a service that nuclear weapons do not offer”, said Julie Fox Gorte, Senior Vice President for Sustainable Investing at Pax World.
“Divestment is a powerful way to stigmatize the nuclear arms race through grassroots campaigning, without having to wait for politicians who aren’t listening”, said conference co-organizer Cole Harrison, Executive Director of Massachusetts Peace Action, the nation’s largest grassroots peace organization. “If you’re against spending more money making us less safe, then make sure it’s not your money.”
WHEREAS: Nations across the globe still maintain over 15,000 nuclear weapons, some of which are hundreds of times more powerful than those that obliterated Hiroshima and Nagasaki, and detonation of even a small fraction of these weapons could create a decade-long nuclear winter that could destroy most of the Earth’s population; and
WHEREAS: The United States has plans to invest roughly one trillion dollars over the coming decades to upgrade its nuclear arsenal, which many experts believe actually increases the risk of nuclear proliferation, nuclear terrorism, and accidental nuclear war; and
WHEREAS: In a period where federal funds are desperately needed in communities like Cambridge in order to build affordable housing, improve public transit, and develop sustainable energy sources, our tax dollars are being diverted to and wasted on nuclear weapons upgrades that would make us less safe; and
WHEREAS: Investing in companies producing nuclear weapons implicitly supports this misdirection of our tax dollars; and
WHEREAS: Socially responsible mutual funds and other investment vehicles are available that accurately match the current asset mix of the City of Cambridge Retirement Fund while excluding nuclear weapons producers; and
WHEREAS: The City of Cambridge is already on record in supporting the abolition of nuclear weapons, opposing the development of new nuclear weapons, and calling on President Obama to lead the nuclear disarmament effort; now therefore be it
ORDERED: That the City Council go on record opposing investing funds from the Cambridge Retirement System in any entities that are involved in or support the production or upgrading of nuclear weapons systems; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Cambridge Peace Commissioner and other appropriate City staff to organize an informational forum on possibilities for Cambridge individuals and institutions to divest their pension funds from investments in nuclear weapons contractors; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Board of the Cambridge Retirement System and other appropriate City staff to ensure divestment from all companies involved in production of nuclear weapons systems, and in entities investing in such companies, and the City Manager is requested to report back to the City Council about the implementation of said divestment in a timely manner.
The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.
The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was Toby Walsh’s, titled “Why the Technological Singularity May Never Happen.”
Walsh explained that, though general knowledge has increased, human capacity for learning has remained relatively consistent for a very long time. “Learning a new language is still just as hard as it’s always been,” he said by way of example. If we can’t teach ourselves how to learn faster, he doesn’t see any reason to believe that machines will be any more successful at the task.
He also argued that even if we assume we can improve intelligence, there’s no reason to assume it will increase exponentially and lead to an intelligence explosion. He believes it is just as possible that each generation of machines will gain only half as much intelligence as the generation before: intelligence would still increase, but it would converge to a limit rather than explode.
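Walsh’s halving scenario is just a geometric series, and a few illustrative lines (the numbers here are purely for the sketch) show why it stays bounded:

```python
# Illustrative sketch of Walsh's halving scenario: each generation's
# gain in intelligence is half the previous one's, so total capability
# follows the geometric series 1 + 1/2 + 1/4 + ... and converges to 2
# rather than exploding.

def capability_after(generations):
    gain, total = 1.0, 0.0
    for _ in range(generations):
        total += gain
        gain /= 2.0
    return total

print(capability_after(5))    # 1.9375
print(capability_after(50))   # effectively the limit of 2.0
```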
Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.
Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”
Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”
As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.
The afternoon portion of the talks was dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, and much more.
While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.
Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a human-imperceptible way that causes the system to misidentify it. Current methods rely on machines accessing huge quantities of images to reference and learn what any given image is. However, even the smallest perturbation of the data can lead to large errors. Li’s own research looks at unique ways for machines to recognize an image, thus limiting the errors.
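The kind of attack Li described can be illustrated with a toy linear classifier. This is a hypothetical sketch, not Li’s actual method: the model, its weights, and the four-pixel “images” are all made up, but the flip from one label to the other with a small, uniform perturbation is the real phenomenon.

```python
# Toy adversarial-perturbation demo against a linear "image classifier".
# All numbers are invented for illustration.

def classify(weights, image):
    """Return 'cat' if the linear score is positive, else 'dog'."""
    score = sum(w * x for w, x in zip(weights, image))
    return "cat" if score > 0 else "dog"

def perturb(weights, image, epsilon):
    """Shift every pixel by at most epsilon in the direction that
    raises the score -- small enough to look unchanged to a human."""
    return [x + epsilon * (1 if w > 0 else -1)
            for w, x in zip(weights, image)]

weights = [0.9, -1.1, 0.4, -0.2]   # toy model parameters
image = [0.2, 0.3, 0.1, 0.5]       # toy "image"; its score is negative

adversarial = perturb(weights, image, epsilon=0.2)
print(classify(weights, image))        # dog
print(classify(weights, adversarial))  # cat -- the tiny nudge flips the label
```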
Rubinstein’s focus is geared more toward security. The research he presented at the workshop is similar to facial recognition, but goes a step further, seeking to understand how small changes made to one face can lead systems to confuse the image with that of someone else.
The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president, Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.
Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?
Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.
After an hour of discussion, many suggestions for new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.
We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”
Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…
In the beginning
2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.
At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.
On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.
On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.
Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories by engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), six past presidents of the AAAI, and over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.
This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.
By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.
Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.
Other major events
Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.
Max spoke again, at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled, “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, MIT Effective Altruism, and the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.
In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).
Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.
Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.
September saw FLI and CSER co-organize an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety to the scientifically minded in Westminster, including many British members of parliament.
Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member, Stephen Hawking released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.
Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.
Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events. The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.
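As a toy illustration of what “aggregating predictions” can mean (a hypothetical sketch, not Metaculus’s actual algorithm), one simple crowd-forecasting rule pools individual probability estimates with a median, which resists a few extreme forecasters:

```python
# Hypothetical sketch of crowd-forecast aggregation -- NOT the method
# Metaculus uses. Each forecaster submits a probability for a yes/no
# question; the median is robust to a handful of outliers.

from statistics import median

def aggregate(forecasts):
    """Combine individual probability estimates into one crowd forecast."""
    return median(forecasts)

crowd = [0.60, 0.72, 0.65, 0.10, 0.70]   # one extreme outlier at 0.10
print(aggregate(crowd))                   # 0.65
```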
In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.
In the Press
We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:
The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.
Max, along with our Science Advisory Board member, Stuart Russell, and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.
Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.
Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.
Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story of nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.
In addition we had a few extra special articles on our new website:
Nobel-prize winning physicist, Frank Wilczek, shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer, Eric Gastfriend, wrote a popular piece, in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.
On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.
A New Beginning
2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!
Happy New Year!
Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.
This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the World?
Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.
Harry Shum, the executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego resulting in intelligence enhancement.
Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhance people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a schizophrenic healthcare system where experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.
Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.
As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”
We are excited to present our new xrisk news site! With improved layout and design, it aims to provide you with daily technology news relevant to the long-term future of civilization, covering both opportunities and risks. This will, of course, include news about the projects we and our partner organizations are involved in to help prevent these risks. We’re also developing a section of the site that will provide more background information about the major risks, as well as what people can do to help reduce them and keep society flourishing.
Some investments in nuclear weapons systems might increase the risk of accidental nuclear war and are arguably done primarily for profit rather than national security. Illuminating these financial drivers provides another opportunity to reduce the risk of nuclear war. FLI is pleased to support financial research about who invests in and profits from the production of new nuclear weapons systems, with the aim of drawing attention to and stigmatizing such productions.
On November 12, Don’t Bank on the Bomb released their 2015 report on European financial institutions that have committed to divesting from any companies related to the manufacture of nuclear weapons. The report also highlights financial groups who have made positive steps toward divestment, and it provides a detailed list of companies that are still heavily invested in nuclear weapons. With the Cold War long over, many people don’t realize that the risk of nuclear war still persists and that many experts believe it to be increasing. Here is FLI’s assessment and position of the nuclear weapons situation.
Here are some other interesting things we and our partners have done in the last few months:
- On September 1, FLI and CSER co-organized an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety in front of a veritable who’s who of the scientifically minded in Westminster, including many British members of parliament.
- Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety.
- Stephen Hawking answered the AMA questions about artificial intelligence.
- Our co-founder, Meia Chita-Tegmark wrote a spooky Halloween op-ed that was featured on the Huffington Post about the man who saved the world from nuclear apocalypse in 1962.
- Nobel-prize winning physicist, Frank Wilczek, shared a sci-fi short story he wrote about a future of AI wars.
- FLI volunteer, Eric Gastfriend, wrote a popular piece, in which he considers the impact of the exponential increase in the number of scientists.
- And two of our partner organizations have published their newsletters. The Machine Intelligence Research Institute (MIRI) published October and November newsletters, and the Global Catastrophic Risk Institute released newsletters in September and October.
The Future of AI: Opportunities and Challenges
This conference brought together the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and this list of research priorities). To facilitate candid and constructive discussions, there was no media present and Chatham House Rules: nobody’s talks or statements will be shared without their permission.
Where? San Juan, Puerto Rico
When? Arrive by evening of Friday January 2, depart after lunch on Monday January 5 (see program below)
Scientific organizing committee:
- Erik Brynjolfsson, MIT, Professor at the MIT Sloan School of Management, co-author of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
- Demis Hassabis, Founder, DeepMind
- Eric Horvitz, Microsoft, co-chair of the AAAI presidential panel on long-term AI futures
- Shane Legg, Founder, DeepMind
- Peter Norvig, Google, Director of Research, co-author of the standard textbook Artificial Intelligence: a Modern Approach.
- Francesca Rossi, Univ. Padova, Professor of Computer Science, President of the International Joint Conference on Artificial Intelligence
- Stuart Russell, UC Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.
- Bart Selman, Cornell University, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures
- Murray Shanahan, Imperial College, Professor of Cognitive Robotics
- Mustafa Suleyman, Founder, DeepMind
- Max Tegmark, MIT, Professor of physics, author of Our Mathematical Universe
Local organizers:
Anthony Aguirre, Meia Chita-Tegmark, Viktoriya Krakovna, Janos Kramar, Richard Mallah, Max Tegmark, Susan Young
Friday January 2:
1600-late: Registration open
1930-2130: Welcome reception (Las Olas Terrace)
Saturday January 3:
0900-1200: Overview (one review talk on each of the four conference themes)
• Ryan Calo (Univ. Washington): AI and the law
• Erik Brynjolfsson (MIT): AI and economics (pdf)
• Richard Sutton (Alberta): Creating human-level AI: how and when? (pdf)
• Stuart Russell (Berkeley): The long-term future of (artificial) intelligence (pdf)
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Optimizing the economic impact of AI
(A typical 3-hour session consists of a few 20-minute talks followed by a discussion panel where the panelists who haven’t already given talks get to give brief introductory remarks before the general discussion ensues.)
What can we do now to maximize the chances of reaping the economic bounty from AI while minimizing unwanted side-effects on the labor market?
• Andrew McAfee, MIT (pdf)
• James Manyika, McKinsey (pdf)
• Michael Osborne, Oxford (pdf)
Panelists include Ajay Agrawal (Toronto), Erik Brynjolfsson (MIT), Robin Hanson (GMU), Scott Phoenix (Vicarious)
Sunday January 4:
0900-1200: Creating human-level AI: how and when?
Short talks followed by panel discussion: will it happen, and if so, when? Via an engineered solution, whole brain emulation, or other means? (We defer questions about what will happen, whether machines will have goals, ethics, etc. to the 4pm session.)
• Demis Hassabis, Google/DeepMind
• Dileep George, Vicarious (pdf)
• Tom Mitchell, CMU (pdf)
Panelists include Joscha Bach (MIT), Francesca Rossi (Padova), Richard Mallah (Cambridge Semantics), Richard Sutton (Alberta)
1300-1515: Free play/breakout sessions on the beach
1515-1545: Coffee & snacks
1545-1600: Breakout session reports
1600-1900: Intelligence explosion: science or fiction?
If an intelligence explosion happens, then what are likely outcomes? What can we do now to maximize the probability of a positive outcome? Containment problem? Is “friendly AI” possible? Feasible? Likely to happen?
• Nick Bostrom, Oxford (pdf)
• Bart Selman, Cornell (pdf)
• Jaan Tallinn, Skype founder (pdf)
• Elon Musk, SpaceX, Tesla Motors
Panelists include Shane Legg (Google/DeepMind), Murray Shanahan (Imperial), Vernor Vinge (San Diego), Eliezer Yudkowsky (MIRI)
1930: banquet (outside by beach)
Monday January 5:
0900-1200: Law & ethics: Improving the legal framework for autonomous systems
How should legislation be improved to best protect the AI industry and consumers? If self-driving cars cut the 32,000 annual US traffic fatalities in half, the car makers won’t get 16,000 thank-you notes, but 16,000 lawsuits. How can we ensure that autonomous systems do what we want? Who should be held liable if things go wrong? How should we tackle criminal AI? What about AI ethics, and the ethical and legal frameworks for military and financial systems?
• Joshua Greene, Harvard (pdf)
• Heather Roff Perkins, Univ. Denver (pdf)
• David Vladeck, Georgetown
Panelists include Ryan Calo (Univ. Washington), Tom Dietterich (Oregon State, AAAI president), Kent Walker (General Counsel, Google)
1200: Lunch, depart
You’ll find a list of participants and their bios here.
Back row, from left to right: Tom Mitchell, Seán Ó hÉigeartaigh, Huw Price, Shamil Chandaria, Jaan Tallinn, Stuart Russell, Bill Hibbard, Blaise Agüera y Arcas, Anders Sandberg, Daniel Dewey, Stuart Armstrong, Luke Muehlhauser, Tom Dietterich, Michael Osborne, James Manyika, Ajay Agrawal, Richard Mallah, Nancy Chang, Matthew Putman
Other standing, left to right: Marilyn Thompson, Rich Sutton, Alex Wissner-Gross, Sam Teller, Toby Ord, Joscha Bach, Katja Grace, Adrian Weller, Heather Roff-Perkins, Dileep George, Shane Legg, Demis Hassabis, Wendell Wallach, Charina Choi, Ilya Sutskever, Kent Walker, Cecilia Tilli, Nick Bostrom, Erik Brynjolfsson, Steve Crossan, Mustafa Suleyman, Scott Phoenix, Neil Jacobstein, Murray Shanahan, Robin Hanson, Francesca Rossi, Nate Soares, Elon Musk, Andrew McAfee, Bart Selman, Michele Reilly, Aaron VanDevender, Max Tegmark, Margaret Boden, Joshua Greene, Paul Christiano, Eliezer Yudkowsky, David Parkes, Laurent Orseau, JB Straubel, James Moor, Sean Legassick, Mason Hartman, Howie Lempel, David Vladeck, Jacob Steinhardt, Michael Vassar, Ryan Calo, Susan Young, Owain Evans, Riva-Melissa Tez, János Kramár, Geoff Anders, Vernor Vinge, Anthony Aguirre
Seated: Sam Harris, Tomaso Poggio, Marin Soljačić, Viktoriya Krakovna, Meia Chita-Tegmark
Behind the camera: Anthony Aguirre (and also photoshopped in by the human-level intelligence on his left)
Thursday January 15, 2015
We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.
There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk’s donation aims to support precisely this type of research: “Here are all these leading AI researchers saying that AI safety is important”, says Elon Musk. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”
Musk’s announcement was welcomed by AI leaders in both academia and industry:
“It’s wonderful, because this will provide the impetus to jump-start research on AI safety”, said AAAI president Tom Dietterich. “This addresses several fundamental questions in AI research that deserve much more funding than even this donation will provide.”
“Dramatic advances in artificial intelligence are opening up a range of exciting new applications”, said Demis Hassabis, Shane Legg and Mustafa Suleyman, co-founders of DeepMind Technologies, which was recently acquired by Google. “With these newfound powers comes increased responsibility. Elon’s generous donation will support researchers as they investigate the safe and ethical use of artificial intelligence, laying foundations that will have far reaching societal impacts as these technologies continue to progress”.
Elon Musk and AAAI President Thomas Dietterich comment on the announcement
The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here). “Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere”, says FLI co-founder Viktoriya Krakovna.
“This donation will make a major impact”, said UCSC professor and FLI co-founder Anthony Aguirre: “While heavy industry and government investment has finally brought AI from niche academic research to early forms of a potentially world-transforming technology, to date relatively little funding has been available to help ensure that this change is actually a net positive one for humanity.”
“That AI systems should be beneficial in their effect on human society is a given”, said Stuart Russell, co-author of the standard AI textbook “Artificial Intelligence: a Modern Approach”. “The research that will be funded under this program will make sure that happens. It’s an intrinsic and essential part of doing AI research.”
Skype founder Jaan Tallinn, one of FLI’s founders, agrees: “Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”
Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories.
“Hopefully this grant program will help shift our focus from building things just because we can, toward building things because they are good for us in the long term”, says FLI co-founder Meia Chita-Tegmark.
Contacts at Future of Life Institute:
- Max Tegmark: email@example.com
- Meia Chita-Tegmark: firstname.lastname@example.org
- Jaan Tallinn: email@example.com
- Anthony Aguirre: firstname.lastname@example.org
- Viktoriya Krakovna: email@example.com
Contacts among AI researchers:
- Prof. Tom Dietterich, President of the Association for the Advancement of Artificial Intelligence (AAAI), Director of Intelligent Systems: firstname.lastname@example.org
- Prof. Stuart Russell, Berkeley, Director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach: email@example.com
- Prof. Bart Selman, co-chair of the AAAI presidential panel on long-term AI futures: firstname.lastname@example.org
- Prof. Francesca Rossi, Professor of Computer Science, University of Padova and Harvard University, president of the International Joint Conference on Artificial Intelligence (IJCAI): email@example.com
- Prof. Murray Shanahan, Imperial College: firstname.lastname@example.org
Max Tegmark interviews Elon Musk about his life, his interest in the future of humanity and the background to his donation
Here is a short summary of the Future of AI session organized at SciFoo by Nick Bostrom, Gary Marcus, Jaan Tallinn, Max Tegmark and Murray Shanahan.
Our Scientific Advisory Board member Stephen Hawking’s long-awaited Reddit AMA answers on Artificial Intelligence just came out, and was all over today’s world news, including MSNBC, Huffington Post, The Independent and Time.
Read the Q&A below and visit the official Reddit page for the full discussion:
Professor Hawking: Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and I end up having what I call “The Terminator Conversation.” My point in this conversation is that the dangers from AI are overblown by media and non-understanding news, and the real danger is the same danger in any complex, less-than-fully-understood code: edge case unpredictability. In my opinion, this is different from “dangerous AI” as most people perceive it, in that the software has no motives, no sentience, and no evil morality, and is merely (ruthlessly) trying to optimize a function that we ourselves wrote and designed. Your viewpoints (and Elon Musk’s) are often presented by the media as a belief in “evil AI,” though of course that’s not what your signed letter says. Students that are aware of these reports challenge my view, and we always end up having a pretty enjoyable conversation. How would you represent your own beliefs to my class? Are our viewpoints reconcilable? Do you think my habit of discounting the layperson Terminator-style “evil AI” is naive? And finally, what morals do you think I should be reinforcing to my students interested in AI?
You’re right: media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.
Hello Doctor Hawking, thank you for doing this AMA. I am a student who has recently graduated with a degree in Artificial Intelligence and Cognitive Science. Having studied A.I., I have seen first hand the ethical issues we are having to deal with today concerning how quickly machines can learn the personal features and behaviours of people, as well as being able to identify them at frightening speeds. However, the idea of a “conscious” or actual intelligent system which could pose an existential threat to humans still seems very foreign to me, and does not seem to be something we are even close to cracking from a neurological and computational standpoint. What I wanted to ask was, in your message aimed at warning us about the threat of intelligent machines, are you talking about current developments and breakthroughs (in areas such as machine learning), or are you trying to say we should be preparing early for what will inevitably come in the distant future?
The latter. There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.
Hello, Prof. Hawking. Thanks for doing this AMA! Earlier this year you, Elon Musk, and many other prominent science figures signed an open letter warning society about the potential pitfalls of Artificial Intelligence. The letter stated: “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.” While a seemingly reasonable expectation, this statement serves as a starting point for the debate around the possibility of Artificial Intelligence ever surpassing the human race in intelligence.
My questions: 1. One might think it impossible for a creature to ever acquire a higher intelligence than its creator. Do you agree? If yes, then how do you think artificial intelligence can ever pose a threat to the human race (their creators)? 2. If it was possible for artificial intelligence to surpass humans in intelligence, where would you define the line of “It’s enough”? In other words, how smart do you think the human race can make AI, while ensuring that it doesn’t surpass them in intelligence?
It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
I’m rather late to the question-asking party, but I’ll ask anyway and hope. Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? Thank you for your time and your contributions. I’ve found research to be a largely social endeavor, and you’ve been an inspiration to so many.
If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
Hello Professor Hawking, thank you for doing this AMA! I’ve thought lately about biological organisms’ will to survive and reproduce, and how that drive evolved over millions of generations. Would an AI have these basic drives, and if not, would it be a threat to humankind? Also, what are two books you think every person should read?
An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.
Thanks for doing this AMA. I am a biologist. Your fear of AI appears to stem from the assumption that AI will act like a new biological species competing for the same resources or otherwise transforming the planet in ways incompatible with human (or other) life. But the reason that biological species compete like this is because they have undergone billions of years of selection for high reproduction. Essentially, biological organisms are optimized to ‘take over’ as much as they can. It’s basically their ‘purpose’. But I don’t think this is necessarily true of an AI. There is no reason to surmise that AI creatures would be ‘interested’ in reproducing at all. I don’t know what they’d be ‘interested’ in doing. I am interested in what you think an AI would be ‘interested’ in doing, and why that is necessarily a threat to humankind that outweighs the benefits of creating a sort of benevolent God.
You’re right that we need to avoid the temptation to anthropomorphize and assume that AIs will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.
* $7M in AI research grants announced: We were delighted to announce the selection of 37 AI safety research teams which we plan to award a total of $7 million in funding. The grant program is funded by Elon Musk and the Open Philanthropy Project.
Max Tegmark, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, was interviewed on NPR’s On Point Radio for a lively discussion about our new AI safety research program.
* Open letter about autonomous weapons: FLI recently published an open letter advocating a global ban on offensive autonomous weapons development. Thousands of prominent scientists and concerned individuals are signatories, including Stephen Hawking, Elon Musk, the team at DeepMind, Yann LeCun (Director of AI Research, Facebook), Eric Horvitz (Managing Director, Microsoft Research), Noam Chomsky and Steve Wozniak.
* Open letter about economic impacts of AI: Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders have launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.
* ITIF AI policy panel: Stuart Russell and MIRI Executive Director Nate Soares participated in a panel discussion about the risks and policy implications of AI (video here). The panel was hosted by the Information Technology & Innovation Foundation (ITIF), a Washington-based think tank focusing on the intersection of public policy & emerging technology.
* IJCAI 15: Stuart Russell presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina.
* EA Global conferences: FLI co-founders Viktoriya Krakovna and Anthony Aguirre spoke at the Effective Altruism Global (EA Global) conference at Google headquarters in Mountain View, California. FLI co-founder Jaan Tallinn spoke at the EA Global Oxford conference on August 28-30.
* Stephen Hawking AMA: Professor Hawking is hosting an “Ask Me Anything” (AMA) conversation on Reddit. Users recently submitted questions here; his answers will follow in the near future.
* FLI anniversary video: FLI co-founder Meia Chita-Tegmark created an anniversary video highlighting our accomplishments from our first year.
I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela Veloso and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.
After a grueling expert review of almost 300 grant proposals from around the world, we are delighted to announce the 37 research teams that have been recommended for funding to help keep AI beneficial. We plan to award these teams a total of about $7M from Elon Musk and the Open Philanthropy Project over the next three years, with most of the research projects starting by September 2015. The winning teams will research a host of questions in computer science, law, policy, economics, and other fields relevant to coming advances in AI.
Today we are celebrating one year since our launch event. It’s been an amazing year, full of wonderful accomplishments, and we would like to express our gratitude to all those who supported us with their advice, hard work and resources. Thank you – and let’s make this year even better!
Here’s a video with some of the highlights of our first year. You’ll find many familiar faces here, perhaps including your own!
An excellent piece about existential risks by FLI co-founder Jaan Tallinn on Edge.org:
“The reasons why I’m engaged in trying to lower the existential risks have to do with the fact that I’m a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about — in the palette of actions that you have — what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn’t make a significant difference in these areas.”
From the introduction by Max Tegmark:
“Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning.”
We were quite curious to see how many applications we’d get for our Elon-funded grants program on keeping AI beneficial, given the short notice and unusual topic. I’m delighted to report that the response was overwhelming: about 300 applications for a total of about $100M, including a great diversity of awesome teams and projects from around the world. Thanks to hard work by a team of expert reviewers, we’ve now invited roughly the strongest quarter of the applicants to submit full proposals due May 17. There are so many exciting project proposals that the review panel will face some very difficult choices! Our goal is to announce the winners by July 1.
In the News
* Top AI researchers from industry and academia have signed an FLI-organized open letter arguing for timely research to make AI more robust and beneficial. Check out our research priorities and supporters on our website.
+ The open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.
* We are delighted to report that Elon Musk has donated $10 million to FLI to create a global research program aimed at keeping AI beneficial to humanity. Read more about the program on our website.
Projects and Events
* FLI recently organized its first-ever conference, entitled “The Future of AI: Opportunities and Challenges.” The conference took place on January 2-5 in Puerto Rico, and brought together top AI researchers, industry leaders, and experts in economics, law, and ethics to discuss the future of AI. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Many of the speakers have posted their talks, which can be found on our website.
* The application for research funds opens Thursday, January 22. Grants are available to AI researchers and to AI-related research involving other fields such as economics, law, ethics and policy. You can find the application on our website.
* We are happy to announce that Francesca Rossi has joined our scientific advisory board! Francesca Rossi is a professor of computer science with research interests in artificial intelligence. She is the president of the International Joint Conference on Artificial Intelligence (IJCAI), as well as the associate editor in chief of the Journal of AI Research (JAIR). You can find our entire advisory board on our website.
We are delighted to report that Elon Musk has decided to donate $10M to FLI to run a global research program aimed at keeping AI beneficial to humanity.
You can read more about the pledge here.
We organized our first conference, The Future of AI: Opportunities and Challenges, Jan 2-5 in Puerto Rico. This conference brought together many of the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Most of the speakers have posted their talks.