
$50,000 Award to Stanislav Petrov for helping avert WWIII – but US denies visa


To celebrate that today is not the 35th anniversary of World War III, Stanislav Petrov, the man who helped avert an all-out nuclear exchange between Russia and the U.S. on September 26, 1983, was honored with the $50,000 Future of Life Award at a ceremony at the Museum of Mathematics in New York.

Former United Nations Secretary General Ban Ki-Moon said: “It is hard to imagine anything more devastating for humanity than all-out nuclear war between Russia and the United States. Yet this might have occurred by accident on September 26, 1983, were it not for the wise decisions of Stanislav Yevgrafovich Petrov. For this, he deserves humanity’s profound gratitude. Let us resolve to work together to realize a world free from fear of nuclear weapons, remembering the courageous judgement of Stanislav Petrov.”

Stanislav Petrov’s daughter Elena holds the 2018 Future of Life Award flanked by her husband Victor. From left: Ariel Conn (FLI), Lucas Perry (FLI), Hannah Fry, Victor, Elena, Steven Mao (exec. producer of the Petrov film “The Man Who Saved the World”), Max Tegmark (FLI)

At the U.N. General Assembly, just blocks away, politicians highlighted the nuclear threat from North Korea’s small nuclear arsenal, yet none mentioned the greater threat from the many thousands of nuclear weapons in the United States and Russian arsenals, which have nearly been unleashed by mistake dozens of times in a seemingly never-ending series of mishaps and misunderstandings.

One of the closest calls occurred thirty-five years ago, on September 26, 1983, when Stanislav Petrov chose to ignore the Soviet early-warning detection system that had erroneously indicated five incoming American nuclear missiles. With his decision to ignore algorithms and instead follow his gut instinct, Petrov helped prevent an all-out US-Russian nuclear war, as detailed in the documentary film “The Man Who Saved the World”, which will be released digitally next week. Since Petrov passed away last year, the award was collected by his daughter Elena. Meanwhile, Petrov’s son Dmitry missed his flight to New York because the U.S. embassy delayed his visa. “That a guy can’t get a visa to visit the city his dad saved from nuclear annihilation is emblematic of how frosty US-Russian relations have gotten, which increases the risk of accidental nuclear war”, said MIT Professor Max Tegmark when presenting the award. Arguably the only recent reduction in the risk of accidental nuclear war came when Donald Trump held a summit with Vladimir Putin in Helsinki earlier this year, which was, ironically, met with widespread criticism.

In Russia, soldiers often didn’t discuss their wartime actions for fear of displeasing their government, so Elena first heard about her father’s heroic actions only in 1998 – 15 years after the event. Even then, she and her brother learned what their father had done only because a German journalist had reached out to the family for an article he was working on. It’s unclear if Petrov’s wife, who died in 1997, ever knew of her husband’s heroism. Until his death, Petrov maintained a humble outlook on the event that made him famous. “I was just doing my job,” he’d say.

But most would agree that he went above and beyond his job duties that September day in 1983. The alert of five incoming nuclear missiles came at a time of high tension between the superpowers, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. Earlier in the month the Soviet Union shot down a Korean Airlines passenger plane that strayed into its airspace, killing almost 300 people, and Petrov had to consider this context when he received the missile notifications. He had only minutes to decide whether or not the satellite data were a false alarm. Since the satellite was found to be operating properly, following procedures would have led him to report an incoming attack. Going partly on gut instinct and believing the United States was unlikely to fire only five missiles, he told his commanders that it was a false alarm before he knew that to be true. Later investigations revealed that reflections of the Sun off of cloud tops had fooled the satellite into thinking it was detecting missile launches.
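Petrov’s instinct that five missiles were implausible is, in effect, a base-rate argument: false alarms are far more common than real first strikes, so a single alarm is weak evidence of attack. The sketch below works through that reasoning with Bayes’ rule; every probability is a made-up illustration, not a historical estimate.

```python
# Base-rate reasoning behind a "false alarm" call, via Bayes' rule.
# All numbers below are hypothetical illustrations, not historical estimates.

prior_attack = 1e-5             # assumed prior that a real attack begins during a shift
p_alarm_given_attack = 0.9      # assumed probability the system alarms on a real launch
p_alarm_given_no_attack = 1e-3  # assumed false-alarm rate (e.g. sunlight on cloud tops)

# P(attack | alarm) = P(alarm | attack) P(attack) / P(alarm)
posterior = (p_alarm_given_attack * prior_attack) / (
    p_alarm_given_attack * prior_attack
    + p_alarm_given_no_attack * (1 - prior_attack)
)

print(f"P(attack | alarm) = {posterior:.3f}")  # small: one alarm is weak evidence
```

Under these invented numbers the posterior is under one percent: even a properly functioning detector that alarms is far more likely to be reporting a glitch than a launch, which is the shape of the judgment Petrov made.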

Last year’s Nobel Peace Prize laureate Beatrice Fihn, who helped establish the recent United Nations treaty banning nuclear weapons, said, “Stanislav Petrov was faced with a choice that no person should have to make, and at that moment he chose the human race — to save all of us. No one person and no one country should have that type of control over all our lives, and all future lives to come. 35 years from that day when Stanislav Petrov chose us over nuclear weapons, nine states still hold the world hostage with 15,000 nuclear weapons. We cannot continue relying on luck and heroes to safeguard humanity. The Treaty on the Prohibition of Nuclear Weapons provides an opportunity for all of us and our leaders to choose the human race over nuclear weapons by banning them and eliminating them once and for all. The choice is the end of us or the end of nuclear weapons. We honor Stanislav Petrov by choosing the latter.”

University College London Mathematics Professor Hannah Fry, author of the new book “Hello World: Being Human in the Age of Algorithms”, participated in the ceremony and pointed out that as ever more human decisions are replaced by automated algorithms, it is sometimes crucial to keep a human in the loop – as in Petrov’s case.
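Fry’s point can be sketched as a simple design pattern: the algorithm may recommend an action, but a human holds the final authorization for anything irreversible. The function names and threshold below are hypothetical illustrations, not any real system’s API.

```python
# Minimal human-in-the-loop gate: an automated assessment may recommend action,
# but a human must explicitly confirm before anything irreversible happens.
# Names and the 0.8 threshold are illustrative assumptions.

def automated_assessment(sensor_score: float, threshold: float = 0.8) -> bool:
    """Algorithmic recommendation: True means the system recommends acting."""
    return sensor_score >= threshold

def decide(sensor_score: float, human_confirms) -> str:
    """Final decision requires both the algorithm's recommendation and a human's sign-off."""
    if not automated_assessment(sensor_score):
        return "no action"
    # Critical step: the algorithm alone never authorizes the action.
    return "act" if human_confirms(sensor_score) else "overridden by human"

# A Petrov-style override: the human distrusts an implausible high-confidence alert.
print(decide(0.95, human_confirms=lambda score: False))  # -> overridden by human
```

The design choice is that the automated path can only narrow options, never execute them; removing the `human_confirms` check is exactly what turns such a system into the fully autonomous case discussed later in the lethal autonomous weapons pledge.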

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. It is given by the Future of Life Institute (FLI), a non-profit also known for supporting AI safety research with Elon Musk and others. “Although most people never learn about Petrov in school, they might not have been alive were it not for him”, said FLI co-founder Anthony Aguirre. Last year’s award was given to Vasili Arkhipov, who single-handedly prevented a nuclear attack on the US during the Cuban Missile Crisis. FLI is currently accepting nominations for next year’s award.

Stanislav Petrov around the time he helped avert WWIII

$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust

$2 million has been allocated to fund research that anticipates artificial general intelligence (AGI) and how it can be designed beneficially. The money was donated by Elon Musk to cover grants through the Future of Life Institute (FLI). Ten grants have been selected for funding.

Said Tegmark, “I’m optimistic that we can create an inspiring high-tech future with AI as long as we win the race between the growing power of AI and the wisdom with which we manage it. This research is to help develop that wisdom and increase the likelihood that AGI will be the best rather than the worst thing to happen to humanity.”

Today’s artificial intelligence (AI) is still quite narrow. That is, it can only accomplish narrow sets of tasks, such as playing chess or Go, driving a car, performing an Internet search, or translating languages. While the AI systems that master each of these tasks can perform them at superhuman levels, they can’t learn a new, unrelated skill set (e.g. an AI system that can search the Internet can’t learn to play Go with only its search algorithms).

These AI systems lack that “general” ability that humans have to make connections between disparate activities and experiences and to apply knowledge to a variety of fields. However, a significant number of AI researchers agree that AI could achieve a more “general” intelligence in the coming decades. No one knows how AI that’s as smart or smarter than humans might impact our lives, whether it will prove to be beneficial or harmful, how we can design it safely, or even how to prepare society for advanced AI. And many researchers worry that the transition could occur quickly.

Anthony Aguirre, co-founder of FLI and physics professor at UC Santa Cruz, explains, “The breakthroughs necessary to have machine intelligences as flexible and powerful as our own may take 50 years. But with the major intellectual and financial resources now being directed at the problem it may take much less. If or when there is a breakthrough, what will that look like? Can we prepare? Can we design safety features now, and incorporate them into AI development, to ensure that powerful AI will continue to benefit society? Things may move very quickly and we need research in place to make sure they go well.”

Grant topics include: training multiple AIs to work together and learn from humans about how to coexist, training AI to understand individual human preferences, understanding what “general” actually means, incentivizing research groups to avoid a potentially dangerous AI race, and many more. As the request for proposals stated, “The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.”

FLI hopes that this round of grants will help ensure that AI remains beneficial as it becomes increasingly intelligent. The full list of FLI recipients and project titles includes:

Primary Investigator | Project Title | Amount Recommended | Email
Allan Dafoe, Yale University | Governance of AI Programme | $276,000 | allan.dafoe@yale.edu
Stefano Ermon, Stanford University | Value Alignment and Multi-agent Inverse Reinforcement Learning | $100,000 | ermon@cs.stanford.edu
Owain Evans, Oxford University | Factored Cognition: Amplifying Human Cognition for Safely Scalable AGI | $225,000 | owain.evans@philosophy.ox.ac.uk
The Anh Han, Teesside University | Incentives for Safety Agreement Compliance in AI Race | $224,747 | t.han@tees.ac.uk
Jose Hernandez-Orallo, University of Cambridge | Paradigms of Artificial General Intelligence and Their Associated Risks | $220,000 | jorallo@dsic.upv.es
Marcus Hutter, Australian National University | The Control Problem for Universal AI: A Formal Investigation | $276,000 | marcus.hutter@anu.edu.au
James Miller, Smith College | Utility Functions: A Guide for Artificial General Intelligence Theorists | $78,289 | jdmiller@smith.edu
Dorsa Sadigh, Stanford University | Safe Learning and Verification of Human-AI Systems | $250,000 | dorsa@cs.stanford.edu
Peter Stone, University of Texas | Ad hoc Teamwork and Moral Feedback as a Framework for Safe Robot Behavior | $200,000 | pstone@cs.utexas.edu
Josh Tenenbaum, MIT | Reverse Engineering Fair Cooperation | $150,000 | jbt@mit.edu
Some of the grant recipients offered statements about why they’re excited about their new projects:

“The team here at the Governance of AI Program are excited to pursue this research with the support of FLI. We’ve identified a set of questions that we think are among the most important to tackle for securing robust governance of advanced AI, and strongly believe that with focused research and collaboration with others in this space, we can make productive headway on them.” -Allan Dafoe

“We are excited about this project because it provides a first unique and original opportunity to explicitly study the dynamics of safety-compliant behaviours within the ongoing AI research and development race, and hence potentially leading to model-based advice on how to timely regulate the present wave of developments and provide recommendations to policy makers and involved participants. It also provides an important opportunity to validate our prior results on the importance of commitments and other mechanisms of trust in inducing global pro-social behavior, thereby further promoting AI for the common good.” -The Anh Han

“We are excited about the potentials of this project. Our goal is to learn models of humans’ preferences, which can help us build algorithms for AGIs that can safely and reliably interact and collaborate with people.” -Dorsa Sadigh

This is FLI’s second grant round. The first launched in 2015, and a comprehensive list of papers, articles and information from that grant round can be found here. Both grant rounds are part of the original $10 million that Elon Musk pledged to AI safety research.

FLI co-founder Viktoriya Krakovna added: “Our previous grant round promoted research on a diverse set of topics in AI safety and supported over 40 papers. The next grant round is more narrowly focused on research in AGI safety and strategy, and I am looking forward to great work in this area from our new grantees.”

Learn more about these projects here.

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons

Leading AI companies and researchers take concrete action against killer robots, vowing never to develop them.

Stockholm, Sweden (July 18, 2018) After years of voicing concerns, AI leaders have, for the first time, taken concrete action against lethal autonomous weapons, signing a pledge to neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.

The pledge has been signed to date by over 160 AI-related companies and organizations from 36 countries, and 2,400 individuals from 90 countries. Signatories of the pledge include Google DeepMind, University College London, the XPRIZE Foundation, ClearPath Robotics/OTTO Motors, the European Association for AI (EurAI), the Swedish AI Society (SAIS), Demis Hassabis, British MP Alex Sobel, Elon Musk, Stuart Russell, Yoshua Bengio, Anca Dragan, and Toby Walsh.

Max Tegmark, president of the Future of Life Institute (FLI) which organized the effort, announced the pledge on July 18 in Stockholm, Sweden during the annual International Joint Conference on Artificial Intelligence (IJCAI), which draws over 5,000 of the world’s leading AI researchers. SAIS and EurAI were also organizers of this year’s IJCAI.

Said Tegmark, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect. AI has huge potential to help the world – if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target, and kill a person, without a human “in-the-loop.” That is, no person makes the final decision to authorize lethal force: the decision and authorization about whether or not someone will die is left to the autonomous weapons system. (This does not include today’s drones, which are under human control. It also does not include autonomous systems that merely defend against other weapons, since “lethal” implies killing a human.)

The pledge begins with the statement:

“Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”

Another key organizer of the pledge, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, points out the thorny ethical issues surrounding LAWS. He states:

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way.”

Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, has long been a strong opponent of lethal autonomous weapons. He says:

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful. Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”

In addition to the ethical questions associated with LAWS, many advocates of an international ban on LAWS are concerned that these weapons will be difficult to control – easier to hack, more likely to end up on the black market, and easier for bad actors to obtain –  which could become destabilizing for all countries, as illustrated in the FLI-released video “Slaughterbots”.

In December 2016, the Review Conference of the Convention on Conventional Weapons (CCW) began formal discussion regarding LAWS at the UN. By the most recent meeting in April, twenty-six countries had announced support for some type of ban, including China. And such a ban is not without precedent. Biological weapons, chemical weapons, and space weapons were also banned not only for ethical and humanitarian reasons, but also for the destabilizing threat they posed.

The next UN meeting on LAWS will be held in August, and signatories of the pledge hope this commitment will encourage lawmakers to develop a commitment at the level of an international agreement between countries. As the pledge states:

“We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. … We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.”
As seen in the press

Stephen Hawking in Memoriam

As we mourn the loss of Stephen Hawking, we should remember that his legacy goes far beyond science. Yes, of course he was one of the greatest scientists of the past century, discovering that black holes evaporate and helping found the modern quest for quantum gravity. But he also had a remarkable legacy as a social activist, who looked far beyond the next election cycle and used his powerful voice to bring out the best in us all. As a founding member of FLI’s Scientific Advisory board, he tirelessly helped us highlight the importance of long-term thinking and ensuring that we use technology to help humanity flourish rather than flounder. I marveled at how he could sometimes answer my emails faster than my grad students. His activism revealed the same visionary fearlessness as his scientific and personal life: he saw further ahead than most of those around him and wasn’t afraid of controversially sounding the alarm about humanity’s sloppy handling of powerful technology, from nuclear weapons to AI.

On a personal note, I’m saddened to have lost not only a long-time collaborator but, above all, a great inspiration, always reminding me of how seemingly insurmountable challenges can be overcome with creativity, willpower and positive attitude. Thanks Stephen for inspiring us all!

2018 International AI Safety Grants Competition

I. THE FUTURE OF AI: REAPING THE BENEFITS WHILE AVOIDING PITFALLS

For many years, artificial intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant recent success, and great future promise. This recent success has raised an important question: how can we ensure that the growing power of AI is matched by the growing wisdom with which we manage it? In an open letter in 2015, a large international group of leading AI researchers from academia and industry argued that this success makes it important and timely to research also how to make AI systems robust and beneficial, and that this includes concrete research directions that can be pursued today. In early 2017, a broad coalition of AI leaders went further and signed the Asilomar AI Principles, which articulate beneficial AI requirements in greater detail.

The first Asilomar Principle is that “The goal of AI research should be to create not undirected intelligence, but beneficial intelligence,” and the second states that “Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies…” The aim of this request for proposals is to support research that serves these and other goals indicated by the Principles.

The focus of this RFP is on technical research or other projects enabling development of AI that is beneficial to society and robust in the sense that the benefits have some guarantees: our AI systems must do what we want them to do.

II. EVALUATION CRITERIA & PROJECT ELIGIBILITY

This 2018 grants competition is the second round of the multi-million dollar grants program announced in January 2015, and will give grants totaling millions more to researchers in academic and other nonprofit institutions for projects up to three years in duration, beginning September 1, 2018. Results-in-progress from the first round are here. Following the launch of the first round, the field of AI safety has expanded considerably in terms of institutions, research groups, and potential funding sources entering the field.  Many of these, however, focus on immediate or relatively short-term issues relevant to extrapolations of present machine learning and AI systems as they are applied more widely.  There are still relatively few resources devoted to issues that will become crucial if/when AI research attains its original goal: building artificial general intelligence (AGI) that can (or can learn to) outperform humans on all cognitive tasks (see Asilomar Principles 19-23).

For maximal positive impact, this new grants competition thus focuses on Artificial General Intelligence, specifically research for safe and beneficial AGI. Successful grant proposals will either relate directly to AGI issues, or clearly explain how the proposed work is a necessary stepping stone toward safe and beneficial AGI.

As with the previous round, grant applications will be subject to a competitive process of confidential expert peer review similar to that employed by all major U.S. scientific funding agencies, with reviewers being recognized experts in the relevant fields.

Project Grants (approx. $50K-$400K per project) will each fund a small group of collaborators at one or more research institutions for a focused research project of up to three years duration. Proposals will be evaluated according to how topical and impactful they are:

TOPICAL: This RFP is limited to research that aims to help maximize the societal benefits of AGI, explicitly focusing not on the standard goal of making AI more capable, but on making it more robust and/or beneficial. In consultation with other organizations, FLI has identified a list of relatively specific problems and projects of particular interest to the AGI safety field. These will serve both as examples and as topics for special consideration.

In our RFP examples, we give a list of research topics and questions that are germane to this RFP. We also refer proposers to FLI’s landscape of AI safety research and its accompanying literature survey, as well as the 2015 research priorities and the associated survey.

The relative amount of funding for different areas is not predetermined, but will be optimized to reflect the number and quality of applications received. Very roughly, the expectation is ~70% computer science and closely related technical fields, ~30% economics, law, ethics, sociology, policy, education, and outreach.

IMPACTFUL: Proposals will be rated according to their expected positive impact per dollar, taking all relevant factors into account, such as:

  1. Intrinsic intellectual merit, scientific rigor and originality
  2. A high product of likelihood for success and importance if successful (i.e., high-risk research can be supported as long as the potential payoff is also very high.)
  3. The likelihood of the research opening fruitful new lines of scientific inquiry
  4. The feasibility of the research in the given time frame
  5. The qualifications of the Principal Investigator and team with respect to the proposed topic
  6. The part a grant may play in career development
  7. Cost effectiveness: Tight budgeting is encouraged in order to maximize the research impact of the project as a whole, with emphasis on scientific return per dollar rather than per proposal.
  8. Potential to impact the greater community as well as the general public via effective outreach and dissemination of the research results
  9. Engagement of appropriate communities (e.g. engaging research collaborators [or policymakers] in AI safety outside of North America and Europe)

Strong proposals will make it easy for FLI to evaluate their impact by explicitly stating what they aim to produce (publications, algorithms, software, events, etc.) and when (after 1st, 2nd and 3rd year, say). Preference will be given to proposals whose deliverables are made freely available (open access publications, open source software, etc.) where appropriate.

To maximize its impact per dollar, this RFP is intended to complement, not supplement, conventional funding. We wish to enable research that, because of its long-term focus or its non-commercial, speculative, or non-mainstream nature, would otherwise go unperformed due to lack of available resources. Thus, although there will be inevitable overlaps, an otherwise scientifically rigorous proposal that is a good candidate for an FLI grant will generally not be a good candidate for funding by the NSF, DARPA, corporate R&D, etc. – and vice versa. To be eligible, research must focus on making AI more robust/beneficial as opposed to the standard goal of making AI more capable, and it must be AGI-relevant.

Acceptable uses of grant funds for Project Grants include:

  • Student/postdoc/researcher salary and benefits
  • Summer salary and teaching buyout for academics
  • Support for specific projects during sabbaticals
  • Assistance in writing or publishing books or journal articles, including page charges
  • Modest allowance for justifiable lab equipment, computers, and other research supplies
  • Modest travel allowance
  • Development of workshops, conferences, or lecture series for professionals in the relevant fields
  • Overhead of at most 15% (Please note that if this is an issue with your institution, or if your organization is not nonprofit, you can contact FLI to learn about other organizations that can help administer an FLI grant for you.)

Subawards are discouraged but possible in special circumstances.

III. APPLICATION PROCESS

To save time for both you and the reviewers, applications will be accepted electronically through a standard form on our website (click here for the application) and evaluated in a two-part process, as follows:

INITIAL PROPOSAL — DUE FEBRUARY 25, 2018, 11:59 PM Eastern Time — must include:

  • A 200-500 word summary of the project, explicitly addressing why it is topical and impactful.
  • A draft budget description not exceeding 200 words, including an approximate total cost over the life of the award and explanation of how funds would be spent.
  • A PDF Curriculum Vitae for the Principal Investigator, including
    • Education and employment history
    • Full publication list
  • Optional: if the PI has any previous publications relevant to the proposed research, they may list up to five of these as well, for a total of up to 10 representative and relevant publications. We do wish to encourage PIs to enter relevant research areas where they may not have had opportunities before, so prior relevant publications are not required.

A review panel assembled by FLI will screen each initial proposal according to the criteria in Section II. Based on their assessment, the principal investigator (PI) may be invited to submit a full proposal, on or about MARCH 23, 2018, perhaps with feedback from reviewers for improving the proposal. Please keep in mind that however positive reviewers may be about a proposal at any stage, it may still be turned down for funding after full peer review.

FULL PROPOSAL — DUE MAY 20, 2018 — Must Include:

  • Cover sheet
  • A 200-word project abstract, suitable for publication in an academic journal
  • A project summary not exceeding 200 words, explaining the work and its significance to laypeople
  • A detailed description of the proposed research, of between 5 and 15 single-spaced 11-point pages, including a short statement of how the application fits into the applicant’s present research program, and a description of how the results might be communicated to the wider scientific community and general public
  • A detailed budget over the life of the award, with justification and utilization distribution (preferably drafted by your institution’s grant officer or equivalent)
  • A list, for all project senior personnel, of all present and pending financial support, including project name, funding source, dates, amount, and status (current or pending)
  • Evidence of tax-exempt status of grantee institution, if other than a US university. For information on determining tax-exempt status of international organizations and institutes, please review the information here.
  • Optional: names of three recommended referees
  • Curricula Vitae for all project senior personnel, including:
    • Education and employment history
    • A list of references of up to five previous publications relevant to the proposed research, and up to five additional representative publications
    • Full publication list

Completed full proposals will undergo a competitive process of external and confidential expert peer review, evaluated according to the criteria described in Section II. A review panel of scientists in the relevant fields will be convened to produce a final rank ordering of the proposals, which will determine the grant winners, and to make budgetary adjustments if necessary. Public award recommendations will be made on or about JULY 31, 2018.

FUNDING PROCESS

The peer review and administration of this grants program will be managed by the Future of Life Institute. FLI is an independent, philanthropically funded nonprofit organization whose mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

FLI will direct these grants through a Donor Advised Fund (DAF) at the Silicon Valley Community Foundation. FLI will solicit grant applications and have them peer reviewed, and on the basis of these reviews, FLI will advise the DAF on what grants to make. After grants have been made by the DAF, FLI will work with the DAF to monitor the grantee’s performance via grant reports. In this way, researchers will continue to interact with FLI, while the DAF interacts mostly with their institutions’ administrative or grants management offices.

RESEARCH TOPIC LIST

We have solicited and synthesized suggestions from a number of technical AI safety researchers to provide a list of project requests.  Proposals on the requested topics are all germane to the RFP, but the list is not meant to be either comprehensive or exclusive: proposals on other topics that similarly address long-term safety and benefits of AI are also welcomed. We also refer the reader to FLI’s AI safety landscape and its accompanying paper as a more general summary of relevant issues as well as definitions of many key terms.

TO SUBMIT AN INITIAL PROPOSAL, CLICK HERE.

IV. An International Request for Proposals – Timeline

December 20, 2017: RFP is released

February 25, 2018 (by 11:59 PM EST): Initial Proposals due

March 23, 2018: Full Proposals invited

May 20, 2018 (by 11:59 PM EST): Full Proposals (invite only) due

July 31, 2018: Grant Recommendations are publicly announced; FLI Fund conducts due diligence on grants

September 1, 2018: Grants disbursed; Earliest date for grants to start

August 31, 2021: Latest end date for multi-year Grants

TO SUBMIT AN INITIAL PROPOSAL, CLICK HERE.

An International Request for Proposals – Frequently Asked Questions

Does FLI have particular agenda or position on AI and AI safety?

FLI’s position is well summarized by the open letter that FLI’s founders and many of its advisory board members have signed, and by the Asilomar Principles.

Who is eligible for grants?

Researchers and outreach specialists working in academic and other nonprofit institutions are eligible, as are independent researchers. Grant awards are sent to the PI’s institution, and the institution’s administration is responsible for disbursing them to the PI. When submitting your application, please make sure to list the appropriate grant administrator that we should contact at your institution.

If you are not affiliated with a research institution, there are many organizations that will help administer your grant. If you need suggestions, please contact FLI. Applicants are not required to be affiliated with an institution for the Initial Proposal, only for the Full Proposal.

Can researchers from outside the U.S. apply?

Yes, applications will be welcomed from any country. Please note that the US Government imposes restrictions on the types of organizations to which US nonprofits (such as FLI) can give grants. Given this, if you are awarded a grant, your institution must (a) prove its equivalency to a nonprofit institution by providing the institution’s establishing law or charter, a list of key staff and board members, and (for public universities) a signed affidavit, and (b) comply with the U.S. Patriot Act. Please note that this is included to provide information about the equivalency determination process that will take place if you are awarded a grant. If there are any issues with your granting institution proving its equivalency, FLI can help provide a list of organizations that can act as a go-between to administer the grant. More detail about international grant compliance is available on our website here. Please contact FLI if you have any questions about whether your institution is eligible, to get a list of organizations that can help administer your grant, or if you want to review the affidavit that public universities must fill out.

Can I submit an application in a language other than English?

All proposals must be in English. Since our grant program has an international focus, we will not penalize applications from people who do not speak English as their first language, and we will encourage the review panel to be accommodating of language differences when reviewing applications. All applications must, however, be coherent.

How and when do we apply?

Apply online here. Please submit an Initial Proposal by February 25, 2018. After screening, you may then be invited to submit a Full Proposal, due May 20, 2018. Please see Section IV for more information.

What kinds of programs and requests are eligible for funding?

Acceptable uses of grant funds for Project Grants include:

  • Student/postdoc/researcher salary and benefits
  • Summer salary and teaching buyout for academics
  • Support for specific projects during sabbaticals
  • Assistance in writing or publishing books or journal articles, including page charges
  • Modest allowance for justifiable lab equipment, computers, cloud computing services, and other research supplies
  • Modest travel allowance
  • Development of workshops, conferences, or lecture series for professionals in the relevant fields
  • Overhead of at most 15% (If this is an issue with your institution, or if your organization is not a nonprofit, you can contact FLI to learn about other organizations that can help administer an FLI grant for you.)
  • Subawards are discouraged but possible in special circumstances.

What is your policy on overhead?

The highest allowed overhead rate is 15%. (As mentioned before, if this is an issue with your institution, you can contact FLI to learn about other organizations that can help administer FLI grants.)
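As a rough illustration of how the cap works (a minimal sketch: the 15% figure comes from the FAQ, but the dollar amounts are hypothetical, and the assumption that the rate is applied to direct costs is a common budgeting convention, not something the FAQ specifies):

```python
def max_overhead(direct_costs: float, rate: float = 0.15) -> float:
    """Maximum overhead allowance under a capped rate.

    Assumes the rate is applied to direct costs (a common
    convention; the FAQ does not specify the base).
    """
    if not 0 <= rate <= 0.15:
        raise ValueError("overhead rate exceeds the 15% cap")
    return direct_costs * rate

# Hypothetical example: a $200,000 direct-cost budget could
# include at most $30,000 in overhead, for $230,000 total.
print(max_overhead(200_000))  # 30000.0
```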

How will proposals be judged?

After screening of the Initial Proposal, applicants may be asked to submit a Full Proposal. All Full Proposals will undergo a competitive process of external and confidential expert peer review. An expert panel will evaluate and rank the proposals, informed by these reviews, according to the criteria described in Section III of the RFP overview (see above).

Will FLI provide feedback on initial proposals?

FLI will generally not provide significant feedback on initial Project Proposals, but may in some cases. Please keep in mind that however positive FLI may be about a proposal at any stage, it may still be turned down for funding after peer review.

Can I submit multiple proposals?

We will consider multiple Initial Proposals from the same PI; however, we will invite at most one Full Proposal from each PI or closely associated group of applicants.

What if I am unable to submit my application electronically?

Only applications submitted through the form on our website are accepted. If you encounter problems, please contact FLI.

Is there a maximum amount of money for which we can apply?

No. You may apply for as much money as you think is necessary to achieve your goals. However, you should carefully justify your proposed expenditure. Keep in mind that projects will be assessed on potential impact per dollar requested; an inappropriately high budget may harm the proposal’s prospects, effectively pricing it out of the market. Referees are authorized to suggest budget adjustments. As mentioned in the RFP overview above, there may be an opportunity to apply for greater follow-up funding.

What will an average award be?

We expect that Project awards will typically be in the range of $50,000-$400,000 total over the life of the award (usually two to three years).

What are the reporting requirements?

Grantees will be asked to submit annual reports consisting of narrative and financial components; multi-year grantees will also submit progress reports. Renewal of multi-year grants will be contingent on these reports satisfactorily demonstrating that the supported research is progressing appropriately and continues to be consistent with the spirit of the original proposal. (See the question below regarding renewal.)

How are multi-year grants renewed?

This program has been formulated to maximize impact by re-allocating (and potentially adding) resources during each year of the grant program. Decisions regarding the renewal of multi-year grants will be made by a review committee on the basis of the annual progress report. This report is not pro forma. The committee is likely to recommend that some grants not be renewed, some be renewed at a reduced level, some be renewed at the same level, and some be offered the opportunity for increased funding in later years.

What are the qualifications for a Principal Investigator?

A Principal Investigator can be anyone; there are no formal qualification requirements (though qualifications will be taken into account during the review process). Lacking conventional academic credentials or publications does not disqualify a PI. We encourage applications from industry and independent researchers. Please list any relevant experience or achievements in the attached resume/CV.

As noted above, Principal Investigators need not even be affiliated with a university or nonprofit. If a PI is affiliated with an academic institution, their Principal Investigator status must be allowed by that institution. Should they be invited to submit a Full Proposal, they must obtain co-signatures on the proposal from the department head and from a department host whose post exceeds the duration of the grant.

My colleague(s) and I would like to apply as co-PIs. Can we do this?

Yes. For administrative purposes, however, please select a primary contact for the life of the award. The primary contact, who must be a Principal Investigator, will be the point of reference for your application(s) and all future correspondence, documents, etc.

Will the grants pay for laboratory or computational expenses?

Yes; however, due to budgetary limitations, FLI cannot fund capital-intensive equipment or computing facilities. Such expenses must also be clearly required by the proposed research.

I have a proposal for my usual, relatively mainstream AI research program that I may be able to repackage as an appropriate proposal for this FLI program. Sound OK?

FLI is very sensitive to the problem of “fishing for money”—that is, the re-casting of an existing research program to make it appear to fit the overall thematic nature of this Request For Proposals. Such proposals will not be funded, nor renewed if erroneously funded initially.

Do proposals have to be as long as possible?

Please note that the 15-page limit is an upper limit, not a lower limit. You should simply write as much as you feel that you need in order to explain your proposal in sufficient detail for the review panel to understand it properly.

What are the “referees” in the instructions?

If there are specific reviewers whom you feel are particularly qualified to evaluate your proposal, please feel free to list them. (This is completely optional.)

Who are FLI’s reviewers?

FLI follows the standard practice of protecting the identities of our external reviewers, who are selected based on expertise in the relevant research areas. For example, the external reviewers in the first round of this RFP were highly qualified experts in AI, law, and economics: mostly professors, along with some industry experts.

TO SUBMIT AN INITIAL PROPOSAL, CLICK HERE.

If you have additional questions that were not answered above, please email us.

AI Researchers Create Video to Call for Autonomous Weapons Ban at UN

In response to growing concerns about autonomous weapons, a coalition of AI researchers and advocacy organizations released a video on Monday that depicts a disturbing fictional future in which lethal autonomous weapons have become cheap and ubiquitous.

The video was launched in Geneva, where AI researcher Stuart Russell presented it at an event at the United Nations Convention on Conventional Weapons hosted by the Campaign to Stop Killer Robots.

Russell, in an appearance at the end of the video, warns that the technology described in the film already exists and that the window to act is closing fast.

Support for a ban has been mounting. Just this past week, over 200 Canadian scientists and over 100 Australian scientists in academia and industry penned open letters to Prime Ministers Justin Trudeau and Malcolm Turnbull, respectively, urging them to support the ban. Earlier this summer, over 130 leaders of AI companies signed a letter in support of this week’s discussions. These letters follow a 2015 open letter released by the Future of Life Institute and signed by more than 20,000 AI/Robotics researchers and others, including Elon Musk and Stephen Hawking.

These letters indicate both grave concern and a sense that the opportunity to curtail lethal autonomous weapons is running out.

Noel Sharkey of the International Committee for Robot Arms Control explains, “The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world. Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.”

Drone technology today is very close to having fully autonomous capabilities. And many of the world’s leading AI researchers worry that if these autonomous weapons are ever developed, they could dramatically lower the threshold for armed conflict, ease and cheapen the taking of human life, empower terrorists, and create global instability. The US and other nations have used drones and semi-automated systems to carry out attacks for several years now, but fully removing a human from the loop is at odds with international humanitarian and human rights law.

A ban can exert great power on the trajectory of technological development without needing to stop every instance of misuse. Max Tegmark, MIT Professor and co-founder of the Future of Life Institute, points out, “People’s knee-jerk reaction that bans can’t help isn’t historically accurate: the bioweapon ban created such a powerful stigma that, despite treaty cheating, we have almost no bioterror attacks today and almost all biotech funding is civilian.”

As Toby Walsh, an AI professor at the University of New South Wales, argues: “The academic community has sent a clear and consistent message. Autonomous weapons will be weapons of terror, the perfect tool for those who have no qualms about the terrible uses to which they are put. We need to act now before this future arrives.”

More than 70 countries are participating in the meeting taking place November 13–17 of the Group of Governmental Experts on lethal autonomous weapons, which was established by the UN’s 2016 Fifth Review Conference of the Convention on Conventional Weapons. The meeting is chaired by Ambassador Amandeep Singh Gill of India, and the countries will continue negotiating what could become a historic international treaty.

For more information about autonomous weapons, see the following resources:

55 Years After Preventing Nuclear Attack, Arkhipov Honored With Inaugural Future of Life Award

Click here to see this page in other languages: Russian

London, UK – On October 27, 1962, a soft-spoken naval officer named Vasili Arkhipov single-handedly prevented nuclear war during the height of the Cuban Missile Crisis. Arkhipov’s submarine captain, thinking their sub was under attack by American forces, wanted to launch a nuclear weapon at the ships above. Arkhipov, with the power of veto, said no, thus averting nuclear war.

Now, 55 years after his courageous actions, the Future of Life Institute has presented the Arkhipov family with the inaugural Future of Life Award to honor humanity’s late hero.

Arkhipov’s surviving family members, represented by his daughter Elena and grandson Sergei, flew into London for the ceremony, which was held at the Institute of Engineering & Technology. After explaining Arkhipov’s heroics to the audience, Max Tegmark, president of FLI, presented the Arkhipov family with their award and $50,000. Elena and Sergei were both honored by the gesture and by the overall message of the award.

Elena explained that her father “always thought that he did what he had to do and never considered his actions as heroism. … Our family is grateful for the prize and considers it as a recognition of his work and heroism. He did his part for the future so that everyone can live on our planet.”

Elena and Sergei with the Future of Life Award

The Future of Life Award seeks to recognize and reward those who take exceptional measures to safeguard the collective future of humanity. Arkhipov, whose courage and composure potentially saved billions of lives, was an obvious choice for the inaugural event.

“Vasili Arkhipov is arguably the most important person in modern history, thanks to whom October 27, 2017 isn’t the 55th anniversary of World War III,” FLI president Max Tegmark explained. “We’re showing our gratitude in a way he’d have appreciated, by supporting his loved ones.”

The award also aims to foster a dialogue about the growing existential risks that humanity faces, and the people that work to mitigate them.

Jaan Tallinn, co-founder of FLI, said: “Given that this century will likely bring technologies that can be even more dangerous than nukes, we will badly need more people like Arkhipov — people who will represent humanity’s interests even in the heated moments of a crisis.”

FLI president Max Tegmark presenting the Future of Life Award to Arkhipov’s daughter, Elena, and grandson, Sergei.

 

Arkhipov’s Story

On October 27, 1962, during the Cuban Missile Crisis, eleven US Navy destroyers and the aircraft carrier USS Randolph had cornered the Soviet submarine B-59 near Cuba, in international waters outside the US “quarantine” area. Arkhipov was one of the officers on board. The crew had had no contact with Moscow for days and didn’t know whether World War III had already begun. Then the Americans started dropping small depth charges, which, unbeknownst to the crew, they had informed Moscow were merely meant to force the sub to surface and leave.

“We thought – that’s it – the end”, crewmember V.P. Orlov recalled. “It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer.”

What the Americans didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. As the depth charges intensified and temperatures onboard climbed above 45°C (113°F), many crew members fainted from carbon dioxide poisoning, and in the midst of this panic, Captain Savitsky decided to launch their nuclear weapon.

“Maybe the war has already started up there,” he shouted. “We’re gonna blast them now! We will die, but we will sink them all – we will not disgrace our Navy!”

The combination of depth charges, extreme heat, stress, and isolation from the outside world almost lit the fuse of full-scale nuclear war. But it didn’t. The decision to launch a nuclear weapon had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no.

Amidst the panic, the 34-year-old Arkhipov remained calm and tried to talk Captain Savitsky down. He eventually convinced Savitsky that the depth charges were signals for the Soviet submarine to surface, and the sub surfaced safely and headed north, back to the Soviet Union.

It is sobering that very few have heard of Arkhipov, although his decision was perhaps the most valuable individual contribution to human survival in modern history. PBS made a documentary, The Man Who Saved the World, documenting Arkhipov’s moving heroism, and National Geographic profiled him as well in an article titled “You (and almost everyone you know) Owe Your Life to This Man.”

The Cold War never became a hot war, in large part thanks to Arkhipov, but the threat of nuclear war remains high. Beatrice Fihn, Executive Director of the International Campaign to Abolish Nuclear Weapons (ICAN) and this year’s recipient of the Nobel Peace Prize, hopes that the Future of Life Award will help draw attention to the current threat of nuclear weapons and encourage more people to stand up to that threat. Fihn explains: “Arkhipov’s story shows how close to nuclear catastrophe we have been in the past. And as the risk of nuclear war is on the rise right now, all states must urgently join the Treaty on the Prohibition of Nuclear Weapons to prevent such catastrophe.”

Of her father’s role in preventing nuclear catastrophe, Elena explained: “We must strive so that the powerful people around the world learn from Vasili’s example. Everybody with power and influence should act within their competence for world peace.”

An Open Letter to the United Nations Convention on Certain Conventional Weapons


As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We regret that the GGE’s first meeting, which was due to start today (August 21, 2017), has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

Translations: Chinese, Japanese, Russian

FULL LIST OF SIGNATORIES TO THE OPEN LETTER

To add your company, please contact Toby Walsh at tw@cse.unsw.edu.au.

Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
Charles Gretton, founder of Hivery, Australia.
Brad Lorge, founder & CEO of Premonition.io, Australia
Brenton O’Brien, founder & CEO of Microbric, Australia.
Samir Sinha, founder & CEO of Robonomics AI, Australia.
Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
Peter Turner, founder & MD of Tribotix, Australia.
Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
Geoffrey Hinton, founder of DNNResearch Inc, Canada.
James Chow, founder & CEO of UBTECH Robotics, China.
Robert Li, founder & CEO of Sankobot, China.
Marek Rosa, founder & CEO of GoodAI, Czech Republic.
Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
Markus Järve, founder & CEO of Krakul, Estonia.
Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
Esben Østergaard, founder & CTO of Universal Robots, Denmark.
Raul Bravo, founder & CEO of DIBOTICS, France.
Ivan Burdun, founder & President of AIXTREE, France.
Raphael Cherrier, founder & CEO of Qucit, France.
Alain Garnier, founder & CEO of ARISEM (acquired by Thales), founder & CEO of Jamespot, France.
Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
Charles Ollion, founder & Head of Research at Heuritech, France.
Anis Sahbani, founder & CEO of Enova Robotics, France.
Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
Marcus Frei, founder & CEO of NEXT.robotics, Germany.
Kristinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
Fahad Azad, founder of Robosoft Systems, India.
Debashis Das, Ashish Tupate & Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
Pranay Kishore, founder & CEO of Phi Robotics Research, India.
Shahid Memon, founder & CTO of Vanora Robots, India.
Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
Achu Wilson, founder & CTO of Sastra Robotics, India.
Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
Parsa Ghaffari, founder & CEO of Aylien, Ireland.
Alan Holland, founder & CEO of Keelvar Systems, Ireland.
Alessandro Prest, founder & CTO of LogoGrab, Ireland.
Frank Reeves, founder & CEO of Avvio, Ireland.
Alessio Bonfietti, founder & CEO of MindIT, Italy.
Angelo Sudano, founder & CTO of ICan Robotics, Italy.
Domenico Talia, founder and R&D Director of DtoK Labs, Italy.
Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
Andrejs Vasiljevs, founder and CEO of Tilde, Latvia.
Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
Rob Brouwer, founder and Director of Operations, Aeronavics, New Zealand.
Philip Solaris, founder and CEO of X-Craft Enterprises, New Zealand.
Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
Jasper Horrell, founder of DeepData, South Africa.
Onno Huyser and Mark van Wyk, founders of FlyH2 Aerospace, South Africa.
Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
Victor Martin, founder & CEO of Macco Robotics, Spain.
Angel Lis Montesinos, founder & CTO of Neuronalbite, Spain.
Timothy Llewellynn, founder & CEO of nViso, Switzerland.
Francesco Mondada, founder of K-Team, Switzerland.
Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
Satish Ramachandran, founder of AROBOT, United Arab Emirates.
Silas Adekunle, founder & CEO of Reach Robotics, UK.
Steve Allpress, founder & CTO of FiveAI, UK.
John Bishop, founder and Director of Tungsten Centre for Intelligent Data Analytics, UK.
Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
Daniel Hulme, founder & CEO of Satalia, UK.
Bradley Kieser, founder & Director of SMS Speedway, UK.
Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
Geoff Pegman, founder & MD of R U Robots, UK.
Demis Hassabis & Mustafa Suleyman, founders, CEO & Head of Applied AI, DeepMind, UK.
Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
Antoine Blondeau, founder & CEO of Sentient Technologies, USA.
Steve Cousins, founder & CEO of Savioke, USA.
Brian Gerkey, founder & CEO of Open Source Robotics, USA.
Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
John Hobart, founder & CEO of Coria, USA.
Henry Hu, founder & CEO of Cafe X Technologies, USA.
Zaib Husain, founder and CEO of Makerarm, Inc.
Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
Kris Kitchen, founder & Chief Data Scientist at Qieon Research, USA.
Justin Lane, founder of Prospecture Simulation, USA.
Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
Brian Mingus, founder & CTO of Latently, USA.
Mohammad Musa, founder & CEO at Deepen AI, USA.
Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motors, USA.
Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
Erik Nieves, founder & CEO of PlusOne Robotics, USA.
Steve Omohundro, founder & President of Possibility Research, USA.
Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
Greg Phillips, founder & CEO, ThinkIt Data Solutions, USA.
Dan Reuter, founder & CEO of Electric Movement, USA.
Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
Dan Rubins, founder & CEO of Legal Robot, USA.
Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
Andrew Schroeder, founder of WeRobotics, USA.
Stanislav Shalunov, founder & CEO of Clostra, USA.
Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
Martin Spencer, founder & CEO of GeckoSystems, USA.
Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
Michael Stuart, founder & CEO of Lucid Holdings, USA.
Madhuri Trivedi, founder & CEO of OrangeHC, USA.
Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.
Reza Zadeh, founder & CEO of Matroid, USA.

Superintelligence survey

Click here to see this page in other languages: Chinese, French, German, Japanese, Russian

The Future of AI – What Do You Think?

Max Tegmark’s new book on artificial intelligence, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will impact life as it grows increasingly advanced, perhaps even achieving superintelligence far beyond human level in all areas. For the book, Max surveys experts’ forecasts, and explores a broad spectrum of views on what will/should happen. But it’s time to expand the conversation. If we’re going to create a future that benefits as many people as possible, we need to include as many voices as possible. And that includes yours! Below are the answers from the first 14,866 people who have taken the survey that goes along with Max’s book. To join the conversation yourself, please take the survey here.


How soon, and should we welcome or fear it?

The first big controversy, dividing even leading AI researchers, involves forecasting what will happen. When, if ever, will AI outperform humans at all intellectual tasks, and will it be a good thing?

Do you want superintelligence?

Everything we love about civilization is arguably the product of intelligence, so we can potentially do even better by amplifying human intelligence with machine intelligence. But some worry that superintelligent machines would end up controlling us and wonder whether their goals would be aligned with ours. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?

What Should the Future Look Like?

In his book, Tegmark argues that we shouldn’t passively ask “what will happen?” as if the future is predetermined, but instead ask what we want to happen and then try to create that future.  What sort of future do you want?

If superintelligence arrives, who should be in control?
If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie which can at best pretend to be conscious)?
What should a future civilization strive for?
Do you want life spreading into the cosmos?

The Ideal Society?

In Life 3.0, Max explores 12 possible future scenarios, describing what might happen in the coming millennia if superintelligence is/isn’t developed. You can find a cheatsheet that quickly describes each here, but for a more detailed look at the positives and negatives of each possibility, check out chapter 5 of the book. Here’s a breakdown so far of the options people prefer:

You can learn a lot more about these possible future scenarios — along with fun explanations about what AI is, how it works, how it’s impacting us today, and what else the future might bring — when you order Max’s new book.

The results above will be updated regularly. Please add your voice by taking the survey here, and share your comments below!

United Nations Adopts Ban on Nuclear Weapons

Today, 72 years after their invention, states at the United Nations formally adopted a treaty which categorically prohibits nuclear weapons.

With 122 votes in favor, one vote against, and one country abstaining, the “Treaty on the Prohibition of Nuclear Weapons” was adopted Friday morning and will open for signature by states at the United Nations in New York on September 20, 2017. Civil society organizations and more than 140 states have participated throughout negotiations.

On adoption of the treaty, ICAN Executive Director Beatrice Fihn said:

“We hope that today marks the beginning of the end of the nuclear age. It is beyond question that nuclear weapons violate the laws of war and pose a clear danger to global security. No one believes that indiscriminately killing millions of civilians is acceptable – no matter the circumstance – yet that is what nuclear weapons are designed to do.”

In a public statement, Former Secretary of Defense William Perry said:

“The new UN Treaty on the Prohibition of Nuclear Weapons is an important step towards delegitimizing nuclear war as an acceptable risk of modern civilization. Though the treaty will not have the power to eliminate existing nuclear weapons, it provides a vision of a safer world, one that will require great purpose, persistence, and patience to make a reality. Nuclear catastrophe is one of the greatest existential threats facing society today, and we must dream in equal measure in order to imagine a world without these terrible weapons.”

Until now, nuclear weapons were the only weapons of mass destruction without a prohibition treaty, despite the widespread and catastrophic humanitarian consequences of their intentional or accidental detonation. Biological weapons were banned in 1972 and chemical weapons in 1992.

This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate tools of war. The repeated objection and boycott of the negotiations by many nuclear-weapon states demonstrates that this treaty has the potential to significantly impact their behavior and stature. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviors, even in states not party to the treaty.

“This is a triumph for global democracy, where the pro-nuclear coalition of Putin, Trump and Kim Jong-Un were outvoted by the majority of Earth’s countries and citizens,” said MIT Professor and FLI President Max Tegmark.

“The strenuous and repeated objections of nuclear armed states is an admission that this treaty will have a real and lasting impact,” Fihn said.

The treaty also creates obligations to support the victims of nuclear weapons use (Hibakusha) and testing and to remediate the environmental damage caused by nuclear weapons.

From the beginning, the effort to ban nuclear weapons has benefited from the broad support of international humanitarian, environmental, nonproliferation, and disarmament organizations in more than 100 states. Significant political and grassroots organizing has taken place around the world, and many thousands have signed petitions, joined protests, contacted representatives, and pressured governments.

“The UN treaty places a strong moral imperative against possessing nuclear weapons and gives a voice to some 130 non-nuclear weapons states who are equally affected by the existential risk of nuclear weapons. … My hope is that this treaty will mark a sea change towards global support for the abolition of nuclear weapons. This global threat requires unified global action,” said Perry.

Fihn added, “Today the international community rejected nuclear weapons and made it clear they are unacceptable. It is time for leaders around the world to match their values and words with action by signing and ratifying this treaty as a first step towards eliminating nuclear weapons.”

 

Images courtesy of ICAN.

 

WHAT THE TREATY DOES

Comprehensively bans nuclear weapons and related activity. It will be illegal for parties to undertake any activities related to nuclear weapons. The treaty bans the use, development, testing, production, manufacture, acquisition, possession, stockpiling, transfer, receipt, threat of use, stationing, installation, and deployment of nuclear weapons. [Article 1]

Bans any assistance with prohibited acts. The treaty bans assistance with prohibited acts and should be interpreted as prohibiting states from engaging in military preparations and planning to use nuclear weapons, from financing their development and manufacture, and from permitting their transit through territorial waters or airspace. [Article 1]

Creates a path for nuclear states which join to eliminate weapons, stockpiles, and programs. It requires states with nuclear weapons that join the treaty to remove them from operational status and destroy them and their programs, all according to plans they would submit for approval. It also requires states which host other countries’ weapons on their territory to have them removed. [Article 4]

Verifies and safeguards that states meet their obligations. The treaty requires a verifiable, time-bound, transparent, and irreversible destruction of nuclear weapons and programs and requires the maintenance and/or implementation of international safeguards agreements. The treaty permits safeguards to become stronger over time and prohibits weakening of the safeguard regime. [Articles 3 and 4]

Requires victim and international assistance and environmental remediation. The treaty requires states to assist victims of nuclear weapons use and testing, and requires environmental remediation of contaminated areas. The treaty also obliges states to provide international assistance to support the implementation of the treaty. The text requires states to join the Treaty, and to encourage others to join, as well as to meet regularly to review progress. [Articles 6, 7, and 8]

NEXT STEPS

Opening for signature. The treaty will be open for signature on 20 September at the United Nations in New York. [Article 13]

Entry into force. Fifty states are required to ratify the treaty for it to enter into force. At a national level, the process of ratification varies, but it usually requires parliamentary approval and the development of legislation to turn the treaty’s prohibitions into national law. This process is also an opportunity to elaborate additional measures, such as prohibiting the financing of nuclear weapons. [Article 15]

First meeting of States Parties. The first Meeting of States Parties will take place within a year after the entry into force of the Convention. [Article 8]

SIGNIFICANCE AND IMPACT OF THE TREATY

Delegitimizes nuclear weapons. This treaty is a clear indication that the majority of the world no longer accepts nuclear weapons and does not consider them legitimate weapons, creating the foundation of a new norm of international behaviour.

Changes party and non-party behaviour. As has been true with previous weapon prohibition treaties, changing international norms leads to concrete changes in policies and behaviours, even in states not party to the treaty. This is true for treaties ranging from those banning cluster munitions and land mines to the Convention on the Law of the Sea. The prohibition on assistance will play a significant role in changing behaviour, given the impact it may have on financing and on military planning and preparation for the use of nuclear weapons.

Completes the prohibitions on weapons of mass destruction. The treaty completes work begun in the 1970s, when biological weapons were banned, and the 1990s, when chemical weapons were banned.

Strengthens International Humanitarian Law (“Laws of War”). Nuclear weapons are intended to kill millions of civilians – non-combatants – a gross violation of International Humanitarian Law. Few would argue that the mass slaughter of civilians is acceptable, and there is no way to use a nuclear weapon in line with international law. The treaty strengthens these bodies of law and norms.

Removes the prestige associated with proliferation. Countries often seek nuclear weapons for the prestige of being seen as part of an important club. By making nuclear weapons an object of scorn rather than achievement, the treaty can deter their spread.

FLI sought to increase support for the negotiations from the scientific community this year. We organized an open letter signed by over 3,700 scientists in 100 countries, including 30 Nobel Laureates. You can see the letter here and the video we presented recently at the UN here.

This post is a modified version of the press release provided by the International Campaign to Abolish Nuclear Weapons (ICAN).

Hawking, Higgs and Over 3,000 Other Scientists Support UN Nuclear Ban Negotiations

Click here to see this page in other languages: Chinese  

Delegates from most UN member states are gathering in New York to negotiate a nuclear weapons ban, where they will also receive a letter of support that has been signed by thousands of scientists from more than 80 countries – including 28 Nobel Laureates and a former US Secretary of Defense. “Scientists bear a special responsibility for nuclear weapons, since it was scientists who invented them and discovered that their effects are even more horrific than first thought”, the letter explains.

The letter was delivered at a ceremony at 1pm on Monday March 27 in the UN General Assembly Hall to Her Excellency Ms. Elayne Whyte Gómez from Costa Rica, who is presiding over the negotiations.

Despite all the attention to nuclear terrorism and nuclear rogue states, one of the greatest threats from nuclear weapons has always been mishaps and accidents among the established nuclear nations. With political tensions and instability increasing, this threat is growing to alarming levels: “The probability of a nuclear calamity is higher today, I believe, than it was during the Cold War,” according to former U.S. Secretary of Defense William J. Perry, who signed the letter.

“Nuclear weapons represent one of the biggest threats to our civilization. With the unpredictability of the current world situation, it is more important than ever to get negotiations about a ban on nuclear weapons on track, and to make these negotiations a truly global effort,” says neuroscience professor Edvard Moser from Norway, 2014 Nobel Laureate in Physiology/Medicine.

Professor Wolfgang Ketterle from MIT, 2001 Nobel Laureate in Physics, agrees: “I see nuclear weapons as a real threat to the human race and we need an international consensus to reduce this threat.”

Currently, the US and Russia have about 14,000 nuclear weapons combined, many on hair-trigger alert and ready to be launched on minutes’ notice, even though a Pentagon report argued that a few hundred would suffice for rock-solid deterrence. Yet rather than trim their excess arsenals, the superpowers plan massive investments to replace their nuclear weapons with new, destabilizing ones better suited to a first-strike attack.

“Unlike many of the world’s leaders I care deeply about the future of my grandchildren. Even the remote possibility of a nuclear war presents an unconscionable threat to their welfare. We must find a way to eliminate nuclear weapons,” says Sir Richard J. Roberts, 1993 Nobel Laureate in Physiology or Medicine.

“Most governments are frustrated that a small group of countries with a small fraction of the world’s population insist on retaining the right to ruin life on Earth for everyone else with nuclear weapons, ignoring their disarmament promises in the non-proliferation treaty”, says physics professor Max Tegmark from MIT, who helped organize the letter. “In South Africa, the minority in control of the unethical Apartheid system didn’t give it up spontaneously on their own initiative, but because they were pressured into doing so by the majority. Similarly, the minority in control of unethical nuclear weapons won’t give them up spontaneously on their own initiative, but only if they’re pressured into doing so by the majority of the world’s nations and citizens.”

The idea behind the proposed ban is to provide such pressure by stigmatizing nuclear weapons.

Beatrice Fihn, who helped launch the ban movement as Executive Director of the International Campaign to Abolish Nuclear Weapons, explains that such stigmatization made the landmine and cluster munitions bans succeed and can succeed again: “The market for landmines is pretty much extinct—nobody wants to produce them anymore because countries have banned and stigmatized them.  Just a few years ago, the United States—who never signed the landmines treaty—announced that it’s basically complying with the treaty. If the world comes together in support of a nuclear ban, then nuclear weapons countries will likely follow suit, even if it doesn’t happen right away.”

Susi Snyder from the Dutch “Don’t Bank on the Bomb” project explains:

“If you prohibit the production, possession, and use of these weapons and the assistance with doing those things, we’re setting a stage to also prohibit the financing of the weapons. And that’s one way that I believe the ban treaty is going to have a direct and concrete impact on the ongoing upgrades of existing nuclear arsenals, which are largely being carried out by private contractors.”

“Nuclear arms are the only weapons of mass destruction not yet prohibited by an international convention, even though they are the most destructive and indiscriminate weapons ever created”, the letter states, motivating a ban.

“The horror that happened at Hiroshima and Nagasaki should never be repeated.  Nuclear weapons should be banned,” says Columbia University professor Martin Chalfie, 2008 Nobel Laureate in Chemistry.

Norwegian neuroscience professor May-Britt Moser, a 2014 Nobel Laureate in Physiology/Medicine, says, “In a world with increased aggression and decreasing diplomacy, the availability of nuclear weapons is more dangerous than ever. Politicians are urged to ban nuclear weapons. The world today and future generations depend on that decision.”

The open letter: https://futureoflife.org/nuclear-open-letter/

A Principled AI Discussion in Asilomar

We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best.

This sense among the attendees echoes a wider societal engagement with AI that has heated up dramatically over the past few years. Due to this rising awareness of AI, dozens of major reports have emerged from academia (e.g. the Stanford 100 year report), government (e.g. two major reports from the White House), industry (e.g. materials from the Partnership on AI), and the nonprofit sector (e.g. a major IEEE report).

In planning the Asilomar meeting, we hoped both to create meaningful discussion among the attendees, and also to see what, if anything, this rather heterogeneous community actually agreed on. We gathered all the reports we could and compiled a list of scores of opinions about what society should do to best manage AI in coming decades. From this list, we looked for overlaps and simplifications, attempting to distill as much as we could into a core set of principles that expressed some level of consensus. But this “condensed” list still included ambiguity, contradiction, and plenty of room for interpretation and worthwhile discussion.

Leading up to the meeting, we extensively surveyed meeting participants about the list, gathering feedback, evaluation, and suggestions for improved or novel principles. The responses were folded into a significantly revised version for use at the meeting. In Asilomar, we gathered more feedback in two stages. First, small breakout groups discussed subsets of the principles, giving detailed refinements and commentary on them. This process generated improved versions (in some cases multiple new competing versions) and a few new principles. Finally, we surveyed the full set of attendees to determine the level of support for each version of each principle.

After such detailed, thorny and sometimes contentious discussions and a wide range of feedback, we were frankly astonished at the high level of consensus that emerged around many of the statements during that final survey. This consensus allowed us to set a high bar for inclusion in the final list: we only retained principles if at least 90% of the attendees agreed on them.

What remained was a list of 23 principles ranging from research strategies to data rights to future issues, including potential superintelligence, which was signed by those wishing to associate their name with the list. This collection of principles is by no means comprehensive, and it’s certainly open to differing interpretations, but it also highlights how the current “default” behavior around many relevant issues could violate principles that most participants agreed are important to uphold.

We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.

To start the discussion, here are some of the things other AI researchers who signed the Principles had to say about them.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“Value alignment is a big one. Robots aren’t going to try to revolt against humanity, but they’ll just try to optimize whatever we tell them to do. So we need to make sure to tell them to optimize for the world we actually want.”

-Anca Dragan, Assistant Professor in the EECS Department at UC Berkeley, and co-PI for the Center for Human Compatible AI
Read her complete interview here.

Shared Prosperity
“I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously — I mean for their personal gain. And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

-Yoshua Bengio, Professor of CSOR at the University of Montreal, and head of the Montreal Institute for Learning Algorithms (MILA)
Read his complete interview here.

Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
“I believe that AI will create profound change even before it is ‘advanced’ and thus we need to plan and manage growth of the technology. As humans we are not good at long-term planning because our civil systems don’t encourage it, however, this is an area in which we must develop our abilities to ensure a responsible and beneficial partnership between man and machine.”

-Kay Firth-Butterfield, Executive Director of AI-Austin.org, and an adjunct Professor of Law at the University of Texas at Austin
Read her complete interview here.

Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
“It’s absolutely crucial that individuals should have the right to manage access to the data they generate… AI does open new insight to individuals and institutions. It creates a persona for the individual or institution – personality traits, emotional make-up, lots of the things we learn when we meet each other. AI will do that too and it’s very personal. I want to control how [my] persona is created. A persona is a fundamental right.”

-Guruduth Banavar, VP, IBM Research, Chief Science Officer, Cognitive Computing
Read his complete interview here.

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
“The one closest to my heart. … AI systems should behave in a way that is aligned with human values. But actually, I would be even more general than what you’ve written in this principle. Because this principle has to do not only with autonomous AI systems, but I think this is very important and essential also for systems that work tightly with humans in the loop, and also where the human is the final decision maker. Because when you have human and machine tightly working together, you want this to be a real team. So you want the human to be really sure that the AI system works with values aligned to that person. It takes a lot of discussion to understand those values.”

-Francesca Rossi, Research scientist at the IBM T.J. Watson Research Centre, and a professor of computer science at the University of Padova, Italy, currently on leave
Read her complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“One reason that I got involved in these discussions is that there are some topics I think are very relevant today, and one of them is the arms race that’s happening amongst militaries around the world already, today. This is going to be very destabilizing. It’s going to upset the current world order when people get their hands on these sorts of technologies. It’s actually stupid AI that they’re going to be fielding in this arms race to begin with and that’s actually quite worrying – that it’s technologies that aren’t going to be able to distinguish between combatants and civilians, and aren’t able to act in accordance with international humanitarian law, and will be used by despots and terrorists and hacked to behave in ways that are completely undesirable. And that’s something that’s happening today. You have to see the recent segment on 60 Minutes to see the terrifying swarms of robot UAVs that the American military is now experimenting with.”

-Toby Walsh, Guest Professor at Technical University of Berlin, Professor of Artificial Intelligence at the University of New South Wales, and leads the Algorithmic Decision Theory group at Data61, Australia’s Centre of Excellence for ICT Research
Read his complete interview here.

AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
“I’m not a fan of wars, and I think it could be extremely dangerous. Obviously I think that the technology has a huge potential, and even just with the capabilities we have today it’s not hard to imagine how it could be used in very harmful ways. I don’t want my contributions to the field and any kind of techniques that we’re all developing to do harm to other humans or to develop weapons or to start wars or to be even more deadly than what we already have.”

-Stefano Ermon, Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory
Read his complete interview here.

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“I agree! As a scientist, I’m against making strong or unjustified assumptions about anything, so of course I agree. Yet this principle bothers me … because it seems to be implicitly saying that there is an immediate danger that AI is going to become superhumanly, generally intelligent very soon, and we need to worry about this issue. This assertion … concerns me because I think it’s a distraction from what are likely to be much bigger, more important, more near term, potentially devastating problems. I’m much more worried about job loss and the need for some kind of guaranteed health-care, education and basic income than I am about Skynet. And I’m much more worried about some terrorist taking an AI system and trying to program it to kill all Americans than I am about an AI system suddenly waking up and deciding that it should do that on its own.”

-Dan Weld, Professor of Computer Science & Engineering and Entrepreneurial Faculty Fellow at the University of Washington
Read his complete interview here.

Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
“In many areas of computer science, such as complexity or cryptography, the default assumption is that we deal with the worst case scenario. Similarly, in AI Safety, we should assume that AI will become maximally capable and prepare accordingly. If we are wrong, we will still be in great shape.”

-Roman Yampolskiy, Associate Professor of CECS at the University of Louisville, and founding director of the Cyber Security Lab
Read his complete interview here.

Obama’s Nuclear Legacy

The following article and infographic were originally posted on Futurism.

The most destructive device that humanity ever created is the nuclear bomb. It’s a technology capable of unparalleled devastation; it’s a technology that the United Nations classifies as “the most dangerous weapon on Earth.”

One bomb can destroy a whole city in seconds, and in so doing, end the lives of millions of people (depending on where it is dropped). If that’s not enough, it can throw the natural environment into chaos. We know this because we’ve used them before.

The first device of this kind was unleashed at approximately 8:15 am on August 6th, 1945, when a US B-29 bomber dropped an atomic bomb on the Japanese city of Hiroshima. It killed around 80,000 people instantly. Over the coming years, many more would succumb to radiation sickness. All in all, it is estimated that over 200,000 people died as a result of the nuclear blasts in Japan.

How far have we come since then? How many bombs do we have at our disposal? Here’s a look at our legacy.

EA Global X Boston Conference

The first EA Global X conference, EAGxBoston, is being held at MIT on April 30th, 12:30-6:30pm. Boston EAs have created an incredible lineup bringing together a who’s who of researchers, EAs, EA orgs, and up-and-coming orgs including:
Dean Karlan (Yale, Innovations for Poverty Action)
Joshua Greene (Harvard, Moral Cognition Lab)
Rachel Glennerster (MIT, Poverty Action Lab)
Piali Mukhopadhyay (GiveDirectly)
Julia Wise (The Centre for Effective Altruism)
Ian Ross (Hampton Creek, Facebook)
Allison Smith (Animal Charity Evaluators)
Elizabeth Pearce (Boston University, Iodine Global Network)
Cher-Wen DeWitt (One Acre Fund)
Rhonda Zapatka (Trickle Up)
Elijah Goldberg (ImpactMatters)
Jason Ketola (MaxMind)
Lucia Sanchez (Innovations for Poverty Action)
Sharon Nunez Gough (Animal Equality)
Bruce Friedrich (The Good Food Institute, New Crop Capital)
Jon Camp (The Humane League)
Victoria Krakovna (Harvard, Future of Life Institute)
Eric Gastfriend (Harvard Business School EA, FLI, and formerly 80,000 Hours)
Dillon Bowen (Tufts EA, formerly 80,000 Hours and Giving What We Can)
Jason Trigg (earning-to-give at a startup and formerly as a hedge fund quant)
and more

The day will be filled with talks, panels, and networking opportunities. The program will address the major effective altruist cause areas of global health, poverty, and development, animal agriculture, and global catastrophic risk, as well as movement concerns like conducting research, building community, and choosing a career direction. We will also be introducing some up-and-coming organizations.

FLI’s Victoria Krakovna, Richard Mallah, and Lucas Perry will participate in a panel about global catastrophic risks.

More information and registration can be found on the conference website:
http://eagxboston.com

All proceeds after our minimum costs will be donated to EA charities. If you need a tax-receipt, please contact Randy Carlton <[masked]>. Please note that the early bird special ends on April 19th.

We have a limited amount of space, so if you’d like to join, please register today and share this invitation with interested friends via our Facebook group:
https://www.facebook.com/EAGxBoston/

Let’s get together and learn what we can do even better together!

EAGxBoston Team from MIT Sloan EA, MIT EA, Tufts EA, Harvard EA, HBS EA, Animal Charity Evaluators and The Commonwealth Market
http://eagxboston.com

Hawking Says ‘Don’t Bank on the Bomb’ and Cambridge Votes to Divest $1 Billion From Nuclear Weapons

1,000 nuclear weapons are more than enough to deter any nation from nuking the US, but we’re hoarding over 7,000, and a long string of near misses has highlighted the continuing risk of an accidental nuclear war, which could trigger a nuclear winter, potentially killing most people on Earth. Yet rather than trimming our excess nukes, we’re planning to spend $4 million per hour for the next 30 years making them more lethal.

Although I’m used to politicians wasting my tax dollars, I was shocked to realize that I was voluntarily using my money for this nuclear boondoggle by investing in the very companies that are lobbying for and building new nukes: some of the money in my bank account gets loaned to them, and my S&P500 mutual fund invests in them. “If you want to slow the nuclear arms race, then put your money where your mouth is and don’t bank on the bomb!”, my physics colleague Stephen Hawking told me. To make it easier for others to follow his sage advice, I made an app for that together with my friends at the Future of Life Institute, and launched this “Brief History of Nukes” that’s 3:14 long in honor of Hawking’s fascination with pi.

Our campaign got off to an amazing start this weekend at an MIT conference, where our mayor, Denise Simmons, announced that the Cambridge City Council has unanimously decided to divest its billion-dollar city pension fund from nuclear weapons production. “Not in our name!”, she said, and drew a standing ovation. “It’s my hope that this will inspire other municipalities, companies and individuals to look at their investments and make similar moves.”

“In Europe, over 50 large institutions have already limited their nuclear weapon investments, but this is our first big success in America”, said Susi Snyder, who leads the global nuclear divestment campaign dontbankonthebomb.com. Boston College philosophy major Lucas Perry, who led the effort to persuade Cambridge to divest, hopes that this online analysis tool will create a domino effect: “I want to empower other students opposing the nuclear arms race to persuade their own towns and universities to follow suit.”

Many financial institutions now offer mutual funds that cater to the growing interest in socially responsible investing, including Ariel, Calvert, Domini, Neuberger, Parnassus, Pax World and TIAA-CREF. “We appreciate and share Cambridge’s desire to exclude nuclear weapons production from its pension fund. Pension funds are meant to serve the long-term needs of retirees, a service that nuclear weapons do not offer”, said Julie Fox Gorte, Senior Vice President for Sustainable Investing at Pax World.

“Divestment is a powerful way to stigmatize the nuclear arms race through grassroots campaigning, without having to wait for politicians who aren’t listening”, said conference co-organizer Cole Harrison, Executive Director of Massachusetts Peace Action, the nation’s largest grassroots peace organization. “If you’re against spending more money making us less safe, then make sure it’s not your money.”

You’ll find our divestment app here. If you’d like to persuade your own municipality to follow Cambridge’s lead, using their policy order as a model, here it is:

WHEREAS: Nations across the globe still maintain over 15,000 nuclear weapons, some of which are hundreds of times more powerful than those that obliterated Hiroshima and Nagasaki, and detonation of even a small fraction of these weapons could create a decade-long nuclear winter that could destroy most of the Earth’s population; and
WHEREAS: The United States has plans to invest roughly one trillion dollars over the coming decades to upgrade its nuclear arsenal, which many experts believe actually increases the risk of nuclear proliferation, nuclear terrorism, and accidental nuclear war; and
WHEREAS: In a period where federal funds are desperately needed in communities like Cambridge in order to build affordable housing, improve public transit, and develop sustainable energy sources, our tax dollars are being diverted to and wasted on nuclear weapons upgrades that would make us less safe; and
WHEREAS: Investing in companies producing nuclear weapons implicitly supports this misdirection of our tax dollars; and
WHEREAS: Socially responsible mutual funds and other investment vehicles are available that accurately match the current asset mix of the City of Cambridge Retirement Fund while excluding nuclear weapons producers; and
WHEREAS: The City of Cambridge is already on record in supporting the abolition of nuclear weapons, opposing the development of new nuclear weapons, and calling on President Obama to lead the nuclear disarmament effort; now therefore be it
ORDERED: That the City Council go on record opposing investing funds from the Cambridge Retirement System in any entities that are involved in or support the production or upgrading of nuclear weapons systems; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Cambridge Peace Commissioner and other appropriate City staff to organize an informational forum on possibilities for Cambridge individuals and institutions to divest their pension funds from investments in nuclear weapons contractors; and be it further
ORDERED: That the City Manager be and hereby is requested to work with the Board of the Cambridge Retirement System and other appropriate City staff to ensure divestment from all companies involved in production of nuclear weapons systems, and in entities investing in such companies, and the City Manager is requested to report back to the City Council about the implementation of said divestment in a timely manner.

AAAI Safety Workshop Highlights: Debate, Discussion, and Future Research

The 30th annual Association for the Advancement of Artificial Intelligence (AAAI) conference kicked off on February 12 with two days of workshops, followed by the main conference, which is taking place this week. FLI is honored to have been a part of the AI, Ethics, and Safety Workshop that took place on Saturday, February 13.

Phoenix Convention Center where AAAI 2016 is taking place.

The workshop featured many fascinating talks and discussions, but perhaps the most contested and controversial was that by Toby Walsh, titled, “Why the Technological Singularity May Never Happen.”

Walsh explained that, though general knowledge has increased, human capacity for learning has remained roughly constant for a very long time. “Learning a new language is still just as hard as it’s always been,” he offered as an example. If we can’t teach ourselves how to learn faster, he sees no reason to believe that machines will be any more successful at the task.

He also argued that even if we assume intelligence can be improved, there is no reason to assume it will increase exponentially and produce an intelligence explosion. It is just as possible, he suggested, that each generation of machines improves by only half as much as the one before: intelligence would still increase, but it would converge to a finite limit rather than explode.

Walsh does anticipate superintelligent systems, but he’s just not convinced they will be the kind that can lead to an intelligence explosion. In fact, as one of the primary authors of the Autonomous Weapons Open Letter, Walsh is certainly concerned about aspects of advanced AI, and he ended his talk with concerns about both weapons and job loss.

Both during and after his talk, members of the audience vocally disagreed, providing various arguments about why an intelligence explosion could be likely. Max Tegmark drew laughter from the crowd when he pointed out that while Walsh was arguing that a singularity might not happen, the audience was arguing that it might happen, and these “are two perfectly consistent viewpoints.”

Tegmark added, “As long as one is not sure if it will happen or it won’t, it’s wise to simply do research and plan ahead and try to make sure that things go well.”

As Victoria Krakovna has also explained in a previous post, there are other risks associated with AI that can occur without an intelligence explosion.

The afternoon talks were all dedicated to technical research by current FLI grant winners, including Vincent Conitzer, Fuxin Li, Francesca Rossi, Bas Steunebrink, Manuela Veloso, Brian Ziebart, Jacob Steinhardt, Nate Soares, Paul Christiano, Stefano Ermon, and Benjamin Rubinstein. Topics ranged from ensuring value alignment between humans and AI to safety constraints and security evaluation, and much more.

While much of the research presented will apply to future AI designs and applications, Li and Rubinstein presented examples of research related to image recognition software that could potentially be used more immediately.

Li explained the risks associated with visual recognition software, including how someone could intentionally modify an image in a way imperceptible to humans so that the system misidentifies it. Current methods rely on machines referencing huge quantities of images to learn what any given image depicts, yet even the smallest perturbation of the input can lead to large errors. Li’s own research looks at alternative ways for machines to recognize an image that limit these errors.
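The basic perturbation effect can be demonstrated with a toy linear classifier. This sketch is not the research Li or Rubinstein presented; the model, seed, and numbers are invented for illustration:

```python
import numpy as np

# Toy linear "classifier": score = w . x; a positive score means class A.
# Invented illustration of adversarial perturbation, not any AAAI research.
rng = np.random.default_rng(0)
w = rng.normal(size=100)       # fixed classifier weights
x = 0.01 * w                   # an input weakly aligned with w

score = float(np.dot(w, x))    # positive: classified as A

# Adversarial step: nudge every "pixel" slightly against the weights.
eps = 0.05
x_adv = x - eps * np.sign(w)   # each entry changes by at most 0.05

adv_score = float(np.dot(w, x_adv))  # negative: now classified as B
print(score > 0, adv_score < 0)      # True True
```

Because the perturbation is aligned with the weight vector, its tiny per-pixel changes add up across all 100 dimensions — the same high-dimensional effect that makes real image classifiers vulnerable.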

Rubinstein’s focus is geared more toward security. The research he presented at the workshop is similar to facial recognition but goes a step further, examining how small changes made to one face can lead systems to confuse the image with that of someone else.

Fuxin Li

Ben Rubinstein

Future of beneficial AI research panel: Francesca Rossi, Nate Soares, Tom Dietterich, Roman Yampolskiy, Stefano Ermon, Vincent Conitzer, and Benjamin Rubinstein.

The day ended with a panel discussion on the next steps for AI safety research that also drew much debate between panelists and the audience. The panel included AAAI president, Tom Dietterich, as well as Rossi, Soares, Conitzer, Ermon, Rubinstein, and Roman Yampolskiy, who also spoke earlier in the day.

Among the prevailing themes were concerns about ensuring that AI is used ethically by its designers, as well as ensuring that a good AI can’t be hacked to do something bad. There were suggestions to build on the idea that AI can help a human be a better person, but again, concerns about abuse arose. For example, an AI could be designed to help voters determine which candidate would best serve their needs, but then how can we ensure that the AI isn’t secretly designed to promote a specific candidate?

Judy Goldsmith, sitting in the audience, encouraged the panel to consider whether or not an AI should be able to feel pain, which led to extensive discussion about the pros and cons of creating an entity that can suffer, as well as questions about whether such a thing could be created.

Francesca Rossi and Nate Soares

Tom Dietterich and Roman Yampolskiy

After an hour of discussion, many new research ideas had come up, giving researchers plenty of fodder for the next round of beneficial-AI grants.

We’d also like to congratulate Stuart Russell and Peter Norvig who were awarded the 2016 AAAI/EAAI Outstanding Educator Award for their seminal text “Artificial Intelligence: A Modern Approach.” As was mentioned during the ceremony, their work “inspired a new generation of scientists and engineers throughout the world.”

Congratulations to Peter Norvig and Stuart Russell!

2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part of what we’ve accomplished in the last 12 months. It’s been a big year for us…

 

In the beginning

Participants and attendees of the inaugural Puerto Rico conference.

2015 began with a bang, as we kicked off the New Year with our Puerto Rico conference, “The Future of AI: Opportunities and Challenges,” which was held January 2-5. We brought together about 80 top AI researchers, industry leaders and experts in economics, law and ethics to discuss the future of AI. The goal, which was successfully achieved, was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls. Before the conference, relatively few AI researchers were thinking about AI safety, but by the end of the conference, essentially everyone had signed the open letter, which argued for timely research to make AI more robust and beneficial. That open letter was ultimately signed by thousands of top minds in science, academia and industry, including Elon Musk, Stephen Hawking, and Steve Wozniak, and a veritable Who’s Who of AI researchers. This letter endorsed a detailed Research Priorities Document that emerged as the key product of the conference.

At the end of the conference, Musk announced a donation of $10 million to FLI for the creation of an AI safety research grants program to carry out this prioritized research for beneficial AI. We received nearly 300 research grant applications from researchers around the world, and on July 1, we announced the 37 AI safety research teams who would be awarded a total of $7 million for this first round of research. The research is funded by Musk, as well as the Open Philanthropy Project.

 

Forging ahead

On April 4, we held a large brainstorming meeting on biotech risk mitigation that included George Church and several other experts from the Harvard Wyss Institute and the Cambridge Working Group. We concluded that there are interesting opportunities for FLI to contribute in this area, and we endorsed the CWG statement on the Creation of Potential Pandemic Pathogens.

On June 29, we organized a SciFoo workshop at Google, which Meia Chita-Tegmark wrote about for the Huffington Post. We held a media outreach dinner event that evening in San Francisco with Stuart Russell, Murray Shanahan, Ilya Sutskever and Jaan Tallinn as speakers.

All five FLI founders flanked by other beneficial-AI enthusiasts. From left to right, top to bottom: Stuart Russell, Jaan Tallinn, Janos Kramar, Anthony Aguirre, Max Tegmark, Nick Bostrom, Murray Shanahan, Jesse Galef, Michael Vassar, Nate Soares, Victoria Krakovna, Meia Chita-Tegmark and Katja Grace

Less than a month later, we published another open letter, this time advocating for a global ban on offensive autonomous weapons development. Stuart Russell and Toby Walsh presented the autonomous weapons open letter at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, while Richard Mallah garnered more support and signatories while engaging AGI researchers at the Conference on Artificial General Intelligence in Berlin. The letter has been signed by over 3,000 AI and robotics researchers, including leaders such as Demis Hassabis (DeepMind), Yann LeCun (Facebook), Eric Horvitz (Microsoft), Peter Norvig (Google), Oren Etzioni (Allen Institute), six past presidents of the AAAI, and over 17,000 other scientists and concerned individuals, including Stephen Hawking, Elon Musk, and Steve Wozniak.

This was followed by an open letter about the economic impacts of AI, which was spearheaded by Erik Brynjolfsson, a member of our Scientific Advisory Board. Inspired by our Puerto Rico AI conference and the resulting open letter, a team of economists and business leaders launched their own open letter about AI’s future impact on the economy. It includes specific policy suggestions to ensure positive economic impact.

By October 2015, we wanted to try to bring more public attention to not only artificial intelligence, but also other issues that could pose an existential risk, including biotechnology, nuclear weapons, and climate change. We launched a new incarnation of our website, which now focuses on relevant news and the latest research in all of these fields. The goal is to draw more public attention to both the risks and the opportunities that technology provides.

Besides these major projects and events, we also organized, helped with, and participated in numerous other events and discussions.

 

Other major events

Richard Mallah, Max Tegmark, Francesca Rossi and Stuart Russell went to the Association for the Advancement of Artificial Intelligence conference in January, where they encouraged researchers to consider safety issues. Stuart spoke to about 500 people about the long-term future of AI. Max spoke at the first annual International Workshop on AI, Ethics, and Society, organized by Toby Walsh, as well as at a funding workshop, where he presented the FLI grants program.

Max spoke again at the start of March, this time for the Helen Caldicott Nuclear Weapons Conference, about reducing the risk of accidental nuclear war and how this relates to automation and AI. At the end of the month, he gave a talk at Harvard Effective Altruism entitled “The Future of Life with AI and other Powerful Technologies.” This year, Max also gave talks about the Future of Life Institute at a Harvard-Smithsonian Center for Astrophysics colloquium, at MIT Effective Altruism, at the MIT “Dissolve Conference” (with Prof. Jonathan King), at a movie screening of “Dr. Strangelove,” and at a meeting in Cambridge about reducing the risk of nuclear war.

In June, Richard presented at Boston University’s Science and the Humanities Confront the Anthropocene conference about the risks associated with emerging technologies. That same month, Stuart Russell and MIRI Executive Director, Nate Soares, participated in a panel discussion about the risks and policy implications of AI (video here).

Concerns about autonomous weapons led to an open letter calling for a ban.

Richard then led the FLI booth at the International Conference on Machine Learning in July, where he engaged with hundreds of researchers about AI safety and beneficence. He also spoke at the SmartData conference in August about the relationship between ontology alignment and value alignment, and he participated in the DARPA Wait, What? conference in September.

Victoria Krakovna and Anthony Aguirre both spoke at the Effective Altruism Global conference at Google headquarters in July, where Elon Musk, Stuart Russell, Nate Soares and Nick Bostrom also participated in a panel discussion. A month later, Jaan Tallinn spoke at the EA Global Oxford conference. Victoria and Anthony also organized a brainstorming dinner on biotech, which was attended by many of the Bay area’s synthetic biology experts, and Victoria put together two Machine Learning Safety meetings in the Bay Area. The latter were dinner meetings, which aimed to bring researchers and FLI grant awardees together to help strengthen connections and discuss promising research directions. One of the dinners included a Q&A with Stuart Russell.

September saw FLI and CSER co-organize an event at the Policy Exchange in London where Huw Price, Stuart Russell, Nick Bostrom, Michael Osborne and Murray Shanahan discussed AI safety with the scientifically minded in Westminster, including many British members of parliament.

Only a month later, Max Tegmark and Nick Bostrom were invited to speak at a United Nations event about AI safety, and our Scientific Advisory Board member, Stephen Hawking released his answers to the Reddit “Ask Me Anything” (AMA) about artificial intelligence.

Toward the end of the year, we began to focus more effort on nuclear weapons issues. We’ve partnered with the Don’t Bank on the Bomb campaign, and we’re pleased to support financial research to determine which companies and institutions invest in and profit from the production of new nuclear weapons systems. The goal is to draw attention to and stigmatize such production, which arguably increases the risk of accidental nuclear war without notably improving today’s nuclear deterrence. In November, Lucas Perry presented some of our research at the Massachusetts Peace Action conference.

Anthony launched a new site, Metaculus.com. The Metaculus project, which is something of an offshoot of FLI, is a new platform for soliciting and aggregating predictions about technological breakthroughs, scientific discoveries, world happenings, and other events.  The aim of this project is to build an all-purpose, crowd-powered forecasting engine that can help organizations (like FLI) or individuals better understand the trajectory of future events and technological progress. This will allow for more quantitatively informed predictions and decisions about how to optimize the future for the better.
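One standard way a platform can combine many individual forecasts into a single crowd prediction is to average them in log-odds space rather than averaging raw probabilities. This is a generic sketch of that idea; the numbers are invented, and Metaculus's actual aggregation algorithm may well differ:

```python
import math

# Pool yes/no probability forecasts via mean log-odds
# (equivalently, the geometric mean of the odds).
# Illustrative only; not Metaculus's actual algorithm.
def aggregate(probabilities):
    logits = [math.log(p / (1 - p)) for p in probabilities]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

# Three hypothetical forecasters answering the same question:
consensus = aggregate([0.6, 0.7, 0.9])
print(round(consensus, 2))  # 0.76
```

Pooling in log-odds space gives confident forecasters (probabilities near 0 or 1) proportionally more pull than a plain arithmetic mean would, which is one reason it is a common baseline in forecast aggregation.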

 

Richard Mallah speaking at the third panel discussion of the NIPS symposium.

In December, Max participated in a panel discussion at the Nobel Week Dialogue about The Future of Intelligence and moderated two related panels. Richard, Victoria, and Ariel Conn helped organize the Neural Information Processing Systems symposium, “Algorithms Among Us: The Societal Impacts of Machine Learning,” where Richard participated in the panel discussion on long-term research priorities. To date, we’ve posted two articles with takeaways from the symposium and NIPS as a whole. Just a couple days later, Victoria rounded out the active year with her attendance at the Machine Learning and the Market for Intelligence conference in Toronto, and Richard presented to the IEEE Standards Association.

 

In the Press

We’re excited about all we’ve achieved this year, and we feel honored to have received so much press about our work. For example:

The beneficial AI open letter has been featured by media outlets around the world, including WIRED, Financial Times, Popular Science, CIO, BBC, CNBC, The Independent, The Verge, ZDNet, CNET, The Telegraph, World News Views, The Economic Times, Industry Week, and Live Science.

You can find more media coverage of Elon Musk’s donation at Fast Company, TechCrunch, WIRED, Mashable, SlashGear, and BostInno.

Max, along with our Scientific Advisory Board member Stuart Russell and Eric Horvitz from Microsoft, was interviewed on NPR’s Science Friday about AI safety.

Max was later interviewed on NPR’s On Point Radio, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, for a lively discussion about the AI safety research program.

Stuart Russell was interviewed about the autonomous weapons open letter on NPR’s All Things Considered (audio) and Al Jazeera America News (video), and Max was also interviewed about the autonomous weapons open letter on FOX Business News and CNN International.

Throughout the year, Victoria was interviewed by Popular Science, Engineering and Technology Magazine, Boston Magazine and Blog Talk Radio.

Meia Chita-Tegmark wrote five articles for the Huffington Post about artificial intelligence, including a Halloween story of nuclear weapons and highlights of the Nobel Week Dialogue, and Ariel wrote two about artificial intelligence.

In addition, we had a few extra special articles on our new website:

Nobel-prize-winning physicist Frank Wilczek shared a sci-fi short story he wrote about a future of AI wars. FLI volunteer Eric Gastfriend wrote a popular piece in which he considers the impact of the exponential increase in the number of scientists. Richard wrote a widely read article laying out the most important AI breakthroughs of the year. We launched the FLI Audio Files with a podcast about the Paris Climate Agreement. And Max wrote an article comparing Russia’s warning of a cobalt bomb to Dr. Strangelove.

On the last day of the year, the New Yorker published an article listing the top 10 tech quotes of 2015, and a quote from our autonomous weapons open letter came in at number one.

 

A New Beginning

2015 has now come to an end, but we believe this is really just the beginning. 2016 has the potential to be an even bigger year, bringing new and exciting challenges and opportunities. The FLI slogan says, “Technology is giving life the potential to flourish like never before…or to self-destruct.” We look forward to another year of doing all we can to help humanity flourish!

Happy New Year!

What’s so exciting about AI? Conversations at the Nobel Week Dialogue

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence.” The conference gathered some of the leading researchers and innovators in artificial intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the world?

Although challenges in developing AI and concerns about human-computer interaction were both expressed, in the celebratory spirit of the Nobel Prize, let’s focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, Executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter-ego, with which humans could comfortably share data and preferences, and which would intelligently use this to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence, where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego, resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, and through it enhancing people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a fragmented healthcare system in which experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information, by (intelligently) deciding what information to share with whom and when for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind.”