Research for Beneficial Artificial Intelligence

Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. It’s no coincidence that the first Asilomar Principle is about research. On the face of it, the Research Goal Principle may not seem as glamorous or exciting as some

When Should Machines Make Decisions?

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives. When is it okay to let a machine make a decision instead of a person? Most of us allow Google Maps to choose the best route to a

Help Support FLI This Giving Tuesday

We’ve accomplished a lot. FLI has only been around for a few years, but during that time, we’ve helped mainstream AI safety research, funded 37 AI safety research grants, launched multiple open letters that have brought scientists and the public together for the common cause of a beneficial future, and drafted the 23 Asilomar Principles, which

Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem

The following policy memo was written and posted by the Stanley Foundation. How might a nuclear crisis play out in today’s media environment? What dynamics in this information ecosystem—with social media increasing the velocity and reach of information, disrupting journalistic models, creating potent vectors for disinformation, and changing how political leaders

ICAN Wins Nobel Peace Prize

We at FLI offer our excited congratulations to the International Campaign to Abolish Nuclear Weapons (ICAN), this year’s winner of the Nobel Peace Prize. We could not be more honored to have had the opportunity to work with ICAN during their campaign to ban nuclear weapons. Over 70 years have passed since the bombs were

Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group, 80,000 Hours, tries to answer. To learn

The Future of Humanity Institute Releases Three Papers on Biorisks

Earlier this month, the Future of Humanity Institute (FHI) released three new papers that assess global catastrophic and existential biosecurity risks and offer a cost-benefit analysis of various approaches to dealing with these risks. The work – done by Piers Millett, Andrew Snyder-Beattie, Sebastian Farquhar,

Countries Sign UN Treaty to Outlaw Nuclear Weapons

Update 9/25/17: 53 countries have now signed and 3 have ratified. Today, 50 countries took an important step toward a nuclear-free world by signing the United Nations Treaty on the Prohibition of Nuclear Weapons. This is the first treaty to legally ban nuclear weapons, just as we’ve seen done previously with chemical and biological weapons.

Stanislav Petrov, the Man Who Saved the World, Has Died

A Soviet early warning satellite showed that the United States had launched five land-based missiles at the Soviet Union. The alert came at a time of high tension between the two countries, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. In addition, earlier in the

An Open Letter to the United Nations Convention on Certain Conventional Weapons (Chinese)

As companies building technologies in artificial intelligence and robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the UN Convention on Certain Conventional Weapons’ establishment of a Group of Governmental Experts (GGE) on lethal autonomous weapon systems, and many of our researchers and engineers are eager to offer advice to your deliberations. We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the group, and we entreat the High Contracting Parties taking part in the GGE to work hard at finding a means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies. We deeply regret that the GGE’s first meeting, which was due to start today, was cancelled because a small number of states had not paid their contributions, and we urge the High Contracting Parties to redouble their efforts at the first meeting now planned for November. Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they could permit armed conflict to be fought at a scale greater than ever before, and at timescales faster than humans can comprehend. They could become technologies of terror, weapons that despots and terrorists use against innocent populations, or weapons hacked to behave in undesirable ways. We do not have long to act: once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect all of us from these dangers. List of open letter signatories (sorted by country): Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia. Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia. Charles Gretton, founder of Hivery, Australia. Brad Lorge, founder & CEO of Premonition.io, Australia

Leaders of Top Robotics and AI Companies Call for Ban on Killer Robots

Founders of AI/robotics companies, including Elon Musk (Tesla, SpaceX, OpenAI) and Demis Hassabis and Mustafa Suleyman (Google’s DeepMind), call for an autonomous weapons ban as the UN delays negotiations. Leaders from AI and robotics companies around the world have released an open letter calling on the United Nations