Entries by Ariel Conn

Research for Beneficial Artificial Intelligence

Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence. It’s no coincidence that the first Asilomar Principle is about research. On the face of it, the Research Goal Principle may not seem as glamorous or exciting as some of the other Principles that more directly address how […]

Podcast: Beneficial AI and Existential Hope in 2018

For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses […]

When Should Machines Make Decisions?

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives. When is it okay to let a machine make a decision instead of a person? Most of us allow Google Maps to choose the best route to a new location. Many of us are excited to let […]

Help Support FLI This Giving Tuesday

We’ve accomplished a lot. FLI has only been around for a few years, but during that time, we’ve: Helped mainstream AI safety research, Funded 37 AI safety research grants, Launched multiple open letters that have brought scientists and the public together for the common cause of a beneficial future, Drafted the 23 Asilomar Principles which […]

Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem

The following policy memo was written and posted by the Stanley Foundation. Download the PDF (252K) How might a nuclear crisis play out in today’s media environment? What dynamics in this information ecosystem—with social media increasing the velocity and reach of information, disrupting journalistic models, creating potent vectors for disinformation, and changing how political leaders […]

ICAN Wins Nobel Peace Prize

We at FLI offer enthusiastic congratulations to the International Campaign to Abolish Nuclear Weapons (ICAN), this year’s winner of the Nobel Peace Prize. We could not be more honored to have had the opportunity to work with ICAN during their campaign to ban nuclear weapons. Over 70 years have passed since the bombs were […]

Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group, 80,000 Hours, tries to answer. To learn […]

The Future of Humanity Institute Releases Three Papers on Biorisks

Earlier this month, the Future of Humanity Institute (FHI) released three new papers that assess global catastrophic and existential biosecurity risks and offer a cost-benefit analysis of various approaches to dealing with these risks. The work – done by Piers Millett, Andrew Snyder-Beattie, Sebastian Farquhar, and Owen Cotton-Barratt – looks at what the greatest risks […]

Countries Sign UN Treaty to Outlaw Nuclear Weapons

Update 9/25/17: 53 countries have now signed and 3 have ratified. Today, 50 countries took an important step toward a nuclear-free world by signing the United Nations Treaty on the Prohibition of Nuclear Weapons. This is the first treaty to legally ban nuclear weapons, just as we’ve seen done previously with chemical and biological weapons. […]

Stanislav Petrov, the Man Who Saved the World, Has Died

A Soviet early warning satellite showed that the United States had launched five land-based missiles at the Soviet Union. The alert came at a time of high tension between the two countries, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. In addition, earlier in the […]

Killer Robots: The World’s Top AI and Robotics Companies Urge the United Nations to Ban Lethal Autonomous Weapons

一封公开信:人工智能和机器人界的企业领袖们在世界最大的人工智能大会上, 针对联合国机器人军备竞赛的讨论被推延到年底发表公开信。来自26个国家,116位机器人和人工智能公司的创始人签署了这封公开信,敦促联合国紧急磋商致命自动武器的危机,并在国际范围内禁止其使用。2016年12月,123个成员国已在联合国常规武器大会的总结会议上一致同意了开展对自动武器的正式讨论。其中19个国家已呼吁全面禁止。 这封信的主要组织者Toby Walsh教授在墨尔本的2017国际联合人工智能大会的开幕上发布了此信 。这个会议是全球顶级人工智能和机器人专家聚集的场所,Walsh教授是IJCAI2017 大会组委会成员之一。 这封公开信是AI和机器人公司第一次站在一起。之前仅有一家公司,加拿大的 Clearpath Robotics,正式呼吁对自动武器的禁止。 信中说:“致命自动武器有成为第三次武器革命的危险” ,“一旦开发成功,它们将使冲突的规模庞大到从未有过,而且会达到人类难以适应的速度” ,“它们将成为恐怖的武器,将成为暴君和恐怖分子残害无辜民众的武器,或者被黑客挟持的武器。我们的时间并不多。一旦潘多拉的盒子被打开,就很难被关上”。这封信紧急呼吁联合国磋商以寻求使所有人免于这种危险的武器。这封2017年公开信的签署者包括: Elon Musk, Tesla,SpaceX and OpenAI创始人 (美国) Mustafa Suleyman, 谷歌DeepMind 和应用AI部创始人及负责人(英国) Esben Østergaard, 环球机器人创始人和 CTO (丹麦) Jerome Monceaux, Aldebaran机器人创始人, Nao和Pepper机器人的制造者(法国) Jü​rgen Schmidhuber, 领先的深入学习专家和Nnaisense的创始人(瑞士) Yoshua Bengio, 领先的深入学习专家和Element AI的创始人(加拿大) 他们的公司雇用成千上万的研究人员,机器人和工程师,总价高达数十亿美元,覆盖全球从北到南,东到西:澳大利亚,加拿大,中国,捷克共和国,丹麦,爱沙尼亚,芬兰,法国,德国, 冰岛,印度,爱尔兰,意大利,日本,墨西哥,荷兰,挪威,波兰,俄罗斯,新加坡,南非,西班牙,瑞士,英国,阿拉伯联合酋长国和美国。 Walsh教授是2017年公开信的组织者之一。他也组织了2015年在布宜诺斯艾利斯的IJCAI大会上发表对自动武器的警告。 2015年的公开信由数千名在世界各地的大学和研究实验室工作的人工智能和机器人研究人员签署,并得到了英国物理学家史蒂芬霍金,苹果联合创始人史蒂夫·沃兹尼亚克和认知科学家诺姆·乔姆斯基等人的赞同。“几乎所有的技术都可用于善也可用于恶。人工智能也一样,它可以帮助解决当今社会面临的许多紧迫问题:不平等,贫困,气候变化和持续的全球金融危机。 然而,同样的技术也可以用于自动武器以致战争工业化。”, “我们今天要做出决定,选择我们想要的未来。 我强烈支持许多人道及其他组织呼吁联合国禁止此种武器,就像禁止化学武器和其他武器一样。“他补充说。“在两年前在同一次会议上,我们发布了一个公开信,得到了数千名AI和机器人领域的研究人员的签名,呼吁禁止自动武器。 这有助于把问题推上联合国议程,进行正式讨论。 我希望2017的公开信会由于更多AI和机器人工业界的支持,而增加联合国讨论的紧迫性,使这种讨论立刻开始。 公开信的第一签署人,Clearpath Robertics 的联合创始人Ryan Gariepy说“知名企业和个人签署公开信印证了我们的警告,这种威胁不是假想的,而是真实,紧迫, 需要立刻行动的”。“和很多处在科幻境界的AI技术不一样,自主武器正处于发展的尖端,而且有极大的潜力能无辜民众和全球的稳定会造成极大的伤害,”他补充说,“发展致命的自动武器系统是不明智的,也是不道德的,应该在国际范围内被禁止”。 Element AI的创始人, “深度学习”领军者,Yoshua […]