Entries by Ariel Conn

Three Tweets to Midnight: Nuclear Crisis Stability and the Information Ecosystem

The following policy memo was written and posted by the Stanley Foundation. Download the PDF (252K). How might a nuclear crisis play out in today’s media environment? What dynamics in this information ecosystem—with social media increasing the velocity and reach of information, disrupting journalistic models, creating potent vectors for disinformation, and changing how political leaders […]

ICAN Wins Nobel Peace Prize

We at FLI extend our enthusiastic congratulations to the International Campaign to Abolish Nuclear Weapons (ICAN), this year’s winner of the Nobel Peace Prize. We could not be more honored to have had the opportunity to work with ICAN during their campaign to ban nuclear weapons. Over 70 years have passed since the bombs were […]

Podcast: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer

If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer, or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group 80,000 Hours tries to answer. To learn […]

The Future of Humanity Institute Releases Three Papers on Biorisks

Earlier this month, the Future of Humanity Institute (FHI) released three new papers that assess global catastrophic and existential biosecurity risks and offer a cost-benefit analysis of various approaches to dealing with these risks. The work – done by Piers Millett, Andrew Snyder-Beattie, Sebastian Farquhar, and Owen Cotton-Barratt – looks at what the greatest risks […]

Countries Sign UN Treaty to Outlaw Nuclear Weapons

Update 9/25/17: 53 countries have now signed and 3 have ratified. Today, 50 countries took an important step toward a nuclear-free world by signing the United Nations Treaty on the Prohibition of Nuclear Weapons. This is the first treaty to legally ban nuclear weapons, following the earlier prohibitions of chemical and biological weapons. […]

Stanislav Petrov, the Man Who Saved the World, Has Died

A Soviet early warning satellite showed that the United States had launched five land-based missiles at the Soviet Union. The alert came at a time of high tension between the two countries, due in part to the U.S. military buildup in the early 1980s and President Ronald Reagan’s anti-Soviet rhetoric. In addition, earlier in the […]

Killer Robots: The World’s Top AI and Robotics Companies Urge the United Nations to Ban Lethal Autonomous Weapons

An open letter: at the world’s largest AI conference, business leaders from the AI and robotics industries released an open letter in response to the UN’s postponement, until the end of the year, of its discussions on a robotic arms race. The founders of 116 robotics and AI companies from 26 countries signed the letter, urging the UN to urgently negotiate on the dangers of lethal autonomous weapons and to ban their use internationally. In December 2016, 123 member states at the review conference of the UN Convention on Conventional Weapons unanimously agreed to begin formal discussions on autonomous weapons; 19 of those countries have already called for an outright ban.

Professor Toby Walsh, the letter’s lead organizer, released it at the opening of the 2017 International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne, a gathering of the world’s top AI and robotics experts; Walsh is a member of the IJCAI 2017 conference committee. The letter marks the first time AI and robotics companies have taken a joint stance. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on autonomous weapons.

The letter warns: “Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.” The letter urgently calls on the UN to negotiate a way to protect everyone from these dangerous weapons. Signatories of the 2017 letter include: Elon Musk, founder of Tesla, SpaceX, and OpenAI (USA); Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK); Esben Østergaard, founder and CTO of Universal Robots (Denmark); Jerome Monceaux, founder of Aldebaran Robotics, maker of the Nao and Pepper robots (France); Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland); Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada). Their companies employ tens of thousands of researchers, roboticists, and engineers, are worth billions of dollars, and span the globe from north to south and east to west: Australia, Canada, China, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Iceland, India, Ireland, Italy, Japan, Mexico, the Netherlands, Norway, Poland, Russia, Singapore, South Africa, Spain, Switzerland, the UK, the United Arab Emirates, and the USA.

Walsh, one of the organizers of the 2017 letter, also organized the 2015 open letter warning against autonomous weapons, released at IJCAI in Buenos Aires. The 2015 letter was signed by thousands of AI and robotics researchers working at universities and research labs around the world, and it was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak, and cognitive scientist Noam Chomsky, among others. “Nearly every technology can be used for good and bad, and artificial intelligence is no different. It can help tackle many of the pressing problems facing society today: inequality, poverty, climate change, and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialize war,” Walsh said. “We need to make a decision today about the future we want. I strongly support the call by many humanitarian and other organizations for the UN to ban such weapons, just as it has banned chemical and other weapons,” he added. “Two years ago at this same conference, we released an open letter signed by thousands of researchers working in AI and robotics calling for a ban on autonomous weapons. That letter helped push the issue onto the UN’s agenda for formal discussion. I am hopeful that this 2017 letter, with the added support of the AI and robotics industry, will add urgency to the UN discussions so that they begin immediately.”

Ryan Gariepy, co-founder of Clearpath Robotics and the letter’s first signatory, said: “The signing of this letter by prominent companies and individuals confirms our warning: this threat is not hypothetical, but real and pressing, and it demands immediate action.” He added: “Unlike many AI technologies that remain in the realm of science fiction, autonomous weapons are at the cutting edge of development right now, and they have enormous potential to cause great harm to innocent people and to global stability. The development of lethal autonomous weapons systems is unwise and unethical, and it should be banned internationally.” The founder of Element AI and deep learning pioneer Yoshua […]

Leaders of Top Robotics and AI Companies Call for Ban on Killer Robots

Founders of AI/robotics companies, including Elon Musk (Tesla, SpaceX, OpenAI) and Demis Hassabis and Mustafa Suleyman (Google’s DeepMind), call for an autonomous weapons ban as the UN delays negotiations. Leaders from AI and robotics companies around the world have released an open letter calling on the United Nations to ban autonomous weapons, often referred to as killer […]

Can AI Remain Safe as Companies Race to Develop It?

Race Avoidance Principle: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Artificial intelligence could bestow incredible benefits on society, from faster, more accurate medical diagnoses to more sustainable management of energy resources, and so much more. But in today’s economy, […]

Podcast: The Art of Predicting with Anthony Aguirre and Andrew Critch

How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, […]

Transcript: The Art of Predicting

[beginning of recorded material] Ariel: I’m Ariel Conn with the Future of Life Institute. Much of the time, when we hear about attempts to predict the future, it conjures images of fortune tellers and charlatans, but, in fact, we can fairly accurately predict that not only will the sun come up tomorrow, but also at […]

Safe Artificial Intelligence May Start with Collaboration

Research Culture Principle: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. Competition and secrecy are just part of doing business. Even in academia, researchers often keep ideas and impending discoveries to themselves until grants or publications are finalized. But sometimes even competing companies and research labs work […]

Joshua Greene Interview

The following is an interview with Joshua Greene about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Greene is an experimental psychologist, neuroscientist, and philosopher. He studies moral judgment and decision-making, primarily using behavioral experiments and functional neuroimaging (fMRI). Other interests include religion, cooperation, and the capacity for complex thought. He is the author of […]

Susan Craw Interview

The following is an interview with Susan Craw about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Craw is a Research Professor at Robert Gordon University Aberdeen in Scotland. Her research in artificial intelligence develops innovative data/text/web mining technologies to discover knowledge to embed in case-based reasoning systems, recommender systems, and other intelligent information systems. […]