Entries by Ariel Conn

Can AI Remain Safe as Companies Race to Develop It?

Race Avoidance Principle: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. Artificial intelligence could bestow incredible benefits on society, from faster, more accurate medical diagnoses to more sustainable management of energy resources, and much more. But in today’s economy, the first to achieve a technological breakthrough are the winners, […]

Podcast: The Art of Predicting with Anthony Aguirre and Andrew Critch

How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, […]

Transcript: The Art of Predicting

[beginning of recorded material] Ariel: I’m Ariel Conn with the Future of Life Institute. Much of the time, when we hear about attempts to predict the future, it conjures images of fortune tellers and charlatans, but, in fact, we can fairly accurately predict that, not only will the sun come up tomorrow, but also at […]

Safe Artificial Intelligence May Start with Collaboration

Research Culture Principle: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI. Competition and secrecy are just part of doing business. Even in academia, researchers often keep ideas and impending discoveries to themselves until grants or publications are finalized. But sometimes even competing companies and research labs work […]

Joshua Greene Interview

The following is an interview with Joshua Greene about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Greene is an experimental psychologist, neuroscientist, and philosopher. He studies moral judgment and decision-making, primarily using behavioral experiments and functional neuroimaging (fMRI). Other interests include religion, cooperation, and the capacity for complex thought. He is the author of […]

Susan Craw Interview

The following is an interview with Susan Craw about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Craw is a Research Professor at Robert Gordon University in Aberdeen, Scotland. Her research in artificial intelligence develops innovative data/text/web mining technologies to discover knowledge to embed in case-based reasoning systems, recommender systems, and other intelligent information systems. […]

Support Grows for UN Nuclear Weapons Ban

“Do you want to be defended by the mass murder of people in other countries?” According to Princeton physicist Zia Mian, nuclear weapons are “fundamentally anti-democratic” precisely because citizens are never asked this question. Mian argues that “if you ask people this question, almost everybody would say, ‘No, I do not want you to incinerate […]

The U.S. Worldwide Threat Assessment Includes Warnings of Cyber Attacks, Nuclear Weapons, Climate Change, etc.

Last Thursday – just one day before the WannaCry ransomware attack would shut down 16 hospitals in the UK and ultimately hit hundreds of thousands of organizations and individuals in over 150 countries – the Director of National Intelligence, Daniel Coats, released the Worldwide Threat Assessment of the US Intelligence Community. Large-scale cyber attacks are […]

GP-write and the Future of Biology

Imagine going to the airport, but instead of walking through – or waiting in – long and tedious security lines, you could walk through a hallway that looks like a terrarium. No lines or waiting. Just a lush, indoor garden. But these plants aren’t something you can find in your neighbor’s yard – their genes […]

Forget the Cold War – Experts say Nuclear Weapons Are a Bigger Risk Today

Until recently, many Americans believed that nuclear weapons no longer represent the threat they did during the Cold War. However, recent events and aggressive posturing among nuclear nations, especially the U.S., Russia, and North Korea, have increased public awareness and concern. These fears were addressed at a recent MIT conference on nuclear weapons. “The possibility of a […]

Podcast: Climate Change with Brian Toon and Kevin Trenberth

Too often, the media focus their attention on climate-change deniers, and as a result, when scientists speak with the press, it’s almost always a discussion of whether climate change is real. Unfortunately, that can make it harder for those who recognize that climate change is a legitimate threat to fully understand the science and impacts […]

John C. Havens Interview

The following is an interview with John C. Havens about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Havens is the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and Hacking H(app)iness – Why […]

Susan Schneider Interview

The following is an interview with Susan Schneider about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Schneider is a philosopher and cognitive scientist at the University of Connecticut, YHouse (NY), and the Institute for Advanced Study in Princeton, NJ. Q. Explain what you think of the following principles: 4) Research Culture: A culture of […]

Patrick Lin Interview

The following is an interview with Patrick Lin about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is an associate philosophy professor. He regularly gives invited briefings to industry, media, and […]

Podcast: Law and Ethics of Artificial Intelligence

The rise of artificial intelligence presents not only technical challenges, but also important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose work focuses on the intersection between law and artificial […]