Entries by Ariel Conn

The U.S. Worldwide Threat Assessment Includes Warnings of Cyber Attacks, Nuclear Weapons, Climate Change, etc.

Last Thursday – just one day before the WannaCry ransomware attack would shut down 16 hospitals in the UK and ultimately hit hundreds of thousands of organizations and individuals in over 150 countries – the Director of National Intelligence, Daniel Coats, released the Worldwide Threat Assessment of the US Intelligence Community. Large-scale cyber attacks are […]

GP-write and the Future of Biology

Imagine going to the airport, but instead of walking through – or waiting in – long and tedious security lines, you could walk through a hallway that looks like a terrarium. No lines or waiting. Just a lush, indoor garden. But these plants aren’t something you can find in your neighbor’s yard – their genes […]

Forget the Cold War – Experts say Nuclear Weapons Are a Bigger Risk Today

Until recently, many Americans believed that nuclear weapons don’t represent the same threat as during the Cold War. However, recent events and aggressive posturing among nuclear nations, especially the U.S., Russia, and North Korea, have increased public awareness and concern. These fears were addressed at a recent MIT conference on nuclear weapons. “The possibility of a […]

Podcast: Climate Change with Brian Toon and Kevin Trenberth

Too often, the media focus their attention on climate-change deniers, and as a result, when scientists speak with the press, it’s almost always a discussion of whether climate change is real. Unfortunately, that can make it harder for those who recognize that climate change is a legitimate threat to fully understand the science and impacts […]

John C. Havens Interview

The following is an interview with John C. Havens about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Havens is the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and Hacking H(app)iness – Why […]

Susan Schneider Interview

The following is an interview with Susan Schneider about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Schneider is a philosopher and cognitive scientist at the University of Connecticut, YHouse (NY) and the Institute for Advanced Study in Princeton, NJ. Q. Explain what you think of the following principles: 4) Research Culture: A culture of […]

Patrick Lin Interview

The following is an interview with Patrick Lin about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is an associate philosophy professor. He regularly gives invited briefings to industry, media, and […]

Podcast: Law and Ethics of Artificial Intelligence

The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial […]

Survivors Speak Out As UN Negotiates Nuke Ban

To imagine innocence is to picture children playing. Unsurprisingly, most people and governments are horrified by the idea of children and other helpless civilians suffering and dying, even during war. Finding a way to prevent the unnecessary slaughter of innocents has brought over 115 countries to the United Nations in New York this week […]

Can We Properly Prepare for the Risks of Superintelligent AI?

Risks Principle: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. We don’t know what the future of artificial intelligence will look like. Though some may make educated guesses, the future is unclear. AI could keep developing like all other technologies, […]

Artificial Intelligence and Income Inequality

Shared Prosperity Principle: The economic prosperity created by AI should be shared broadly, to benefit all of humanity. Income inequality is a well-recognized problem. The gap between the rich and poor has grown over the last few decades, but it became increasingly pronounced after the […]

Is an AI Arms Race Inevitable?

AI Arms Race Principle: An arms race in lethal autonomous weapons should be avoided.* Perhaps the scariest aspect of the Cold War was the nuclear arms race. At its peak, the US and Russia held over 70,000 nuclear weapons, only a fraction of which could […]

Transcript: UN Nuclear Weapons Ban with Beatrice Fihn and Susi Snyder

ARIEL: I’m Ariel Conn with the Future of Life Institute. Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. Previous nuclear treaties have included the Test Ban Treaty and the Non-Proliferation Treaty. But in the 70-plus years of the United Nations, the countries have […]

Preparing for the Biggest Change in Human History

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. In the history of human progress, a few events have stood out as especially revolutionary: the intentional […]

Bart Selman Interview

The following is an interview with Bart Selman about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Selman is a Professor of Computer Science at Cornell University, a Fellow of the American Association for Artificial Intelligence (AAAI) and a Fellow of the American Association for the Advancement of Science (AAAS). Q: From […]