Entries by Ariel Conn

John C. Havens Interview

The following is an interview with John C. Havens about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Havens is the Executive Director of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is the author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines and Hacking H(app)iness – Why […]

Susan Schneider Interview

The following is an interview with Susan Schneider about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Schneider is a philosopher and cognitive scientist at the University of Connecticut, YHouse (NY) and the Institute for Advanced Study in Princeton, NJ. Q. Explain what you think of the following principles: 4) Research Culture: A culture of […]

Patrick Lin Interview

The following is an interview with Patrick Lin about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Lin is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is an associate philosophy professor. He regularly gives invited briefings to industry, media, and […]

Podcast: Law and Ethics of Artificial Intelligence

The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial […]

Survivors Speak Out As UN Negotiates Nuke Ban

To imagine innocence is to picture children playing, which is why most people and governments are horrified by the idea of children and other helpless civilians suffering and dying, even during war. Finding a way to prevent the unnecessary slaughter of innocents has brought over 115 countries to the United Nations in New York this week […]

Can We Properly Prepare for the Risks of Superintelligent AI?

Risks Principle: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact. We don’t know what the future of artificial intelligence will look like. Though some may make educated guesses, the future is unclear. AI could keep developing like all other technologies, […]

Artificial Intelligence and Income Inequality

Shared Prosperity Principle: The economic prosperity created by AI should be shared broadly, to benefit all of humanity. Income inequality is a well-recognized problem. The gap between the rich and poor has grown over the last few decades, and it has become increasingly pronounced since the 2008 financial crisis. While economists debate the extent to […]

Is an AI Arms Race Inevitable?

AI Arms Race Principle: An arms race in lethal autonomous weapons should be avoided.* Perhaps the scariest aspect of the Cold War was the nuclear arms race. At its peak, the US and the Soviet Union together held over 70,000 nuclear weapons, a fraction of which would have been enough to kill every person on Earth. As the race to […]

Podcast: UN Nuclear Weapons Ban with Beatrice Fihn and Susi Snyder

Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. Previous nuclear treaties have included the Test Ban Treaty and the Non-Proliferation Treaty. But in the 70-plus years of the United Nations, member countries have yet to agree on a treaty to completely ban nuclear […]

Transcript: UN Nuclear Weapons Ban with Beatrice Fihn and Susi Snyder

ARIEL: I’m Ariel Conn with the Future of Life Institute. Last October, the United Nations passed a historic resolution to begin negotiations on a treaty to ban nuclear weapons. Previous nuclear treaties have included the Test Ban Treaty and the Non-Proliferation Treaty. But in the 70-plus years of the United Nations, the countries have […]

Preparing for the Biggest Change in Human History

Importance Principle: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. In the history of human progress, a few events have stood out as especially revolutionary: the intentional use of fire, the invention of agriculture, the Industrial Revolution, […]

Bart Selman Interview

The following is an interview with Bart Selman about the Beneficial AI 2017 conference and The Asilomar Principles that it produced. Selman is a Professor of Computer Science at Cornell University, a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and a Fellow of the American Association for the Advancement of Science (AAAS). Q: From […]

How Smart Can AI Get?

Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities. A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What […]

MIRI February 2017 Newsletter

Following up on a post outlining some of the reasons MIRI researchers and OpenAI researcher Paul Christiano are pursuing different research directions, Jessica Taylor has written up the key motivations for MIRI’s highly reliable agent design research. Research updates: a new paper, “Toward Negotiable Reinforcement Learning: Shifting Priorities in Pareto Optimal Sequential Decision-Making”; New at […]

Can We Ensure Privacy in the Era of Big Data?

Personal Privacy Principle: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data. A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and […]

Podcast: Top AI Breakthroughs, with Ian Goodfellow and Richard Mallah

2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the director of AI projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist […]

Transcript: AI Breakthroughs with Ian Goodfellow and Richard Mallah

[beginning of recorded material] Ariel: I’m Ariel Conn with the Future of Life Institute. If you’ve been following FLI at all, you know that we’re both very concerned but also very excited about the potential impact artificial intelligence will have on our future. Naturally, we’ve been following the big breakthroughs that occur in the field […]