Artificial Intelligence (AI) will one day become the most powerful technology in human history, and it will be either the best thing ever to happen to humanity or the worst. Polls show that most AI researchers expect artificial general intelligence (AGI), capable of performing all cognitive tasks at a superhuman level, within decades. AGI could dramatically accelerate science and technology, curing diseases, eliminating poverty, and helping life spread into the cosmos. But it could also enable unprecedented inequality, surveillance, concentration of power, and accidents. How can we reap the benefits of AI without being replaced on the job market, and perhaps altogether?
Artificial Intelligence News

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich
Our AI systems work remarkably well in closed worlds. That’s…

European Parliament Passes Resolution Supporting a Ban on Killer Robots

The Risks Posed By Lethal Autonomous Weapons
The following article was originally posted on Metro.

State of California Endorses Asilomar AI Principles

Podcast: Artificial Intelligence – Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins
Experts predict that artificial intelligence could become…

Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity

AI Alignment Podcast: The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce
What role does metaethics play in AI alignment and safety?…

Podcast: Six Experts Explain the Killer Robots Debate
Why are so many AI researchers so worried about lethal autonomous…

Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman
From Uber’s advanced computer vision system to Netflix’s…

$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust
$2 million has been allocated to fund research that anticipates…

AI Companies, Researchers, Engineers, Scientists, Entrepreneurs, and Others Sign Pledge Promising Not to Develop Lethal Autonomous Weapons
Leading AI companies and researchers take concrete action against…

AI Alignment Podcast: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy
What role does cyber security play in AI alignment and safety?…

Podcast: Mission AI – Giving a Global Voice to the AI Discussion with Charlie Oliver and Randi Williams
How are emerging technologies like artificial intelligence…

A Summary of Concrete Problems in AI Safety
By Shagun Sodhani

How Will the Rise of Artificial Superintelligences Impact Humanity?
Cars drive themselves down our streets. Planes fly themselves…

AI Alignment Podcast: Astronomical Future Suffering and Superintelligence with Kaj Sotala
In a classic taxonomy of risks developed by Nick Bostrom…

AI Safety: Measuring and Avoiding Side Effects Using Relative Reachability
This article was originally published on the Deep Safety blog.

Teaching Today’s AI Students To Be Tomorrow’s Ethical Leaders: An Interview With Yan Zhang
Some of the greatest scientists and inventors of the future are…

ICRAC Open Letter Opposes Google’s Involvement With Military
From improving medicine to better search engines to assistants…

Lethal Autonomous Weapons: An Update from the United Nations
Earlier this month, the United Nations Convention on Conventional…