Artificial Intelligence (AI) will one day become the most powerful technology in human history, proving either the best or the worst thing ever to happen to humanity. Polls show that most AI researchers expect artificial general intelligence (AGI), able to perform all cognitive tasks at a superhuman level, within decades. AGI could dramatically accelerate science and technology, curing diseases, eliminating poverty, and helping life spread throughout the cosmos. But it could also enable unprecedented inequality, surveillance, concentration of power, and accidents. How can we reap the benefits of AI without being replaced on the job market, and perhaps altogether?


Artificial Intelligence News

Making AI Safe in an Unpredictable World: An Interview with Thomas G. Dietterich

Our AI systems work remarkably well in closed worlds. That’s…

European Parliament Passes Resolution Supporting a Ban on Killer Robots

The…

The Risks Posed By Lethal Autonomous Weapons

The following article was originally posted on Metro. Killer…

State of California Endorses Asilomar AI Principles

On…

Governing AI: An Inside Look at the Quest to Ensure AI Benefits Humanity

Finance,…

Podcast: Six Experts Explain the Killer Robots Debate

Why are so many AI researchers so worried about lethal autonomous…

Machine Reasoning and the Rise of Artificial General Intelligences: An Interview With Bart Selman

From Uber’s advanced computer vision system to Netflix’s…

$2 Million Donated to Keep Artificial General Intelligence Beneficial and Robust

$2 million has been allocated to fund research that anticipates…

A Summary of Concrete Problems in AI Safety

By Shagun Sodhani. It’s…

How Will the Rise of Artificial Superintelligences Impact Humanity?

Cars drive themselves down our streets. Planes fly themselves…

AI Alignment Podcast: Astronomical Future Suffering and Superintelligence with Kaj Sotala

In a classic taxonomy of risks developed by Nick Bostrom…

AI Safety: Measuring and Avoiding Side Effects Using Relative Reachability

This article was originally published on the Deep Safety blog. A…

ICRAC Open Letter Opposes Google’s Involvement With Military

From improving medicine to better search engines to assistants…

Autonomous killer drones: what if?

Lethal Autonomous Weapons: An Update from the United Nations

Earlier this month, the United Nations Convention on Conventional…