Entries by Max Tegmark

Friendly AI: Aligning Goals

The following is an excerpt from my new book, Life 3.0: Being Human in the Age of Artificial Intelligence. You can join and follow the discussion at ageofai.org. The more intelligent and powerful machines get, the more important it becomes that their goals are aligned with ours. As long as we build only relatively dumb […]

Hawking Says ‘Don’t Bank on the Bomb’ and Cambridge Votes to Divest $1 Billion From Nuclear Weapons

1,000 nuclear weapons are more than enough to deter any nation from nuking the US, but we’re hoarding over 7,000, and a long string of near-misses has highlighted the continuing risk of an accidental nuclear war which could trigger a nuclear winter, potentially killing most people on Earth. Yet rather than trimming our excess nukes, we’re […]

Experiment in Annihilation

To celebrate the 88th birthday of its author today, we’re republishing the first-ever comprehensive non-classified paper on the hydrogen bomb and the problems with its early testing. It was translated into French by Jean-Paul Sartre and published in his journal “Les Temps Modernes”, and its opening lines were once read in the US Congress without attribution. […]

Think-tank dismisses leading AI researchers as luddites

By Stuart Russell and Max Tegmark. 2015 has seen major growth in funding, research and discussion of issues related to ensuring that future AI systems are safe and beneficial for humanity. In a surprisingly polemical report, ITIF think-tank president Robert Atkinson misinterprets this growing altruistic focus of AI researchers as innovation-stifling “Luddite-induced paranoia.” This contrasts with […]

Dr. Strangelove is back: say hi to the cobalt bomb!

I must confess that, as a physics professor, some of my nightmares are extra geeky. My worst one is the C-bomb, a hydrogen bomb surrounded by large amounts of cobalt. When I first heard about this doomsday device in Stanley Kubrick’s dark nuclear satire “Dr. Strangelove”, I wasn’t sure if it was physically possible. Now […]

About Environment

After transforming our environment to allow farming and burgeoning populations, how can we minimize negative impact on climate and biodiversity? Media American Institute of Physics: The Discovery of Global Warming: Hypertext history of global warming National Academy of Sciences – Science Museum: Earthlab: Interactive materials about climate change The Encyclopedia of Earth: Climate Change FAQs: […]

About Artificial Intelligence

Most benefits of civilization stem from intelligence, so how can we enhance these benefits with artificial intelligence without being replaced on the job market and perhaps altogether? Future computer technology can bring great benefits, and also new risks, as described in the resources below. Videos Stuart Russell – The Long-Term Future of (Artificial) Intelligence Humans […]

About Biotechnology

How can we live longer and healthier lives while avoiding risks such as engineered pandemics? Future biotechnology can bring great benefits, and also new risks, as described in the resources below. Videos Prof. Marc Lipsitch: Risks and Benefits of Gain-of-Function Experiments in Potentially Pandemic Pathogens Cathal Garvey: Bringing Biotechnology into the Home (TEDx Talk): In […]

AI safety at the United Nations

Nick Bostrom and I were invited to speak at the United Nations about how to avoid AI risk. I’d never been there before, and it was quite the adventure! Here’s the video – I start talking at 1:54:40 and Nick Bostrom at 2:14:30.

$11M AI safety research program launched

Elon-Musk-backed program signals growing interest in new branch of artificial intelligence research. A new international grants program jump-starts research to ensure AI remains beneficial. July 1, 2015. Amid rapid industry investment in developing smarter artificial intelligence, a new branch of research has begun to take off aimed at ensuring that society can reap the benefits […]

Elon Musk donates $10M to keep AI beneficial

Thursday January 15, 2015 We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity. There is now a broad consensus that AI research is progressing steadily, […]

Wallace defends AI weapons

Sam Wallace, a former US army officer, has an interesting piece criticizing our open letter’s suggestion as “unrealistic and dangerous”. I just wrote a response together with Stuart Russell and Toby Walsh. Although we disagree with Wallace’s arguments, we’re grateful that he published them so that we can get them discussed and analyzed out in the open. Please join […]

Hawking Reddit AMA on AI

Our Scientific Advisory Board member Stephen Hawking’s long-awaited Reddit AMA answers on Artificial Intelligence just came out, and were all over today’s world news, including MSNBC, Huffington Post, The Independent and Time. Read the Q&A below and visit the official Reddit page for the full discussion: Question 1: Professor Hawking- Whenever I teach AI, Machine Learning, or Intelligent Robotics, my class and […]

Open letter on AI weapons

At a press conference at the IJCAI AI meeting in Buenos Aires today, Stuart Russell and Toby Walsh announced an open letter on autonomous weapons that we’ve helped organize. We’re delighted that it’s been signed by over 1,700 AI/robotics researchers and over 11,000 others. Signatories include 14 current and past presidents of AI/robotics organizations (AAAI, IEEE-RAS, […]

AI safety research on NPR

I just had the pleasure of discussing our new AI safety research program on National Public Radio. I was fortunate to be joined by two of the winners of our grants competition: CMU roboticist Manuela Veloso and Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.