Entries by admin

Research priorities for robust and beneficial AI

A summary of the research areas covered by our grants program can be found here. If you are interested in promoting the safe and beneficial development of technology, we sincerely invite you to join the Future of Life Institute volunteer team. Here, you will have the opportunity to work with a group of volunteers to catalyze research and proposals that benefit humanity's future through writing, translation, outreach activities, and exchanges with experts and scholars. Depending on your interests, volunteers can learn about technology risk and safety and gain experience in writing, research, and outreach. Volunteers with leadership ability may also become group leaders in the future. We welcome you to join; if interested, please contact lina@futureoflife.org.

AI safety conference in Puerto Rico

The Future of AI: Opportunities and Challenges. This conference brought together the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research directions that can help maximize the future benefits of AI while avoiding pitfalls (see this open letter and […]

The Power to Remake a Species

  Once started, a carefully implemented gene drive could eradicate the entire malaria-causing Anopheles species of mosquito. In 2013, some 200 million humans suffered from malaria, and an estimated 584,000 of them died, 90 percent in Africa. The vast majority of those killed were children under age 5. Decades of research have fallen short of […]

GCRI: Aftermath

Aftermath: Finding practical paths to recovery after a worldwide catastrophe. By Steven Ashley, March 13, 2015. Tony Barrett, Global Catastrophic Risk Institute. OK, we survived the cataclysm. Now what? In recent years, warnings by top scientists and industrialists have energized research into the sort of civilization-threatening calamities that are typically the stuff of sci-fi and […]

MIRI: Artificial Intelligence: The Danger of Good Intentions

[Photo: Nate Soares (left) and Nisan Stiennon (right) at the Machine Intelligence Research Institute. Credit: Vivian Johnson] The Terminator had Skynet, an intelligent computer system that turned against humanity, while the astronauts in 2001: A Space Odyssey were tormented by their spaceship’s sentient computer HAL 9000, which had gone rogue. The idea that artificial systems could gain consciousness and try […]

Global catastrophic risk news summary

Here are the July and August global catastrophic risk news summaries, written by Robert de Neufville of the Global Catastrophic Risk Institute. The July summary covers the Iran deal, Russia’s new missile early warning system, dangers of AI, new Ebola cases, and more. The August summary covers the latest confrontation between North and South Korea, the world’s first low-enriched uranium storage bank, the […]

MIRI News: September 2015

[ Rob Bensinger is the Outreach Coordinator of the Machine Intelligence Research Institute (MIRI), a research nonprofit studying the technical questions raised by the prospect of smarter-than-human autonomous AI. MIRI’s researchers are among the recipients of the 2015 Future of Life Institute project grants in AI. ] MIRI’s September Newsletter collects recent news and links related to the long-term impact […]

Happy Petrov Day!

32 years ago today, Soviet army officer Stanislav Petrov refused to follow protocol and averted a nuclear war. From “9/26 is Petrov Day”: “On September 26th, 1983, Lieutenant Colonel Stanislav Yevgrafovich Petrov was the officer on duty when the warning system reported a US missile launch. Petrov kept calm, suspecting a computer error. Then the system […]

Policy Exchange: Co-organized with CSER

This event occurred on September 1, 2015. When one of the world’s leading experts in Artificial Intelligence makes a speech suggesting that a third of existing British jobs could be made obsolete by automation, it is time for think tanks and the policymaking community to take notice. This observation – by Associate Professor of Machine […]

Future of Life Institute Summer 2015 Newsletter

TOP DEVELOPMENTS * $7M in AI research grants announced: We were delighted to announce the selection of 37 AI safety research teams, to which we plan to award a total of $7 million in funding. The grant program is funded by Elon Musk and the Open Philanthropy Project. Max Tegmark, along with FLI grant recipients Manuela Veloso and Thomas Dietterich, was interviewed on […]

AI conference

This event was held January 2-5, 2015 in San Juan, Puerto Rico. We organized our first conference, The Future of AI: Opportunities and Challenges. This conference brought together the world’s leading AI builders from academia and industry to engage with each other and experts in economics, law and ethics. The goal was to identify promising research […]

Martin Rees: Catastrophic Risks: The Downsides of Advancing Technology

This event was held Thursday, November 6, 2014 in Harvard auditorium Jefferson Hall 250. Our Earth is 45 million centuries old. But this century is the first in which one species, ours, can determine the biosphere’s fate. Threats from the collective “footprint” of 9 billion people seeking food, resources and energy are widely discussed. But less […]