Entries by Ariel Conn

X-risk News of the Week: Human Embryo Gene Editing

X-risk = Existential Risk. The risk that we could accidentally (hopefully accidentally) wipe out all of humanity. X-hope = Existential Hope. The hope that we will all flourish and live happily ever after. If you keep up with science news at all, then you saw the headlines splashed all over news sources on Monday: The […]

Nuclear Warmongering Is Back in Fashion

“We should not be surprised that the Air Force and Navy think about actually employing nuclear weapons rather than keeping them on the shelf and assuming that will be sufficient for deterrence.” This statement was made by Adam Lowther, a research professor at the Air Force Research Institute, in an article for The National Interest, […]

An Explosion of CRISPR Developments in Just Two Months

CRISPR made big headlines in late November of 2015, when researchers announced they could possibly eliminate malaria by using the gene-editing technique to start a gene drive in mosquitoes. A gene drive occurs when a preferred version of a gene replaces the unwanted version in every case of reproduction, overriding Mendelian genetics, which say […]

Are Humans Dethroned in Go? AI Experts Weigh In

Today DeepMind announced a major AI breakthrough: they’ve developed software that can defeat a professional human player at the game of Go. This is a feat that has long eluded computers. Francesca Rossi, a top AI scientist with IBM, told FLI, “AI researchers were waiting for computers to master Go, but we did not expect […]

Is the Legal World Ready for AI?

Our smartphones are increasingly giving us advice and directions based on their best Internet searches. Driverless cars are rolling down the roads in many states. Increasingly complicated automation is popping up in nearly every industry. As exciting and beneficial as most of these advancements are, problems will still naturally occur. Is the legal system keeping […]

North Korea’s Nuclear Test

North Korea claims that, on January 6, they successfully tested their first hydrogen bomb. Seismic analysis indicates that they did, in fact, test what was likely a nuclear bomb, but experts, and now the White House, dispute whether it was a real hydrogen bomb. David Wright, Co-director of the Global Security Program for […]

2015: An Amazing Year in Review

Just four days before the end of the year, the Washington Post published an article arguing that 2015 was the year the beneficial AI movement went mainstream. FLI couldn’t be more excited and grateful to have helped make this happen and to have emerged as an integral part of this movement. And that’s only a part […]

Who’s In Control?

The Washington Post just asked one of the most important questions in the field of artificial intelligence: “Are we fully in control of our technology?” There are plenty of other questions about artificial intelligence that are currently attracting media attention, such as: Is superintelligence imminent and will it kill us all? As necessary as it is to […]

Were the Paris Climate Talks a Success?

An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute: On Friday, December 18, I talked with Seth Baum, the Executive Director of the Global Catastrophic Risk Institute, about the realistic impact of the Paris Climate Agreement. The Paris Climate talks ended December 12th, and there’s been a lot of fanfare in […]

Inside OpenAI: An Interview by SingularityHUB

The following interview was conducted and written by Shelly Fan for SingularityHUB. Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning. Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s […]

GCRI December News Summary

The following is the December news summary for the Global Catastrophic Risk Institute, written by Robert de Neufville. It was originally published at the Global Catastrophic Risk Institute. Please sign up for the GCRI newsletter. Chinese power plant image courtesy of Tobias Brox under a Creative Commons Attribution-ShareAlike 3.0 Unported license (the image has been cropped). Turkish F-16s shot […]

OpenAI Announced

Press release from OpenAI: Introducing OpenAI by Greg Brockman, Ilya Sutskever, and the OpenAI team December 11, 2015 OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since […]

The Future of Humanity Institute Is Hiring!

Exciting news from FHI: 1. Research Fellow – AI – Strategic Artificial Intelligence Research Centre, Future of Humanity Institute (Vacancy ID# 121242). We are seeking expertise in the technical aspects of AI safety, including a solid understanding of present-day academic and industrial research frontiers, machine learning development, and knowledge of academic and industry stakeholders and groups. […]

Guest Blog: Paris, Nuclear Weapons, and Suicide Bombing

The following post was written by Dr. Alan Robock, a Distinguished Professor of Climate Science at Rutgers University. France’s 300 nuclear weapons were useless in protecting it from the horrendous suicide bomb attacks in Paris on Nov. 13, 2015. And if France ever uses those weapons to attack another country’s cities and industrial areas, France […]

The World Has Lost 33% of Its Farmable Land

During the Paris climate talks last week, researchers from the University of Sheffield’s Grantham Center revealed that in the last 40 years, the world has lost nearly 33% of its farmable land. The loss is attributed to erosion and pollution, but the effects are expected to be exacerbated by climate change. Meanwhile, global food production […]