with Catherine Rhodes
A Chinese researcher recently made international news by claiming to have created the first gene-edited human babies using CRISPR. In doing so, he violated international ethics standards, and he appears to have acted without the knowledge of his funders or his university. But this is only the latest example of biological research triggering ethical concerns. A few years ago, gain-of-function research on the H5N1 avian flu virus also sparked controversy when the scientists involved tried to publish their work. And there has been extensive debate worldwide about the ethics of human cloning.
As biotechnology and other emerging technologies grow more powerful, the dual-use nature of research (that is, research that can have both beneficial and harmful outcomes) becomes increasingly important to address. How can scientists and policymakers work together to ensure that the regulation and governance of technological development enable researchers to do good with their work while minimizing the threats?
On this month’s podcast, Ariel spoke about these issues with Catherine Rhodes, a senior research associate and deputy director of the Centre for the Study of Existential Risk.
Topics discussed in this episode include:
- Gain-of-function research, the H5N1 virus (avian flu), and the risks of publishing dangerous information
- The roles of scientists, policymakers, and the public to ensure that technology is developed safely and ethically
- The controversial Chinese researcher who claims to have used CRISPR to edit the genome of twins
- How scientists can anticipate whether the results of their research could be misused by someone else
- The extent to which risk stems from the technology itself, versus from how we govern it
What We’ve Been Up to This Month
Jessica Cussins and Richard Mallah attended the Partnership on AI “All Partners Meeting” in San Francisco on November 14 and 15. Jessica participated in the “Fair, Transparent, and Accountable AI” working group, while Richard gave a short talk on technical AI safety and participated in the “Safety Critical AI Working Group.”
At the end of October, Victoria Krakovna, together with Jan Leike, ran a session at EA Global London on the machine learning approach to AI safety. They explored some of the assumptions and considerations that arise as they reflect on different research agendas. This blog post discusses the session.
Ariel Conn participated in the final meeting of the first cohort of the N Square Innovations Network. The original cohort met with Cohort 2 at the Rhode Island School of Design to discuss how the next group can continue to develop new approaches to reducing the nuclear threat.