The viral disease known as COVID-19 has claimed hundreds of thousands of lives, devastated the global economy and brought much of the world to a virtual standstill. Sluggish and insufficient government responses have hugely exacerbated the damage. And yet for researchers and modelers, the outbreak of a pandemic — as well as our underpreparedness for such an event — came as no surprise. Given the potential consequences, why didn’t we plan ahead?
As we begin to rebuild, it’s important to consider the failures and mistakes that helped fuel this crisis. What can we learn from COVID-19 about preparedness for pandemics and other catastrophic risks? We asked experts from a variety of fields — risk management, medicine, biology, engineering and more — what they had to say. Find a few of our favorite quotes below, and browse the full responses, plus answers from other experts, here.
Jaan Tallinn | Cofounder, Skype
“…humanity will have species-wide emergencies in the future, so being dismissive about ‘tail risks’ is myopic and harmful…” More
Clarissa Rios Rojas | Research Associate, CSER
“…As citizens, we need to observe the decisions that our political leaders are taking and evaluate them step by step so in the future we can have an informed vote…” More
Stuart Russell | Professor, UC Berkeley
“…nothing can be done if knowledge and expertise are discarded in favor of political expediency and prejudice…” More
Essential to our assessment of risk and our ability to plan for the future is understanding the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of “superforecasters” is attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in the experts’ own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it’s done, and the ways it can help us with crucial decision making. Listen here.
Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we’ve invited Rohin — along with fellow researcher Buck Shlegeris — back for a follow-up conversation. Today’s episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck’s and Rohin’s thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing. Listen here.
FLI and Sapiens Plurum are thrilled to announce their sixth annual short fiction contest, themed “Better Futures: Interspecies Interaction.” The purpose of the contest is to entice authors to conceive of the future in terms of desirable outcomes, and to imagine how we might get there. The winner will receive $1,000; second prize is $500, and third prize is $300. Submissions will be accepted through May 31, 2020.
The Better Futures Contest asks writers to imagine how technology can increase empathy and connection. The news today is full of examples of technology creating dissension and amplifying differences. We ask authors to imagine ways that technology can improve how we relate to each other and bring us closer, even across species. We welcome stories that view life from another species’ point of view and/or explore empathy between different forms of life.