AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala

In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks which are both terminal in severity and transgenerational in scope. If we were to keep a risk's scope transgenerational but increase its severity beyond terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity?

In this podcast, Lucas spoke with Kaj Sotala, an associate researcher at the Foundational Research Institute. He previously worked for the Machine Intelligence Research Institute and has publications on AI safety, AI timeline forecasting, and consciousness research.

Topics discussed in this episode include:

-The definition and taxonomy of suffering risks
-Why superintelligence has special leverage for generating or mitigating suffering risks
-How different moral systems view suffering risks
-What the space of possible minds implies for suffering risks
-The probability of suffering risks
-What we can do to mitigate suffering risks