Why MIRI Matters, and Other MIRI News

The Machine Intelligence Research Institute (MIRI) just completed its most recent round of fundraising, and with that Jed McCaleb wrote a brief post explaining why MIRI and their AI research is so important. You can find a copy of that message below, followed by MIRI’s January newsletter, which was put together by Rob Bensinger.

Jed McCaleb on Why MIRI Matters

A few months ago, several leaders in the scientific community signed an open letter calling for oversight of artificial intelligence research and development, in order to mitigate the risks and ensure that this advanced technology benefits society. Researchers largely agree that AI is likely to begin outperforming humans on most cognitive tasks this century.

Similarly, I believe we'll see the promise of human-level AI come to fruition much sooner than most people expect. Its effects will likely be transformational: for the better if it is used to improve the human condition, or for the worse if it is used incorrectly.

As AI agents become more capable, it becomes more important to analyze and verify their decisions and goals. MIRI's focus is on how we can create highly reliable agents that learn human values, and on the need for better decision-making processes to power these new technologies.

The past few years have seen a vibrant and growing AI research community. As the field continues to flourish, the need for collaboration will continue to grow as well. Organizations like MIRI that are dedicated to security and safety engineering help fill this need. And, as a nonprofit, MIRI's research is free from profit obligations. This independence matters because it leads to safer and more neutral results.

By supporting organizations like MIRI, we're putting safeguards in place to make sure this immensely powerful technology is used for the greater good. For humanity's benefit, we need to ensure that AI systems reliably pursue goals aligned with human values. If organizations like MIRI can help engineer this level of reliability and awareness in AI systems, the possibilities for improving our world are enormous. It's critical that we put this infrastructure in place to ensure that AI is used to make people's lives better. This is why I've donated to MIRI, and why I believe it's a worthy cause that you should consider as well.

January 2016 Newsletter

Research updates

General updates

News and links

1 reply
  1. Mindey says:

    Yes, the last paper (“Proof-Producing Reflection for HOL”) actually gives hope. It looks like we have at least a principle for establishing the Vingean reflection property. There are many challenges ahead.

    We realize it is unlikely that real-world agents will base their behavior entirely on formal proofs. For this to work, the property would therefore have to be established probabilistically for the first iteration of superintelligence that mankind creates, with respect to mankind itself.

    We are talking about something mankind should do as a single mind. Meanwhile, it is not a single mind. We see a sporadic rise of intelligences as non-open institutions work toward empowering themselves with deep learning, which they do not fully understand.

    As a side note, I have started writing a blog at https://xrisk.xyz to discuss x-risk research papers in more detail and with a wider audience, with support for multiple languages and mathematical markup in the comments. What I would like to do is create a broader formal discussion of research details outside the research communities, reaching the multilingual wider world. I'd love to have feedback on it.
