MIRI May 2016 Newsletter
May 19, 2016
- Two new papers, “Uniform Coherence” and “Asymptotic Convergence in Online Learning with Unbounded Delays,” split logical uncertainty into two distinct subproblems.
- New at IAFF: An Approach to the Agent Simulates Predictor Problem; Games for Factoring Out Variables; Time Hierarchy Theorems for Distributional Estimation Problems
- We will be presenting “The Value Learning Problem” at the IJCAI-16 Ethics for Artificial Intelligence workshop instead of the AAAI Spring Symposium where it was previously accepted.
- We’re launching a new research program with a machine learning focus. Half of MIRI’s team will be investigating potential ways to specify goals and guard against errors in advanced neural-network-inspired systems.
- We ran a type theory and formal verification workshop this past month.
News and links
- The Open Philanthropy Project explains its strategy of high-risk, high-reward hits-based giving and its decision to make AI risk its top focus area this year.
- Also from OpenPhil: Is it true that past researchers over-hyped AI? Is there a realistic chance of AI fundamentally changing civilization in the next 20 years?
- From Wired: Inside OpenAI, and Facebook is Building AI That Builds AI.
- The White House announces a public workshop series on the future of AI.
- The Wilberforce Society suggests policies for narrow and general AI development.
- Two new AI safety papers: “A Model of Pathways to Artificial Superintelligence Catastrophe for Risk and Decision Analysis” and “The AGI Containment Problem.”
- Peter Singer weighs in on catastrophic AI risk.
- Digital Genies: Stuart Russell discusses the problems of value learning and corrigibility in AI.
- Nick Bostrom is interviewed at CeBIT (video) and also gives a presentation on intelligence amplification and the status quo bias (video).
- Jeff McMahan critiques philosophical critiques of effective altruism.
- Yale political scientist Allan Dafoe is seeking research assistants for a project on political and strategic concerns related to existential AI risk.
- The Center for Applied Rationality is accepting applicants to a free workshop for machine learning researchers and students.
This newsletter was originally posted here.