AI, Ethics and Society
2nd International Workshop
13th February 2016 | Phoenix, Arizona USA
What is the future of AI? And what should we be doing about it now?
This workshop focuses on the ethical and societal implications of building AI systems. It follows a successful full-day workshop held at AAAI-15, and there is an increasing appetite both within and beyond the AI community for such discussions. The workshop will consist of invited talks and tutorials, submitted papers, and one or more panel discussions. Topics include, but are not limited to:
- The future of AI
- AI as a threat to or saviour for humanity
- Mechanisms to ensure moral behaviours in AI systems
- Safeguards necessary within AI research
- Autonomous agents in the military
- Autonomous agents in commerce and other domains
- The impact of AI on work and other aspects of our lives
Tentative Schedule
09.00-10.15: AI and Ethics 1
Benjamin Kuipers | Human-like Morality and Ethics for Robots.
Joanna Bryson | Patiency Is Not a Virtue: AI and the Design of Ethical Systems.
Jessica Taylor | Quantilizers: A Safer Alternative to Maximizers for Limited Optimization.
10.15-10.45: Coffee.
10.15-11.20: Posters.
Tsvi Benson-Tilsen and Nate Soares | Formalizing Convergent Instrumental Goals.
Kaj Sotala | Defining Human Values for Value Learners.
Aaron Isaksen, Julian Togelius, Frank Lantz and Andy Nealen | Playing Games Across the Superintelligence Divide.
Jason Wilson | Group Optimization: A Framework for Evaluating and Designing Human-Robot Relationships.
Mark Riedl and Brent Harrison | Using Stories to Teach Human Values to Artificial Agents.
Emanuelle Burton, Judy Goldsmith and Nicholas Mattei | Using “The Machine Stops” for Teaching Ethics in Artificial Intelligence and Computer Science.
11.20-13.00: AI and Ethics 2
Toby Walsh | Why the Technological Singularity May Never Happen.
Miles Brundage | Modeling Progress in AI.
Roman Yampolskiy | Taxonomy of Pathways to Dangerous Artificial Intelligence.
David Abel, James Macglashan and Michael Littman | Reinforcement Learning as a Framework for Ethical Decision Making.
13.00-14.00: Lunch.
14.00-15.00: AI & Safety 1
Max Tegmark & Richard Mallah | Introductory remarks on the history and importance of the AI safety and beneficence grants program, and on how the currently funded AI projects tie together conceptually.
Kaj Sotala | Teaching AI Systems Human Values Through Human-Like Concept Learning.
Vincent Conitzer | How to Build Ethics into Robust Artificial Intelligence.
Fuxin Li | Understanding When a Deep Network Is Going to Be Wrong.
Francesca Rossi | Safety Constraints and Ethical Principles in Collective Decision Making Systems.
Bas Steunebrink | Experience-based AI (EXPAI).
15.00-16.00: AI & Safety 2
Manuela Veloso | Explanations for Complex AI Systems.
Brian Ziebart | Towards Safer Inductive Learning.
Percy Liang | Predictable AI via Failure Detection and Robustness.
Benja Fallenstein | Aligning Superintelligence With Human Interests.
Paul Christiano | Counterfactual Human Oversight.
16.00-16.30: Coffee Break.
16.30-18.00: AI & Safety 3
Stuart Russell | Value Alignment and Moral Metareasoning.
Stefano Ermon | Robust Probabilistic Inference Engines for Autonomous Agents.
Benjamin Rubinstein | Security Evaluation of Machine Learning Systems.
Panel discussion: What are the most promising research directions for keeping AI beneficial? (Russell, Conitzer, Parkes, Liang, Ermon, Rubinstein)
Location and Date
February 13th, 2016 at the Phoenix Convention Center: 100 N 3rd St, Phoenix, AZ 85004
For more information about the event and the participants, please visit the official AI, Ethics and Society workshop page.
About the Future of Life Institute
The Future of Life Institute (FLI) is a global non-profit with a team of 20+ full-time staff operating across the US and Europe. FLI has been working to steer the development of transformative technologies towards benefitting life and away from extreme large-scale risks since its founding in 2014. Find out more about our mission or explore our work.