
AI Researcher André Platzer

Published:
September 30, 2016
Author:
Revathi Kumar


AI Safety Research




André Platzer

Associate Professor

Computer Science Department, Robotics Institute

Carnegie Mellon University

aplatzer@cs.cmu.edu

Project: Faster Verification of AI-based Cyber-physical Systems

Amount Recommended: $200,000




Project Summary

The most exciting and impactful uses of artificial intelligence (AI), the ones that will affect everybody in crucial ways, involve smart decision making that helps people in the real world, such as when driving cars or flying aircraft, or when robots engage with humans. All of these have huge potential for making the world a better place, but they also impose considerable responsibility on the system designer to ensure that the system will not do more harm than good because its decisions are sometimes systematically unsafe. Responsibly allowing mankind to rely on such technology requires stringent assurance of its safety, which is a nontrivial enterprise given the decision flexibility of AI. The goal of this research project is to develop formal verification and validation technology that helps ensure the safety, robustness, and reliability of AI-based system designs. The world is an uncertain place, so perfect performance cannot always be guaranteed in all respects, but good system designs do not compromise safety when their performance degrades. The PI proposes to advance his verification tool KeYmaera, which has been used successfully for systems known as cyber-physical systems (combining computer decisions with physics or motion), toward the additional challenges posed by a deep integration of AI into those systems.

Technical Abstract

The most important and most impactful AI-based systems are those that directly interface with the physical world. Cyber-physical systems (CPS) combine computers (for decisions) with physics (for motion) and play a prominent role, e.g., in cars, aircraft, and robots. Because of their impact on people, they come with stringent safety requirements to make sure they make the world a better place. AI-based components are becoming more prominent as a way to enable sophisticated automation in these systems, but their impact on system safety is not yet well understood.

This project studies how to extend safety analysis and verification technology for cyber-physical systems to address the additional challenges that AI-based CPS pose.

The PI developed a verification tool for CPS, KeYmaera, which has had considerable success in verifying them. KeYmaera has been used successfully for a safety analysis of an AI-intensive CPS, the Airborne Collision Avoidance System ACAS X, albeit with non-negligible effort. For a system with such worldwide impact as ACAS X, the effort amortizes. This project proposes to develop verification technology that reduces the effort needed to verify AI-based systems, in order to achieve more widespread adoption of safety analysis for AI-based CPS.
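To make the setting concrete, KeYmaera (and its successor KeYmaera X) verifies safety conjectures stated in differential dynamic logic (dL), in which a hybrid program interleaves discrete control decisions with continuous dynamics. The following formula is a minimal textbook-style sketch in that spirit, not a model from this project: a car at position x with velocity v brakes at rate b > 0 and must never pass an obstacle at position m.

```latex
% Illustrative dL safety conjecture (braking car), not from the grant:
% if the stopping distance v^2/(2b) fits before the obstacle m, then
% always braking keeps the car at or before m, for every number of
% repetitions of the control loop (the * on the hybrid program).
\[
  v \ge 0 \;\land\; b > 0 \;\land\; x + \tfrac{v^2}{2b} \le m
  \;\rightarrow\;
  \Big[\big(a := -b;\ \{x' = v,\ v' = a \;\&\; v \ge 0\}\big)^{*}\Big]\, x \le m
\]
```

A dL proof of such a formula covers all of the infinitely many real-valued behaviors of the hybrid program at once, which is what distinguishes this style of verification from testing; the ACAS X analysis mentioned above follows the same scheme with far larger controllers.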




Publications

  1. Fulton, Nathan and Platzer, André. A logic of proofs for differential dynamic logic: Toward independently checkable proof certificates for dynamic logics. In Jeremy Avigad and Adam Chlipala, editors, Proceedings of the 5th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP 2016), St. Petersburg, FL, USA, January 18-19, 2016, pp. 110-121. ACM, 2016. http://nfulton.org/papers/lpdl.pdf

Presentations

  1. Fulton, Nathan. Presented A Logic of Proofs for Differential Dynamic Logic: Toward Independently Checkable Proof Certificates for Dynamic Logics at the 5th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP 2016).
  2. Nathan Fulton, Stefan Mitsch, and André Platzer presented a tutorial on KeYmaera X and hybrid systems verification at CPSWeek 2016; a similar tutorial has been accepted at FM 2016.
  3. Nathan Fulton presented a talk on work supported by this grant at a workshop on Safe AI for CPS held at Carnegie Mellon in April 2016. http://www.ls.cs.cmu.edu/CPSVVIF-2016/Program.html

Course Materials


Course Names:

  1. “Foundations of Cyber-Physical Systems” – Spring 2016

Software Releases

  1. Major contributions to the KeYmaera X Theorem Prover for Hybrid Systems were supported by this grant:
    • An initial public release of the KeYmaera X source code under a FOSS license, including the improved tactics framework and contextual equivalence tactics discussed in the grant proposal (a brief usage sketch follows this list).
    • Several subsequent public releases of the prover and its source code containing improvements resulting from watching students interact with KeYmaera X in the FCPS course.
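As a rough illustration of what the released tactics framework looks like from a user's perspective, the following Scala sketch combines a few KeYmaera X tactic combinators to prove the braking-car conjecture shown earlier. The names (implyR, loop, QE, ODE, and the & and <(...) combinators) follow the Bellerophon/TactixLibrary style of KeYmaera X, but exact package paths and signatures have varied across releases, so treat this as an assumed approximation rather than the exact released API:

```scala
// Illustrative sketch only: combinator names follow KeYmaera X's
// TactixLibrary/Bellerophon style; exact signatures may differ by release.
import edu.cmu.cs.ls.keymaerax.btactics.TactixLibrary._
import edu.cmu.cs.ls.keymaerax.parser.StringConverter._

object BrakingCarTactic {
  // Loop invariant for the braking-car conjecture: the stopping
  // distance v^2/(2b) never exceeds the remaining distance to m.
  val invariant = "v >= 0 & x + v^2/(2*b) <= m".asFormula

  // implyR moves the precondition into the antecedent, loop performs
  // induction with the invariant, and the three resulting branches
  // (initial case, use case, induction step) are closed by real
  // arithmetic (QE) and differential equation reasoning (ODE).
  val tactic = implyR(1) & loop(invariant)(1) <(
    QE,     // invariant holds initially
    QE,     // invariant implies the postcondition x <= m
    ODE(1)  // invariant is preserved by the braking dynamics
  )
}
```

Local, contextual tactics of the kind developed under this grant aim to shrink the proof-search effort in such steps: an equivalence proved once for a subformula can be applied directly in place instead of re-deriving facts about the whole model.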

Ongoing Projects/Recent Progress

  1. André Platzer and colleagues
    • Worked on enabling fast reasoning by improving the locality of proof search tactics and providing ways of local reasoning via contextual equivalence. They successfully added contextual equivalence and local reasoning tactics to the KeYmaera X tool, as well as tactics for contextual implication reasoning and reasoning by monotonicity; the ACAS X case study suggests both are crucial elements of fast verification of large-scale AI CPS. (The rule schemas behind these tactics are sketched after this list.)
    • Developed a logic that reifies the structure of proofs that contain efficient contextual equivalence reasoning. The resulting theory and tooling will be instrumental to all future developments in KeYmaera X, including the development of verified AI-based controllers for CPS.
    • Developed a suite of proof tactics for differential dynamic logic that support contextual equivalence reasoning. These tactics are published in the KeYmaera X GitHub repository and were evaluated in part on sub-components of the previously existing ACAS X safety proof in KeYmaera X. A research paper describing and evaluating the tactics framework and some of the tactics library is currently under review. The KeYmaera X improvements enabled by this grant will serve as a springboard for improvements to CPS verification technology in the PI's research group, including planned work for year two of this project.
    • Developed a logic that reifies the contextual equivalence proofs generated by the tactics library described above [1]. Explicit proofs provide a theoretical foundation that the PI and his supported student plan to exploit in later phases of this grant. The primary goal of the proposed work is to demonstrate that AI can be justifiably used in safety-critical CPS. Demonstrating this claim will require generating code from models so that models of AI controllers for cyber-physical systems (AI CPS) can be validated on real-world systems. Proof terms provide a theoretical foundation for solving important sub-problems in hybrid systems code generation. Proof terms are additionally attractive from an AI-in-CPS perspective because of the importance of explainability in AI systems. Explicit proof certificates serve as formal, permanent, and persistent explanations for why a system is believed to behave properly; therefore, if AI CPS can be expressed in differential dynamic logic, then the proof certification logic developed in this work can be used to interrogate the formal explanation for why an AI CPS is considered to be safe.
  2. Educational activities supported by this grant focused on extending the user base of KeYmaera X so as to understand the impact of contextual equivalence reasoning on the speed and difficulty of CPS verification tasks. Toward these ends, the PI modified his Foundations of Cyber-Physical Systems undergraduate course to use the KeYmaera X prover and invited experts from industry to judge final course projects. The PI and his collaborators also presented a KeYmaera X tutorial at CPS Week 2016.
    • The robust tactics library developed during the past year is an enabling technology for the development of reasoning techniques focused on AI-based CPS. The second year of this grant will focus on constructing models of AI-based CPS and developing proof search procedures that establish safety properties for AI-based CPS more automatically and efficiently. In addition to this proposed work, these researchers also plan to develop code extraction procedures so that models of AI-based controllers verified in KeYmaera X can be validated on real-world systems.
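For reference, the two rule schemas behind the contextual tactics described above are standardly stated for dL roughly as follows; this presentation is a sketch and may differ cosmetically from their realization in KeYmaera X:

```latex
% Contextual equivalence (CE): an equivalence proved once rewrites a
% subformula P to Q inside an arbitrary formula context C{_} without
% reproving anything about the context. Monotonicity (M): implications
% propagate through the box modality of any hybrid program alpha.
\[
  \text{(CE)}\;\;
  \frac{\;\vdash P \leftrightarrow Q\;}
       {\;\vdash C\{P\} \leftrightarrow C\{Q\}\;}
  \qquad
  \text{(M)}\;\;
  \frac{\;P \vdash Q\;}
       {\;[\alpha]\,P \vdash [\alpha]\,Q\;}
\]
```

Locality comes from applying CE at exactly the subformula of interest rather than decomposing the entire sequent; the proof-term logic of [1] then records such steps in an independently checkable certificate pairing a proof term with the formula it proves.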

This content was first published at futureoflife.org on September 30, 2016.

