Hao Chen

Organisation
University College London
Biography

Why do you care about AI Existential Safety?

As AI technology becomes more deeply integrated into daily life and work, the scenarios it encounters grow more complex and variable. For example, autonomous vehicles must accurately identify pedestrians and other vehicles under difficult weather and traffic conditions, and medical diagnostic systems must deliver accurate, reliable diagnoses across a wide range of diseases and patient differences. Improving the robustness and generalization of AI systems therefore helps them adapt to novel situations, reduces decision-making errors, and significantly improves the accuracy and safety of their decisions, ensuring that AI technology can be applied effectively. Strengthening these capabilities is key to AI safety.

Please give at least one example of your research interests related to AI existential safety:

My research focuses on reinforcement learning, especially its generalization capabilities and robustness to noise disturbances. These properties are crucial for AI existential safety, because reinforcement learning models are often deployed in complex, constantly changing environments such as autonomous vehicles and smart medical systems, where they must cope with unprecedented situations and environmental noise to keep their decisions reliable and safe. Specifically, I study ad hoc collaboration algorithms, which let AI systems flexibly adjust their behavioral strategies to the requirements of different environments, adapting to environmental change and transferring knowledge learned in one setting to others. I also work on making these models more robust to changes in input data and to operational errors, including developing algorithms that automatically adapt to varied inputs. These efforts aim to advance the safety and reliability of reinforcement learning, reduce the potential negative consequences of artificial intelligence, maximize its positive impacts, and bring broader benefits to human society.
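One common way to pursue the kind of noise robustness described above is to perturb an agent's observations during training so the learned policy cannot rely on exact sensor readings. The sketch below is purely illustrative and is not taken from the author's work: it assumes a minimal Gym-style environment interface (`reset()` and `step(action)`), and all class and parameter names (`NoisyObservationWrapper`, `noise_std`, `ToyEnv`) are hypothetical.

```python
import numpy as np


class ToyEnv:
    """Tiny stand-in environment: the state is a 2-D point moved by the action."""

    def __init__(self):
        self.state = np.zeros(2)

    def reset(self):
        self.state = np.zeros(2)
        return self.state.copy()

    def step(self, action):
        self.state = self.state + np.asarray(action, dtype=float)
        # Reward penalizes distance from the origin; episode never terminates here.
        reward = -float(np.linalg.norm(self.state))
        return self.state.copy(), reward, False, {}


class NoisyObservationWrapper:
    """Adds Gaussian noise to every observation the agent sees.

    Training against such perturbed observations is one simple form of
    robustness-oriented training (a variant of domain randomization); a policy
    trained this way cannot overfit to exact state values.
    """

    def __init__(self, env, noise_std=0.1, rng=None):
        self.env = env
        self.noise_std = noise_std
        self.rng = rng if rng is not None else np.random.default_rng(0)

    def _perturb(self, obs):
        noise = self.rng.normal(0.0, self.noise_std, size=np.shape(obs))
        return obs + noise

    def reset(self):
        return self._perturb(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return self._perturb(obs), reward, done, info
```

With `noise_std=0` the wrapper is transparent, which makes it easy to compare a policy's performance on clean versus noisy observations and quantify how much robustness the training perturbations actually bought.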
