Wayne Wei Wang

Position
PhD Candidate
Organisation
University of Hong Kong
Biography

Why do you care about AI Existential Safety?

Advanced AI technologies, their potential risks, and their societal implications are subjects of worldwide attention. The development of transformative AI systems promises immense benefits but also grave perils. On the one hand, AI has the capacity to drive groundbreaking advancements across numerous domains, catalyzing innovation and enhancing human well-being. On the other hand, unchecked or misaligned AI development brings short-term risks (e.g., deepfakes and copyright infringement) and long-term risks (such as concentration of power), with the potential to harm individuals, organizations, and communities. This dichotomy compels researchers to prioritize the study of robust AI safety and governance frameworks at global, comparative, and local levels, with the aim of safeguarding against unintended consequences by ensuring that technological progress remains firmly grounded in human values, legal principles, and social norms. Ultimately, the focus on AI safety stems from an ethical responsibility and the conviction that proactive governance is essential for navigating the complex landscape of advanced AI. Hard-soft-law-by-design approaches (e.g., technical governance standards at the level of developers, labs, entities, industries, communities, and states) offer the institutional flexibility to foster AI innovation and entrepreneurship while still serving the public good.

Please give at least one example of your research interests related to AI existential safety:

The global landscape has witnessed a marked increase in AI-related legislation, encompassing a diverse array of hard-law examples that are currently in force, recently passed, or proposed across jurisdictions. Concurrently, the drive for ethical, societal, and responsible risk-based governance models has given rise to soft laws, embodied in AI safety benchmarks. These legislative and normative frameworks reflect a range of priorities, whether pro-innovation, pro-rights, or both. However, a discernible gap persists in the interpretive layer between AI developers and legislators/regulators. This cognitive divide is characterized by the challenge of translating complex AI technological elements into coherent behavioral codes.

To bridge this gap, I examine the concept of “AI legitimacy” – an interdisciplinary, evidence-enabled normative framework endowed with the coercive force of legal authority. The divergence in terminology used by various AI stakeholders, all of whom aim to mitigate AI-related risks, underscores the need for a shared and harmonized “corpus” that can support collective consensus across jurisdictions. Accordingly, my research aims to design AI safety institutions, such as machine-level and human/organization-level audits with legal authority, to ensure robust and trustworthy human-AI interaction in alignment with human values, legal rules, and social norms.
