Abeer Sharma

Organisation
University of Hong Kong
Biography

Why do you care about AI Existential Safety?

Advanced AI systems pose major risks if not developed responsibly. Misuse by governments or corporations could seriously threaten human rights through surveillance, suppression of dissent, algorithmic profiling, and a growing loss of freedoms. Even more alarmingly, an improperly designed superintelligent AI coupled with drones and robots could pose an existential risk to humanity. To prevent this, AI development must prioritize human rights, strong oversight and accountability, and extensive safety research to ensure it remains reliably beneficial and aligned with human values.

Please give at least one example of your research interests related to AI existential safety:

Broadly, I am interested in AI safety in the context of law enforcement and public governance, with a particular focus on risk management strategies and safeguards for AI development and deployment that prevent the infringement of human and civil rights.
