We’ve accomplished a lot. FLI is a small organization that has only been around for a few years, but during that time, we’ve: Helped mainstream AI safety research, including hosting two AI safety conferences (in Puerto Rico and Asilomar) and an Ethics of Value Alignment Workshop and supporting numerous AI safety workshops and events; Funded […]
About The FLI Team
This author has yet to write their bio. Meanwhile, let's just say that we are proud that The FLI Team contributed a whopping 6 entries.
Entries by The FLI Team
On August 30, the State of California unanimously adopted legislation in support of the Future of Life Institute’s Asilomar AI Principles. The Asilomar AI Principles are a set of 23 principles intended to promote the safe and beneficial development of artificial intelligence. The principles – which include […]
Here are some examples of the kinds of topics we are interested in funding. Many of them are based on problems from the following research agendas: Concrete Problems in AI Safety and Agent Foundations. Scalable reward learning One way to specify human preferences to an artificial agent is having the agent learn a reward function […]
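The excerpt above mentions scalable reward learning, in which an agent infers a reward function from human preferences rather than being given one directly. As a rough illustration only (not FLI's or any grantee's method), here is a minimal sketch of one common formulation: fitting a linear reward function to pairwise preference data under a Bradley-Terry model. All names and the toy data are hypothetical.

```python
import math

def reward(w, features):
    """Linear reward: dot product of weights and trajectory features."""
    return sum(wi * fi for wi, fi in zip(w, features))

def train(preferences, dim, lr=0.5, steps=2000):
    """Fit reward weights from pairwise preferences.

    preferences: list of (preferred_features, other_features) pairs,
    where each element is a feature vector of length `dim`.
    """
    w = [0.0] * dim
    for _ in range(steps):
        for better, worse in preferences:
            # Bradley-Terry model: P(better preferred) = sigmoid(r(better) - r(worse))
            diff = reward(w, better) - reward(w, worse)
            p = 1.0 / (1.0 + math.exp(-diff))
            # Gradient ascent on the log-likelihood of the observed preference
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (better[i] - worse[i])
    return w

# Hypothetical toy data: trajectories summarized by two features; the
# human consistently prefers trajectories scoring high on the first one.
prefs = [([1.0, 0.0], [0.0, 1.0]),
         ([0.9, 0.1], [0.2, 0.8])]
w = train(prefs, dim=2)

# The learned reward should rank the preferred trajectories higher.
assert reward(w, [1.0, 0.0]) > reward(w, [0.0, 1.0])
```

In practice the reward model is usually a neural network and the preferences come from human comparisons of agent behavior, but the core idea, maximizing the likelihood of observed preferences under a probabilistic choice model, is the same.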
For many years, artificial intelligence (AI) research has been appropriately focused on the challenge of making AI effective, with significant recent success, and great future promise. This recent success has raised an important question: how can we ensure that the growing power of AI is matched by the growing wisdom with which we manage it? […]
We, the organizers, found it extraordinarily inspiring to be a part of the BAI 2017 conference, the Future of Life Institute’s second conference on the future of artificial intelligence. Along with being a gathering of endlessly accomplished and interesting people, it gave a palpable sense of shared mission: a major change is coming, over unknown […]