
FLI May 2021 Newsletter

Published
31 May, 2021
Author
Georgiana Gilgallon

We’re looking for translators!

 


The outreach team is now recruiting Spanish and Portuguese speakers for translation work!

The goal is to make our social media content accessible to our rapidly growing audience in Central America, South America, and Mexico. The translator would be sent between one and five posts a week; in general, each snippet would be no longer than a single tweet.

We ask for an availability of two hours per week, though we do not expect the work to exceed one hour per week. The hourly compensation is $15. Depending on the outcomes of this project, the role may be short-term.

For more details and to apply, please fill out this form. We are also registering interest in other languages for future opportunities, so those fluent in other languages are welcome to fill out the form as well.

New Podcast Episodes

 


Bart Selman on the Promises and Perils of Artificial Intelligence

In this new podcast episode, Lucas is joined by Bart Selman, Professor of Computer Science at Cornell University, to discuss all things artificial intelligence.

Highlights of the interview include Bart's thoughts on what superintelligence might consist of, whether superintelligent systems could solve problems like income inequality, and whether they could teach us anything about moral philosophy. He also discusses the possibility of AI consciousness, the grave threat posed by lethal autonomous weapons, and whether the global race to advanced artificial intelligence may hurt our chances of successfully solving the alignment problem. Enjoy!

Reading & Resources

 


The Centre for the Study of Existential Risk is hiring for a Deputy Director!

The Centre for the Study of Existential Risk, University of Cambridge, is looking for a new Deputy Director. This role will involve taking full operational responsibility for the day-to-day activities of the Centre, including people and financial management, as well as contributing to strategic planning for the Centre.

CSER is looking for someone with strong experience in operations and strategy, with the interest and intellectual versatility to engage with and communicate the Centre’s research.

The deadline for applications is Sunday 4 July. More details on both the role and the person profile are available in the further particulars here.

The Leverhulme Centre for the Future of Intelligence (CFI) and CSER are also hiring for a Centre Administrator to lead the Department’s professional services support team. Further details can be found here.


The Global Catastrophic Risk Institute is looking for collaborators and advisees!

The Global Catastrophic Risk Institute (GCRI) is currently welcoming inquiries from people interested in seeking its advice and/or collaborating with it. These inquiries can concern any aspect of global catastrophic risk, but GCRI is especially keen to hear from people interested in its ongoing projects. These projects include AI policy, expert judgement on long-term AI, forecasting global catastrophic risks, and improving China-West relations.

Participation can range from a short email exchange to more extensive project work. In some cases, people may be able to get involved by contributing to ongoing dialogue, collaborating on research and outreach activities, and co-authoring publications. Inquiries are welcome from people at any career point (including students), from any academic or professional background, and from anywhere in the world. People from underrepresented groups are especially encouraged to reach out.

Find more details here!

This article in the New York Times details how scientific breakthroughs, together with advocacy efforts, caused average life expectancy to double between 1920 and 2020. We were particularly pleased to see last year's Future of Life Award winner, Bill Foege, mentioned for his crucial role in the eradication of smallpox.

“The story of our extra life span almost never appears on the front page of our actual daily newspapers, because the drama and heroism that have given us those additional years are far more evident in hindsight than they are in the moment. That is, the story of our extra life is a story of progress in its usual form: brilliant ideas and collaborations unfolding far from the spotlight of public attention, setting in motion incremental improvements that take decades to display their true magnitude.”

The International Committee of the Red Cross (ICRC) recently released its official position on autonomous weapons: "Unpredictable autonomous weapon systems should be expressly ruled out…This would best be achieved with a prohibition on autonomous weapon systems that are designed or used in a manner such that their effects cannot be sufficiently understood, predicted and explained."
