Current Job Postings From FLI and Our Partner Organizations:


Berkeley Existential Risk Initiative

BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Its main strategy is to identify technologies that may pose significant civilization-scale risks, and to promote and provide support for research and other activities aimed at reducing those risks.

Project Manager

BERI is seeking Project Managers to join its core staff. Project Managers are the main role at BERI; they’re competent generalists ready and eager to work on the entirety of BERI’s existing and upcoming portfolio of projects. We are only interested in candidates who, if successful, intend to take this role for 2+ years.

For organizations that BERI is highly aligned with, BERI is willing to invest significant time and funding into providing direct support. BERI currently offers this type of direct support to three organizations: CHAI, FHI, and CSER.

We expect that most project managers will focus on these or future collaborations, or on expanding BERI’s ability to deliver on collaborations by working on new collaboration-associated programs or services.

That said, there are a number of other potential workstreams at BERI that Project Managers may end up involved in. We appreciate candidates with an interest in these additional areas, although note that opportunities within them may not often be available:

  1. Projects supporting BERI’s efforts toward organizational maturity
  2. Internal operations, such as financial management and HR
  3. New program delivery

We want people who have high initiative and a tendency to get things done, who feel dissatisfied and want to take action when they see x-risk mitigation projects getting stalled in ways that they can personally unblock.

Specific Qualifications

A Project Manager should aim to be able to:

(a) maintain strong professional interpersonal relationships while clearly and responsibly holding to BERI’s ethos,

(b) be highly resourceful—as often as we can, we want to say ‘yes’ to our partners’ requests,

(c) be efficient in execution,

(d) never let any “balls drop”— high-impact organizations are depending on BERI to execute projects for them, so it is important that candidates for this role are highly organized.

Click here to apply.

Operations Manager

As Operations Manager at BERI, you’ll be responsible for (a) ensuring the reliable and efficient functioning of finance and internal operations, while (b) maintaining a high degree of inspectability to BERI’s funders, Board of Directors, auditors, accountants, legal counsel, and other interested parties.

BERI’s financial processes touch all of our programs, and it is vital to our operations that such processes are absolutely reliable. We seek someone with the ability to create streamlined and robust common processes that will act as a catalyst to all of BERI’s current programs.

We also want to be sure that our processes are highly transparent. It should be easy for BERI’s various stakeholders to inspect our operations and verify that we are following responsible procedures.

We encourage applicants to apply for this role, including those with more or less experience than described here. We are a small organization, and therefore we’re able to customize titles, responsibilities, and expectations according to the candidate we select. If you’re unsure whether you’re a fit, we encourage you to apply.


BERI strives to become the type of organization that gets things right on the first try. We believe an excellent Operations Manager will need to be able to:

  • Never drop balls. This requires excellent communication and follow-up. Given the importance of our internal processes to the rest of our programs, this trait is a must-have.
  • Stay organized and maintain prioritization despite high volumes of information.
  • Communicate clearly with BERI staff and external stakeholders.
  • Use QuickBooks Online and other software such as Expensify and Gusto. You do not need to be thoroughly acquainted with all of these systems, but we will favor applicants with more familiarity with finances and operations.
  • Develop thorough models of how BERI relates to other groups (the IRS, the government, the public, etc.) and apply those models to improve our processes. This will likely involve legal research, as you diligently examine the ethical, legal, and financial implications of BERI’s activities before taking action.
  • Comfortably use quantitative reasoning and spreadsheets in your workflow.
  • Notice and flag uncertainty, work to resolve confusion, and generally maintain epistemic humility.
  • Answer the telephone during most business hours.

This role will require substantial initial onboarding to BERI’s current systems. We want someone willing to “dive in” to the details, but who is nonetheless able to maintain big-picture prioritization of tasks.

Other desirable experience could include:

  • Experience working in finance or operations
  • Co-founding companies or non-profits
  • A degree in a STEM field; a JD, MD, MBA, MPA, or MFE; or a PhD in another discipline.
  • Experience engaging with legal considerations or developing new programs.
  • Experience managing budgets ($1MM+)
  • Programming ability, and an excitement to employ it to automate ongoing operations challenges.

Click here to apply.

For more details, please visit here!


Machine Intelligence Research Institute

MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. Our strategic focus is our technical research agenda on “superintelligence alignment,” composed of numerous subproblems in AI, logic, decision theory, and other fields.

Our technical research program currently employs four full-time research fellows, fosters collaboration with our research associates and others, runs several research workshops each year, and funds independently-organized MIRIx workshops around the world.

Research Fellow

We’re seeking multiple research fellows who can work with our other research fellows to solve open problems related to superintelligence alignment, and prepare those results for publication. For those with some graduate study or a Ph.D. in a relevant field, the salary starts at $65,000 to $75,000 per year, depending on experience. For more senior researchers, the salary may be substantially higher, depending on experience. All full-time employees are covered by our company health insurance plan. Visa assistance is available if needed.

Ours is a young field. Our current research agenda includes work on tiling agents, logical uncertainty, decision theory, corrigibility, and value learning, but those subtopics do not exhaust the field. Other research topics will be seriously considered if you can make the case for their tractability and their relevance to the design of self-modifying systems which stably pursue humane values.

This is not a limited-term position. The ideal candidate has a career interest in these research questions and aims to develop into a senior research fellow at MIRI, or aims to continue these avenues of research at another institution after completing substantial work at MIRI.

Some properties you should have

    • Published research in computer science, logic, or mathematics.
    • Enough background in the relevant subjects (computer science, logic, etc.) to understand MIRI’s technical publications.
    • A proactive research attitude, and an ability to generate productive new research ideas.

A formal degree in mathematics or computer science is not required, but is recommended.

For more details, please visit here!

Software Engineer

The Machine Intelligence Research Institute is looking for highly capable software engineers to directly contribute to our work on the AI alignment problem, with a focus on projects related to machine learning. We’re seeking engineers with extremely strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work. In this role you will work closely with our research team to: create and run novel coding experiments and projects; build development infrastructure; and rapidly prototype, implement, and test AI alignment ideas related to machine learning.

Some qualities of the ideal candidate:

    • Comfortable programming in different languages and frameworks.
    • Has mastery in at least one technically demanding area.
    • Machine learning experience is not a requirement, though it is a plus.
    • Able to work with mathematicians on turning mathematical concepts into elegant code in a variety of environments and languages.
    • Able to work independently with minimal supervision, and in team/group settings.
    • Highly familiar with basic ideas related to AI alignment.
    • Residence in (or willingness to move to) the Bay Area. This job requires working directly with our research team, and won’t work as a remote position.
    • Enthusiasm about the prospect of working at MIRI and helping advance the field of AI alignment research.

Our hiring process tends to involve a lot of sample tasks and probationary hires, so we encourage you to apply sooner rather than later.

Click here to apply.

For questions or comments, email

ML Living Library

The Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a “living library” of new results.

ML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our AI alignment research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.

This is a new position for a kind of work that isn’t standard. Although we hope to find someone who can walk in off the street and perform well, we’re also interested in candidates who think they might take three months of training to meet the requirements.

Examples of the kinds of work you’ll do:

    • Read through archives and journals to get a sense of literally every significant development in the field, past and present.
    • Track general trends in the ML space—e.g., “Wow, there sure is a lot of progress being made on Dota 2!”—and let us know about them.
    • Help an engineer figure out why their code isn’t working—e.g., “Oh, you forgot the pooling layer in your convolutional neural network.”
    • Answer/research MIRI staff questions about ML techniques or the history of the field.
    • Share important developments proactively; researchers who haven’t read the same papers as you often won’t know the right questions to ask unprompted!
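To give a flavor of the debugging example above (our illustration, not MIRI’s): a max-pooling layer simply downsamples a feature map by taking block-wise maxima, so forgetting it leaves the spatial dimensions too large for the layers that follow. A minimal sketch in plain Python:

```python
# Toy 2x2 max-pooling step -- the layer the hypothetical engineer forgot.
# Pure Python for illustration; real code would use an ML framework.
def max_pool_2x2(feature_map):
    """Downsample a 2D grid by taking the max of each 2x2 block."""
    h, w = len(feature_map), len(feature_map[0])
    return [
        [
            max(feature_map[r][c],     feature_map[r][c + 1],
                feature_map[r + 1][c], feature_map[r + 1][c + 1])
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

pooled = max_pool_2x2([
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 0, 5, 6],
    [1, 2, 7, 8],
])  # a 4x4 map becomes 2x2: [[4, 2], [2, 8]]
```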

If interested, click here to apply. For questions or comments, email Matt Graves.

Type Theorist

The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.
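For readers unfamiliar with dependent types, here is a toy sketch (illustrative only, not part of MIRI’s project): in a language like Lean, types can mention values, so the type checker itself verifies claims such as “appending vectors adds their lengths.”

```lean
-- Hypothetical example: length-indexed vectors in Lean 4.
-- The index `n` lives in the type, so `append` cannot typecheck
-- unless the lengths really do add up.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

def append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (append xs ys)
```

The self-referential step the posting describes goes further: implementing such a language inside itself, so that a prover can reason about provers with deductive capabilities similar to its own.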

Click here to apply.

Centre for Effective Altruism

The Centre for Effective Altruism helps to grow and maintain the effective altruism movement. Our mission is to

    • create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and
    • make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.

No current job postings for CEA, but you can still get involved!

Future of Humanity Institute

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see

AI Safety and Machine Learning Internship Program

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Learning the Preferences of Ignorant, Inconsistent Agents; Safe Reinforcement Learning via Human Intervention; Deep RL from Human Preferences; and The Building Blocks of Interpretability. Past interns have collaborated with FHI researchers on a range of publications.

Applicants should have a background in machine learning or computer science, or in a related field (statistics, mathematics, physics, cognitive science). Previous research experience in machine learning or computer science is desirable but not required.

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

Internships are for 2.5 months or longer. We are now accepting applications for internships starting in or after September 2018 on a rolling basis. Interns are usually based in Oxford but remote internships are sometimes possible. (As per University guidelines, candidates must be fluent in English.)

To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship and for permission to share your application materials with partner organizations. Please direct questions about the application process to

For more details, please visit

Leverhulme Centre for the Future of Intelligence

The Leverhulme Centre for the Future of Intelligence (CFI) is a new, highly interdisciplinary research centre, addressing the challenges and opportunities of future development of artificial intelligence (AI), in both the short and long term. Funded by the Leverhulme Trust for 10 years, it is based in Cambridge, with partners in Oxford, Imperial College, and UC Berkeley. The Centre will have close links with industry partners in the AI field, and with policymakers, as well as with many academic disciplines. It will also work closely with a wide international network of researchers and research institutes.

No current job postings for CFI, but you can still get involved!

Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks threatening human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.

An existential risk is one that threatens the existence of our entire species.  The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between philosopher Huw Price, cosmologist Martin Rees, and software entrepreneur Jaan Tallinn — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, and to develop protocols for the investigation and mitigation of technology-driven existential risks.

Research Associate in Paradigms of Artificial General Intelligence and Their Associated Risk

The Centre for the Study of Existential Risk (CSER) invites applications for a Post-Doctoral Research Associate to work on safety challenges associated with increasingly general artificial intelligence systems.

Research efforts are being devoted globally to developing artificial intelligence systems with greater generality: an ability to function effectively in a wider range of environments, and to solve a broader range of tasks. Looking ahead, there are likely to be areas of scientific and intellectual progress that will require the types of planning, abstract reasoning, and meaningful understanding of the world that we associate with general intelligence in humans and animals. A key question is whether systems with a greater degree of generality may have different risks and unknowns in comparison to the more specialised, constrained systems we are used to.

The Associate will contribute to and lead technical research on topics including: use of resources, performance on tasks requiring general intelligence, and rates of progress in artificial intelligence. The research will link to the growing body of work on different aspects of AI safety, with the aim of better understanding the links between the capability, generality and safety of AI systems.

As well as producing targeted research outputs within these areas, the Research Associate will collaborate on project organisation, and will build collaborations with world-leading partners in academia and industry, building on existing connections between CSER, the Leverhulme Centre for the Future of Intelligence and research groups at Cambridge, Oxford, Imperial, OpenAI, the Partnership on AI and others. This is an exciting opportunity for a talented researcher to engage in a cutting-edge research programme and to develop their own lines of enquiry.

DEADLINE: 26 August 2019

Interviews are planned for the week commencing 23 September 2019.

The full job description can be found here.

If you have any questions about this vacancy please contact

Research Associate in Responsible Innovation and Extreme Technological Risk (Fixed Term)

CSER invites applications for a postdoctoral Research Associate in Responsible Innovation and Extreme Technological Risk. This project asks how risk-awareness and societal responsibility can be encouraged in the development of technologies with great transformative potential without discouraging innovation.

Working closely with CSER’s industry and policy partners, the project draws insights from discussions about responsible innovation across academia, industry, policy, regulation and civil society to address the challenges posed by Extreme Technological Risk (ETR). Examples of relevant questions include: How should ETR be discussed in the public sphere while avoiding sensationalism and misunderstanding? What can be learned from historical examples of technology governance and culture-development? What are the roles of different forms of regulation in the development of transformative technologies with risk potential?

CSER’s approach to studying ETR emphasises interdisciplinarity and engagement with policy and technology communities; this is reflected in the profile of this role. We are open to applications from any disciplinary and/or professional background relevant to Responsible Innovation. These could include: science and technology studies, geography, sociology, philosophy of science and relevant technological fields (e.g. AI, biotechnology or geo-engineering).

The Research Associate will be expected to contribute to the production of individual and collaborative research relevant to the focus area of Responsible Innovation and Extreme Technological Risk, and to participate in CSER’s broader scholarly and outreach activities. The post-holder will also be expected to develop connections with scientific and technological communities to gain a deeper understanding of technological progress, future trajectories, and cultures of practice, to inform their work. This might be achieved through periods embedded in laboratories and/or research and development facilities.

DEADLINE: 26 August 2019

Interviews are planned for the week commencing 9 September 2019.

The full job description can be found here.

If you have any questions about this vacancy or the application process, please contact Ruth Farley, HR Administrator on

Research Associate in Global Population, Sustainability and the Environment (Fixed Term)

CSER invites applications for a postdoctoral Research Associate in the area of global population, sustainability and the environment. This is a fixed term post with funding available until 30 September 2022 in the first instance.

Over the next few years, CSER will lead research and outreach to develop a deeper understanding of the impacts of overpopulation on the global environment, and work towards establishing a new consensus that these impacts must be included in global policy. CSER’s approach to the study of extreme and existential risks emphasises interdisciplinarity and engagement with policy and technology communities.

CSER’s work on environmental risks focuses on establishing new approaches to assess and manage trends that could lead to globally catastrophic impacts, human extinction and civilizational collapse. There is growing awareness of the potentially catastrophic risks associated with global scale environmental changes, including biodiversity loss, climate change, ocean eutrophication and resource scarcity. The empirical challenges in assessing these risks are complicated by their global scale, complexity and unprecedented nature.

Many of these pressing issues relate to human population dynamics. The Sustainable Development Goals are reticent about the imbalance between a rising population and a diminishing biosphere and its connection to humanity’s unsustainable demands, and yet it is inconceivable that these goals can be met without addressing this subject.

While many of these trends are widely studied and well understood, we are seeking a researcher with the curiosity and passion to work on hard problems and extreme risks emerging from the intersection of environmental and population changes. It is particularly important to understand appropriate ways of managing such risks within the context of wider technological developments that could both contribute to human welfare and flourishing and help to manage these risks.

Supported by CSER’s research team and expert advisory board, the Research Associate will produce independent and collaborative research outputs targeted at academic, government, industry and other audiences.

DEADLINE: 26 August 2019

Interviews are planned for the week commencing 9 September 2019.

The full job description can be found here.

If you have any questions about this vacancy or the application process, please contact Ruth Farley, HR Administrator on

For more information, please visit here!

The Open Philanthropy Project

The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give.

Research Fellow

The Open Philanthropy Project is aiming to hire one or more Research Fellows (title flexible) to help maximize the impact of our giving. Our staff of 35 currently gives away over $150 million per year with the aim of doing as much good as we can per dollar, in causes like criminal justice reform, biosecurity, and transformative basic science. Over the next decade, we plan to increase our giving by several times while continuing to raise our bar for impact. To do so, we need to assess our impact in our initial cause areas, figure out how to spend more effectively within them, and explore whether there are more promising new areas we should be expanding into.

Research Fellows will investigate key empirical questions, prioritize across disparate areas, and help pick grantees. In contrast to our recent Analyst hires, who focus primarily on long-term causes, Fellows will focus on causes in policy, scientific research, and global development. Over time, successful Fellows may become managers, grantmakers, or leaders of organizations that we can then fund. Research Fellows have the rare opportunity to study some of the world’s most pressing problems and put substantial resources behind their answers.

Other information:

  • This is a full-time, permanent position based in San Francisco, but we’re open to remote work for the right candidate.
  • We are committed to fostering a culture of inclusion and strongly encourage people with diverse backgrounds and experiences to apply.
  • We’re open to many levels of seniority — from a couple years of experience to a recent PhD to decades of research excellence — and we pay competitively.
  • We offer a comprehensive benefits package including full health, dental, vision and life insurance, an unconditional 401(k) grant of up to 15% of your salary, flexible work hours and location, ergonomic equipment and more.
  • We are happy to sponsor applicants who lack U.S. work authorization, but we can’t guarantee visa approval.
  • Our website has more about who we are, what we’re about, and what we fund.

Click here to apply.

Communications Associate

We are aiming to hire a Communications Associate to help us communicate about our work. Our staff of 35 currently gives away over $150 million per year with the aim of doing as much good as we can per dollar, in causes like criminal justice reform, biosecurity, and potential risks from advanced artificial intelligence. Over the next decade, we plan to increase our giving by several times while continuing to raise our bar for impact.

We’re looking for a quick and clear writer with great judgment and attention to detail to support our communications work. The Communications Associate will:
  • Draft grant pages for the Open Philanthropy website based on internal materials;
  • Draft blog posts based on technical conversations with senior leadership and program staff;
  • Review newsletters and other external communications for readability, accuracy, and adherence to Open Phil’s voice and style;
  • Maintain social media guidelines and standards for Open Phil staff;
  • Support media engagement and track media mentions relevant to Open Phil; and
  • Lead other potential future projects including coordination of workshops and cause convenings, support for donor relations, design of outreach materials, and other work based on your interests and skills.
You might be a good fit if you:
  • Take pride in writing clean and clear prose;
  • Are a methodical and meticulous editor;
  • Are sensitive to nuances in tone and communication style;
  • Are able to write in a specific “voice” if needed;
  • Are excited about Open Phil’s work;
  • Are comfortable with scientific principles, data and basic statistics, and quantitative reasoning; and
  • Have an interest in learning more about media strategy and/or public relations.

Click here to apply.

Global Catastrophic Risk Institute

The Global Catastrophic Risk Institute (GCRI) is a nonprofit think tank specializing in global catastrophic risk (GCR). GCRI works with researchers from many academic disciplines and professionals from many sectors. GCRI leads research, education, and professional networking on GCR. GCRI research aims to identify and assess the most effective ways of reducing the risk of global catastrophe, as well as the issues raised by GCR. GCRI education aims to raise awareness and understanding of global catastrophic risk among students, professionals, and, most of all, the general public.

No current job postings for GCRI, but you can still get involved!

80,000 Hours

80,000 Hours is an Oxford, UK-based organization that conducts research on careers with positive social impact and provides career advice. It provides this advice online, through one-on-one advice sessions, and through a community of like-minded individuals.

We’re not currently focused on hiring, though we’re always interested in hearing from people dedicated to our mission who’d like to join the team full time. If you’d like to express interest in joining, fill out this short form.

Unfortunately we don’t offer internships or part-time volunteer positions.