Current Job Postings From FLI and Our Partner Organizations:

BERI

BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Its main strategy is to identify technologies that may pose significant civilization-scale risks, and to promote and provide support for research and other activities aimed at reducing those risks.

No current job postings for BERI, but you can still get involved!

MIRI

MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. Our strategic focus is our technical research agenda on “superintelligence alignment,” composed of numerous subproblems in AI, logic, decision theory, and other fields.

Our technical research program currently employs four full-time research fellows, fosters collaboration with our research associates and others, runs several research workshops each year, and funds independently organized MIRIx workshops around the world.

Research Fellow

We’re seeking multiple research fellows to work with our other research fellows on open problems related to superintelligence alignment, and to prepare those results for publication. For candidates with some graduate study or a Ph.D. in a relevant field, salaries start at $65,000–$75,000 per year, depending on experience. For more senior researchers, the salary may be substantially higher. All full-time employees are covered by our company health insurance plan, and visa assistance is available if needed.

Ours is a young field. Our current research agenda includes work on tiling agents, logical uncertainty, decision theory, corrigibility, and value learning, but those subtopics do not exhaust the field. Other research topics will be seriously considered if you can make the case for their tractability and their relevance to the design of self-modifying systems which stably pursue humane values.

This is not a limited-term position. The ideal candidate has a career interest in these research questions and aims to develop into a senior research fellow at MIRI, or aims to continue these avenues of research at another institution after completing substantial work at MIRI.

Some properties you should have:

    • Published research in computer science, logic, or mathematics.
    • Enough background in the relevant subjects (computer science, logic, etc.) to understand MIRI’s technical publications.
    • A proactive research attitude, and an ability to generate productive new research ideas.

A formal degree in mathematics or computer science is not required, but is recommended.

For more details, please visit here!

Software Engineer

The Machine Intelligence Research Institute is looking for highly capable software engineers to directly contribute to our work on the AI alignment problem, with a focus on projects related to machine learning. We’re seeking engineers with extremely strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work. In this role you will work closely with our research team to: create and run novel coding experiments and projects; build development infrastructure; and rapidly prototype, implement, and test AI alignment ideas related to machine learning.

Some qualities of the ideal candidate:

    • Comfortable programming in a variety of languages and frameworks.
    • Mastery of at least one technically demanding area.
    • Machine learning experience (a plus, but not a requirement).
    • Able to work with mathematicians to turn mathematical concepts into elegant code in a variety of environments and languages.
    • Able to work independently with minimal supervision, as well as in team settings.
    • Highly familiar with the basic ideas of AI alignment.
    • Resident in (or willing to move to) the Bay Area; this job requires working directly with our research team and won’t work as a remote position.
    • Enthusiastic about working at MIRI and helping advance the field of AI alignment research.

Our hiring process tends to involve a number of sample tasks and probationary hires, so we encourage you to apply sooner rather than later.

Click here to apply.

For questions or comments, email engineering@intelligence.org.

ML Living Library

The Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a “living library” of new results.

ML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our AI alignment research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.

This is a new position for a kind of work that isn’t standard. Although we hope to find someone who can walk in off the street and perform well, we’re also interested in candidates who think they might take three months of training to meet the requirements.

Examples of the kinds of work you’ll do:

    • Read through archives and journals to get a sense of literally every significant development in the field, past and present.
    • Track general trends in the ML space—e.g., “Wow, there sure is a lot of progress being made on Dota 2!”—and let us know about them.
    • Help an engineer figure out why their code isn’t working—e.g., “Oh, you forgot the pooling layer in your convolutional neural network.”
    • Answer/research MIRI staff questions about ML techniques or the history of the field.
    • Share important developments proactively; researchers who haven’t read the same papers as you often won’t know the right questions to ask unprompted!
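The pooling-layer debugging example above comes down to shape bookkeeping. Here is a toy sketch (all numbers and function names are hypothetical illustrations, not MIRI code) of why a forgotten pooling layer breaks a network: it changes the spatial dimensions that the downstream layers were sized for.

```python
# Illustrative shape bookkeeping for a convolutional network.
def conv2d_out(size, kernel, stride=1, padding=0):
    """Output side length of a square convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

def maxpool_out(size, kernel=2, stride=2):
    """Output side length of a square max-pooling layer."""
    return (size - kernel) // stride + 1

size = 28                                           # e.g. a 28x28 input image
after_conv = conv2d_out(size, kernel=3, padding=1)  # padding=1 preserves the size
after_pool = maxpool_out(after_conv)                # pooling halves it
print(after_conv, after_pool)
```

Leaving out the pooling step keeps the feature maps at their pre-pooling size, so a fully connected layer sized for the pooled dimensions will fail with a shape mismatch.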

If interested, click here to apply. For questions or comments, email Matt Graves (matthew.graves@intelligence.org).

Type Theorist

The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project is to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience with functional programming languages, preferably dependently typed languages such as Agda, Coq, or Lean.
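For readers unfamiliar with dependent types, here is a minimal, purely illustrative sketch in Lean 4 (not part of MIRI’s project): a length-indexed vector, where a fact about the data (its length) lives in the type and is checked by the compiler.

```lean
-- Illustrative only: a length-indexed vector, the standard first
-- example of a dependent type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- `head` only accepts vectors whose type proves they are non-empty,
-- so no runtime emptiness check (and no `nil` case) is needed.
def Vec.head : Vec α (n + 1) → α
  | .cons a _ => a
```

Agda and Coq support the same idiom; in such languages, “the code type-checks” is itself a machine-checked proof, which is the flavor of reasoning this position works with.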

Click here to apply.

Centre for Effective Altruism

The Centre for Effective Altruism helps to grow and maintain the effective altruism movement. Our mission is to

    • create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and
    • make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.

Operations Contractor (Grants Administrator)

The Centre for Effective Altruism (CEA) is looking for an operations contractor to assist with grant administration and operations support.

We’re looking for someone who can quickly get up to speed and own the administration of the grant making process. This will involve communicating with grantees, conducting due diligence, processing payments, monitoring grant performance, and keeping accurate records. To be successful in this role, you will need to be diligent, thorough, and proactive.

The role will also include supporting the operations team in other ways as needed, including a period covering routine tasks for the UK Operations Specialist during his paternity leave. That cover will likely involve overseeing HR, office management, finance, legal, and administrative systems for a defined period.

The contractor will report directly to a member of the CEA Ops Team, and will also spend a good deal of time working with grant evaluators at CEA and EA Funds.

Responsibilities

    • Own and execute grantmaking administration within CEA, which includes:
      • Tracking grants through the grantmaking process
      • Due diligence
      • Compliance checks
      • Communication with grantees
      • Processing payments
      • Following up with grantees to monitor grant performance
    • Ensure that grantmaking within CEA is executed to a high professional standard.
    • Ensure that all grants are compliant with the internal grantmaking policy and process.
    • Ensure that all grants are tracked according to the grantmaking process.
    • Maintain professional communication with all grantees.
    • Support the Operations team with routine/ad hoc tasks.
    • Cover for the UK Operations Specialist during his paternity leave.

Deadline:

Applications for this position must be received no later than Friday, June 7th, 2019, 11:59 pm BST.

For more details, please visit here!

Technical Operations Contractor

CEA is looking for a Technical Operations contractor to assist with building and maintaining essential automations.

Responsibilities

    • Build automations to support the efficient operation of CEA projects
    • Interface with client teams at CEA to discover their needs
    • Document work extensively, to ensure others can maintain and build upon it
    • Be responsive to incoming work requests, and provide clear and accurate estimates of project deliverables and turnaround times
    • Treat confidential data appropriately and maintain high standards of information security

Deadline:

Applications for this position must be received no later than Friday, June 7th, 2019, 1:00 am BST.

For more details, please visit here!

Future of Humanity Institute

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.

AI Safety and Machine Learning Internship Program

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include “Learning the Preferences of Ignorant, Inconsistent Agents,” “Safe Reinforcement Learning via Human Intervention,” “Deep RL from Human Preferences,” and “The Building Blocks of Interpretability.” Past interns have collaborated with FHI researchers on a range of publications.

Applicants should have a background in machine learning or computer science, or in a related field (statistics, mathematics, physics, cognitive science). Previous research experience in machine learning or computer science is desirable but not required.

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

Internships are for 2.5 months or longer. We are now accepting applications for internships starting in or after September 2018 on a rolling basis. Interns are usually based in Oxford but remote internships are sometimes possible. (As per University guidelines, candidates must be fluent in English.)

To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship and for permission to share your application materials with partner organizations. Please direct questions about the application process to ry.duff@gmail.com.

For more details, please visit http://www.fhi.ox.ac.uk/vacancies/

Leverhulme Centre for the Future of Intelligence

The Leverhulme Centre for the Future of Intelligence (CFI) is a new, highly interdisciplinary research centre, addressing the challenges and opportunities of future development of artificial intelligence (AI), in both the short and long term. Funded by the Leverhulme Trust for 10 years, it is based in Cambridge, with partners in Oxford, Imperial College, and UC Berkeley. The Centre will have close links with industry partners in the AI field, and with policymakers, as well as with many academic disciplines. It will also work closely with a wide international network of researchers and research institutes.

No current job postings for CFI, but you can still get involved!

Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks threatening human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.

An existential risk is one that threatens the existence of our entire species. The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between philosopher Huw Price, cosmologist Martin Rees, and software entrepreneur Jaan Tallinn — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, and to develop protocols for the investigation and mitigation of technology-driven existential risks.

Research Assistants in Sustainable Finance

CSER invites applications for two part-time Research Assistants, at 0.2 FTE and 0.4 FTE (7.5 and 15 hours per week, respectively), in the area of sustainable finance. The appointments are fixed-term until 30 June 2020.

The Research Assistants will support Dr Ellen Quigley, and other colleagues associated with the research project on sustainable finance, in investigating the ways in which the financial system can contribute to the transition to a zero-emissions economy. The Research Assistants will undertake basic research activities, including assistance with literature reviews, workshop organisation, and fact-finding, and will contribute to the writing of research reports and publications.

Responsibilities

    • Undertake basic research for the sustainable finance project, e.g. by conducting literature reviews.
    • Assist the lead researcher in development and drafting of reports, academic publications, and other materials.
    • Provide administrative and organisational support for project events.
    • Participate in relevant Centre-wide activities such as team meetings, work-in-progress seminars, and public events. This may include presenting work from the project.
    • Liaise regularly with the lead researcher on research progress and event coordination.
    • Plan own day-to-day research activity within the framework of the project.

How to apply

If you are interested in applying, please click the ‘Apply online’ button on this webpage (or the button below). This will route you to the University’s Web Recruitment System. Our hiring panel can only consider applications made through this system.

Interviews are planned for 25 June 2019.

The full job description can be found here.

If you have any questions about this vacancy you can contact Dr Simon Beard, sjb316@cam.ac.uk; or about the application process, contact jobs@CRASSH.cam.ac.uk.

The Open Philanthropy Project

The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give.

Salesforce Solutions Architect/Senior Administrator

About the Role

You would lead the design, implementation, and overall maintenance of our core grants and operations technology functions in Salesforce.

Core responsibilities would include:
    • Own the development of our nascent grants management system within Salesforce (the “System”) to make it a user-friendly, end-to-end solution for grant processing and information management.
    • Serve as the primary System administrator, conducting ongoing enhancement and maintenance functions, including user account maintenance, workflows, automation, new apps, page layouts, and objects.
    • Lead internal deployment for any major System implementations.
    • Develop and maintain Visualforce pages, Apex classes & triggers, and third-party integrations.
    • Design and conduct regular trainings and feedback sessions to enhance user experience.
    • Discuss the team’s problems and goals, and translate feedback into potential solutions. Lead communication around technology issues and needs across leadership, staff, and vendors.
    • Manage third-party consultants or developers as needed.
    • Continually assess the impact of new Salesforce requirements and releases on all upstream and downstream applications, e.g. by attending relevant conferences such as Technology Affinity Group, Dreamforce, and TrailheaDX.

This is a full-time position based out of San Francisco. (We would be open to short-term, i.e. less than a year, contractor arrangements as well.)

Starting salary: $90,000 – $125,000, commensurate with experience and skills, with an annual 401k grant (unconditional and immediately vested) of $13,500-$18,500, and a competitive benefits package.

For more details, please visit here!

Global Catastrophic Risk Institute

The Global Catastrophic Risk Institute (GCRI) is a nonprofit think tank specializing in global catastrophic risk (GCR). GCRI works with researchers from many academic disciplines and professionals from many sectors. GCRI leads research, education, and professional networking on GCR. GCRI research aims to identify and assess the most effective ways of reducing the risk of global catastrophe, as well as the issues raised by GCR. GCRI education aims to raise awareness and understanding of global catastrophic risk among students, professionals, and, most of all, the general public.

No current job postings for GCRI, but you can still get involved!

80,000 Hours

80,000 Hours is an Oxford, UK-based organization that conducts research on careers with positive social impact and provides career advice. It provides this advice online, through one-on-one advice sessions, and through a community of like-minded individuals.

We’re not currently focused on hiring, though we’re always interested in hearing from people dedicated to our mission who’d like to join the team full time. If you’d like to express interest in joining, fill out this short form.

Unfortunately we don’t offer internships or part-time volunteer positions.