Current Job Postings From FLI and Our Partner Organizations:

BERI

BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Its main strategy is to identify technologies that may pose significant civilization-scale risks, and to promote and provide support for research and other activities aimed at reducing those risks.

BERI is currently primarily seeking full-time employees. If you’re interested in working with us, please submit an application form (linked on each job posting below).

Machine Learning Engineer

Start date: ASAP
Status: Full- or part-time employee, paid hourly
Compensation: $30-$90/hr, depending on experience and output
Location: The San Francisco Bay Area
Reports to: Executive Director

BERI is seeking to hire a machine learning engineer to collaborate with the Center for Human Compatible AI (CHAI) under UC Berkeley professor Stuart Russell. Pending final evaluation from CHAI, successful candidate(s) will be offered a 1-2 year visiting researcher position at UC Berkeley to work with Professor Stuart Russell’s research group (CHAI’s Listing), alongside Research Scientist Andrew Critch. There will also be opportunities to collaborate with CHAI’s co-Principal Investigators at Berkeley (Pieter Abbeel, Anca Dragan, Tania Lombrozo), Cornell (Bart Selman, Joe Halpern), Michigan (Michael Wellman, Satinder Singh), and Princeton (Tom Griffiths), as well as with groups at Cambridge, Oxford, and Imperial College through the Leverhulme Centre for the Future of Intelligence. As global demand for AI safety research increases, we expect that the experience gained from this work will be valued internationally.

To read more about why we are interested in hiring machine learning engineers, see this blog post.

We are especially interested in applicants who can take initiative in finding ways to help out with research at CHAI. This role involves figuring out what would be helpful for the research team and then doing it.

Requirements

    • Solid software engineering skills across multiple languages, ideally including Python and C/C++
    • Experience with machine learning software packages (e.g. TensorFlow, PyTorch)
    • Practical experience building machine learning or AI systems. This could be demonstrated by professional work experience, previous research papers or open-source contributions
    • Strong analytical and problem-solving skills
    • Excellent technical communication skills: the ability to explain complex technical concepts and collaborate effectively with fellow engineers and researchers

Desired Qualifications

    • Familiar with core CS concepts such as common data structures and algorithms
    • Comfortable conducting design and code reviews
    • Prior research or research engineering experience
    • Written work on ML or AI, including technical blog posts or publications in major conferences or journals
    • Distributed systems and basic DevOps experience to manage in-house and cloud servers for experiments (e.g. Terraform/Chef, Kubernetes/Mesos, Docker)
    • BS/BA, MS, or ideally PhD in computer science, data mining, machine learning, information retrieval, recommendation systems, natural language processing, statistics, math, engineering, operations research, or other quantitative discipline

Benefits Include

    • Time-off (paid vacation, holidays, sick leave, bereavement leave, & parental leave)
    • Generous professional development policy
    • Health insurance
    • Semi-flexible policies on work hours, location, and unpaid vacation

BERI is proud to be an Equal Employment Opportunity employer. Our mission to improve human civilization’s long-term prospects for survival and flourishing is in service of all of humanity, and is incompatible with unfair discrimination practices that would pit factions of humanity against one another. We do not discriminate against qualified employees or applicants based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, sexual preference, marital status, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, or any other characteristic protected by federal or state law or local ordinance. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.

(Click here to apply)

ALLFED

How would we feed everyone if the sun were blocked or if there were a significant disruption to industry? ALLFED is working on planning, preparedness, and research into practical food solutions so that in the event of a global catastrophe we can respond quickly, save lives, and reduce the risk to civilization.

Food storage might seem like the obvious solution, but it is very expensive, so we are researching alternative food sources that can be scaled up quickly and that don’t require the sun. Ideally these catastrophes would not happen, and we support efforts to avoid them. Our research focuses on global catastrophic events rather than smaller-scale disasters, but some of our research may have implications for how we deal with current disasters. We are also focused on events in which people survive, rather than on human extinction events.

No current job postings for ALLFED, but you can still get involved!

MIRI

MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. Our strategic focus is our technical research agenda on “superintelligence alignment,” composed of numerous subproblems in AI, logic, decision theory, and other fields.

Our technical research program currently employs four full-time research fellows, fosters collaboration with our research associates and others, runs several research workshops each year, and funds independently organized MIRIx workshops around the world.

Research Fellow

We’re seeking multiple research fellows who can work with our other research fellows to solve open problems related to superintelligence alignment, and prepare those results for publication. For those with some graduate study or a Ph.D. in a relevant field, the salary starts at $65,000 to $75,000 per year, depending on experience. For more senior researchers, the salary may be substantially higher, depending on experience. All full-time employees are covered by our company health insurance plan. Visa assistance is available if needed.

Ours is a young field. Our current research agenda includes work on tiling agents, logical uncertainty, decision theory, corrigibility, and value learning, but those subtopics do not exhaust the field. Other research topics will be seriously considered if you can make the case for their tractability and their relevance to the design of self-modifying systems which stably pursue humane values.

This is not a limited-term position. The ideal candidate has a career interest in these research questions and aims to develop into a senior research fellow at MIRI, or aims to continue these avenues of research at another institution after completing substantial work at MIRI.

Some properties you should have

    • Published research in computer science, logic, or mathematics.
    • Enough background in the relevant subjects (computer science, logic, etc.) to understand MIRI’s technical publications.
    • A proactive research attitude, and an ability to generate productive new research ideas.

A formal degree in mathematics or computer science is not required, but is recommended.

For more details, please visit here!

Software Engineer

The Machine Intelligence Research Institute is looking for highly capable software engineers to directly contribute to our work on the AI alignment problem, with a focus on projects related to machine learning. We’re seeking engineers with extremely strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work. In this role you will work closely with our research team to: create and run novel coding experiments and projects; build development infrastructure; and rapidly prototype, implement, and test AI alignment ideas related to machine learning.

Some qualities of the ideal candidate:

    • Comfortable programming in different languages and frameworks.
    • Has mastery in at least one technically demanding area.
    • Machine learning experience is not a requirement, though it is a plus.
    • Able to work with mathematicians on turning mathematical concepts into elegant code in a variety of environments and languages.
    • Able to work independently with minimal supervision, and in team/group settings.
    • Highly familiar with basic ideas related to AI alignment.
    • Residence in (or willingness to move to) the Bay Area. This job requires working directly with our research team, and won’t work as a remote position.
    • Enthusiasm about the prospect of working at MIRI and helping advance the field of AI alignment research.

Our hiring process tends to involve a lot of sample tasks and probationary hires, so we encourage you to apply sooner rather than later.

Click here to apply.

For questions or comments, email engineering@intelligence.org.

ML Living Library

The Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a “living library” of new results.

ML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our AI alignment research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.

This is a new position for a kind of work that isn’t standard. Although we hope to find someone who can walk in off the street and perform well, we’re also interested in candidates who think they might take three months of training to meet the requirements.

Examples of the kinds of work you’ll do:

    • Read through archives and journals to get a sense of literally every significant development in the field, past and present.
    • Track general trends in the ML space—e.g., “Wow, there sure is a lot of progress being made on Dota 2!”—and let us know about them.
    • Help an engineer figure out why their code isn’t working—e.g., “Oh, you forgot the pooling layer in your convolutional neural network.”
    • Answer/research MIRI staff questions about ML techniques or the history of the field.
    • Share important developments proactively; researchers who haven’t read the same papers as you often won’t know the right questions to ask unprompted!

If interested, click here to apply. For questions or comments, email Matt Graves (matthew.graves@intelligence.org).

Type Theorist

The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.
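
To give a concrete, if deliberately modest, flavor of this kind of work, here is a minimal sketch of a related idea: embedding a small strongly typed language inside Lean 4, one of the dependently typed languages mentioned above (the choice of Lean 4 here is our own, for illustration). This example is not part of MIRI’s project; the project aims at a language expressive enough to represent and reason about itself, whereas the sketch below only shows how dependent types let the host language guarantee that embedded terms are well typed and that evaluation respects their types.

    -- Illustrative sketch only (not MIRI's project code): a tiny typed
    -- expression language deeply embedded in Lean 4. Object-language types
    -- index the terms, so only well-typed terms can be constructed, and the
    -- evaluator's return type is computed from the term's type.

    inductive Ty where
      | bool : Ty
      | nat  : Ty

    -- Interpret object-language types as Lean types.
    @[reducible] def Ty.denote : Ty → Type
      | .bool => Bool
      | .nat  => Nat

    -- Terms carry their object-language type as an index,
    -- so ill-typed terms are unrepresentable.
    inductive Term : Ty → Type where
      | natLit  : Nat → Term Ty.nat
      | boolLit : Bool → Term Ty.bool
      | add     : Term Ty.nat → Term Ty.nat → Term Ty.nat
      | isZero  : Term Ty.nat → Term Ty.bool

    -- The evaluator's result type depends on the term's type index.
    def Term.eval {t : Ty} : Term t → t.denote
      | .natLit n  => n
      | .boolLit b => b
      | .add a b   => a.eval + b.eval
      | .isZero a  => a.eval == 0

    #eval (Term.add (.natLit 2) (.natLit 3)).eval  -- 5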

Click here to apply.

Centre for Effective Altruism

The Centre for Effective Altruism helps to grow and maintain the effective altruism movement. Our mission is to

    • create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and
    • make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.

No current job postings for CEA, but you can still get involved!

Future of Humanity Institute

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.

AI Safety and Machine Learning Internship Program

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Learning the Preferences of Ignorant, Inconsistent Agents; Safe Reinforcement Learning via Human Intervention; Deep RL from Human Preferences; and The Building Blocks of Interpretability. Past interns have collaborated with FHI researchers on a range of publications.

Applicants should have a background in machine learning or computer science, or in a related field (statistics, mathematics, physics, cognitive science). Previous research experience in machine learning or computer science is desirable but not required.

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

Internships are for 2.5 months or longer. We are now accepting applications for internships starting in or after September 2018 on a rolling basis. Interns are usually based in Oxford but remote internships are sometimes possible. (As per University guidelines, candidates must be fluent in English.)

To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship and for permission to share your application materials with partner organizations. Please direct questions about the application process to ry.duff@gmail.com.

For more details, please visit http://www.fhi.ox.ac.uk/vacancies/

Leverhulme Centre for the Future of Intelligence

The Leverhulme Centre for the Future of Intelligence (CFI) is a new, highly interdisciplinary research centre, addressing the challenges and opportunities of future development of artificial intelligence (AI), in both the short and long term. Funded by the Leverhulme Trust for 10 years, it is based in Cambridge, with partners in Oxford, Imperial College, and UC Berkeley. The Centre will have close links with industry partners in the AI field, and with policymakers, as well as with many academic disciplines. It will also work closely with a wide international network of researchers and research institutes.

No current job postings for CFI, but you can still get involved!

Center for the Study of Existential Risk

The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks threatening human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.

An existential risk is one that threatens the existence of our entire species.  The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between philosopher Huw Price, cosmologist Martin Rees, and software entrepreneur Jaan Tallinn — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, and to develop protocols for the investigation and mitigation of technology-driven existential risks.

Academic Programme Manager

The Centre for the Study of Existential Risk (CSER) invites applications for an Academic Programme Manager.

The Academic Programme Manager will play a central role in developing and supporting CSER’s research team, projects and activities. We seek an ambitious candidate with initiative and a broad intellectual range for a senior research associate level role combining academic and management responsibilities. They will work with CSER’s Directors and research team to lead a subset of CSER’s research projects and develop our overall profile, and to build and maintain our collaborative networks. This is a unique opportunity to play a guiding role in a world-class research centre as it enters an exciting period of growth.

Candidates will have a high level of education, with substantial experience at a postdoctoral level, or equivalent experience within a relevant setting (e.g. policy, industry, think tank or NGO).

The post-holder will be expected to:

    • Plan and coordinate activities in CSER’s research programmes.
    • Contribute to strategic planning and leadership for CSER and manage the use of research resources and budgets, alongside CSER’s project administrator and CRASSH.
    • Organise and participate in meetings, workshops and conferences.
    • Act as an ambassador for the Centre’s research, both within Cambridge and externally, engaging with academics, the media, policy-makers, or other external audiences.
    • Conduct independent and collaborative research within CSER’s broad focus areas, to be published as papers in leading academic journals.
    • Actively seek additional funding for the activities of the Centre, including identifying opportunities, and developing proposals.

The post-holder will be encouraged to set their own priorities within the role, as well as contributing to Centre strategy, and will work with a high degree of independence while supporting the activities and development of CSER’s postdoctoral researchers.

Fixed-term: The funds for this post are available for 3 years from the start date in the first instance.

The closing date for applications is 14 October 2018. Interviews are planned for 25 October 2018.

For full advert and links to further information and the application system:
http://www.jobs.cam.ac.uk/job/18662/

Research Project Administrator (Fixed Term)

Applications are invited for a full-time Research Project Administrator to support a number of research grants at CSER (the Centre for the Study of Existential Risk). This varied and responsible role offers an exciting opportunity to be part of a rapidly expanding research centre. The project administrator will have responsibility for the smooth running of the Centre’s administrative and financial processes, and will manage an ambitious programme of events. He/she will provide administrative, clerical, event organisation, publicity and budget planning support as required to CSER’s Executive Director and staff through all stages from grant application to final reporting.

The successful applicant will be educated to degree level and have relevant experience in research project administration in a science, social science or humanities field, as well as experience of organising events. He/she will be numerate and have good IT skills, including experience of (or the ability to learn quickly) a web Content Management System, and familiarity with spreadsheets and budgeting. He/she will be able to demonstrate excellent communication skills and work equally well on their own initiative and as part of a team.

The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre dedicated to the study and mitigation of risks that could lead to human extinction. CSER is hosted within Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). CSER is based at 16 Mill Lane, where the role is mostly located, though attendance at the weekly staff meeting at CRASSH in the Alison Richard Building is required.

Fixed-term: The funds for this post are available until 31 August 2020 in the first instance.

To apply online for this vacancy, please use the University’s Web Recruitment System via the link below, where you will need to register an account (if you have not already) and log in before completing the online application form.

The closing date for applications is 14 October 2018. Interviews are planned for the week commencing 22 October 2018. If you have any questions about this vacancy or the application process, please contact jobs@crassh.cam.ac.uk.

Please quote reference VM16776 on your application and in any correspondence about this vacancy. The University values diversity and is committed to equality of opportunity. The University has a responsibility to ensure that all employees are eligible to live and work in the UK.

For full advert and links to further information and the application system:

http://www.jobs.cam.ac.uk/job/18837/

The Open Philanthropy Project

The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give.

No current job postings for Open Philanthropy Project, but you can still get involved!

Global Catastrophic Risk Institute

The Global Catastrophic Risk Institute (GCRI) is a nonprofit think tank specializing in global catastrophic risk (GCR). GCRI works with researchers from many academic disciplines and professionals from many sectors, and leads research, education, and professional networking on GCR. GCRI research aims to identify and assess the most effective ways of reducing the risk of global catastrophe, as well as the issues raised by GCR. GCRI education aims to raise awareness and understanding of global catastrophic risk among students, professionals, and, most of all, the general public.

No current job postings for GCRI, but you can still get involved!

80,000 Hours

80,000 Hours is an Oxford, UK-based organization that conducts research on careers with positive social impact and provides career advice. It provides this advice online, through one-on-one advice sessions, and through a community of like-minded individuals.

We’re not currently focused on hiring, though we’re always interested in hearing from people dedicated to our mission who’d like to join the team full time. If you’d like to express interest in joining, fill out this short form.

Unfortunately we don’t offer internships or part-time volunteer positions.