Current Job Postings From FLI and Our Partner Organizations:

BERI

BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Its main strategy is to identify technologies that may pose significant civilization-scale risks, and to promote and provide support for research and other activities aimed at reducing those risks.

BERI is currently primarily seeking full-time employees. If you’re interested in working with us, please submit an application form (linked on each job posting below).

Program Investigator and Manager

BERI is seeking to hire a full-time Program Investigator and Manager (PI/M). In short, we need someone with a high degree of competence in evaluating claims and arguments about existential risk and how to reduce it, who also wants to help whenever possible by developing and managing well-reasoned project ideas.

Status: Full-time employee, salaried
Start date: As soon as a promising candidate is identified
Compensation: $60,000-$120,000, depending on experience
Typical hours: 40 per week, primarily during standard business hours.
Work Location: Berkeley/Oakland
Reports to: Executive Director

Qualifications needed: A demonstrated track record of serious interest in existential risk reduction, along with education and professional experience demonstrating a high degree of individual good judgement and the ability to reason about science, technology, engineering, and mathematics (STEM) development, such as:

    • a PhD in a STEM-related field, with some management experience;
    • an MD or JD, with experience in a STEM-related profession, and some management experience; or
    • professional investment experience, with demonstrated competence in evaluating STEM research, and some management experience.

Click here to apply!

Machine Learning Engineer

Start date: ASAP
Status: Full- or part-time employee, paid hourly
Compensation: $30-$90/hr, depending on experience and output
Location: The San Francisco Bay Area
Reports to: Executive Director

Click here to apply!

Machine Learning Team Lead

BERI is seeking to hire a full-time Machine Learning Team Lead to grow and manage a team of machine learning engineers to work in collaboration with researchers at UC Berkeley, Stanford, and other ML research groups in the Bay Area.

Qualifications needed: A demonstrated track record of

    • research and publication in machine learning;
    • managing a team of at least 3 machine learning engineers with impressive collective output;
    • seeking out and initiating novel collaborations; and
    • a serious interest in existential risk reduction.

Status: Full-time employee, salaried
Start date: As soon as a promising candidate is identified
Compensation: $100k-$150k, negotiable
Typical hours: 40 per week, primarily during standard business hours.
Work Location: Berkeley/Oakland
Reports to: Executive Director

Click here to apply!

ALLFED

How would we feed everyone if the sun were blocked, or if there were a significant disruption to industry? ALLFED works on planning, preparedness, and research into practical food solutions so that in the event of a global catastrophe we can respond quickly, save lives, and reduce the risk to civilization.

Food storage might seem like the obvious solution, but it is very expensive, so we are researching alternative food sources that can be scaled up quickly and that don’t require the sun. Ideally these catastrophes would not happen, and we support efforts to avoid them. Our research focuses on global catastrophic events rather than smaller-scale disasters, but some of it may have implications for how we deal with current disasters. We also focus on events in which people survive, rather than on human extinction events.

No current job postings for ALLFED, but you can still get involved!

MIRI

MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. Our strategic focus is our technical research agenda on “superintelligence alignment,” composed of numerous subproblems in AI, logic, decision theory, and other fields.

Our technical research program currently employs four full-time research fellows, fosters collaboration with our research associates and others, runs several research workshops each year, and funds independently-organized MIRIx workshops around the world.

Research Fellow

We’re seeking multiple research fellows who can work with our other research fellows to solve open problems related to superintelligence alignment, and prepare those results for publication. For those with some graduate study or a Ph.D. in a relevant field, the salary starts at $65,000 to $75,000 per year, depending on experience. For more senior researchers, the salary may be substantially higher, depending on experience. All full-time employees are covered by our company health insurance plan. Visa assistance is available if needed.

Ours is a young field. Our current research agenda includes work on tiling agents, logical uncertainty, decision theory, corrigibility, and value learning, but those subtopics do not exhaust the field. Other research topics will be seriously considered if you can make the case for their tractability and their relevance to the design of self-modifying systems which stably pursue humane values.

This is not a limited-term position. The ideal candidate has a career interest in these research questions and aims to develop into a senior research fellow at MIRI, or aims to continue these avenues of research at another institution after completing substantial work at MIRI.

Some properties you should have

    • Published research in computer science, logic, or mathematics.
    • Enough background in the relevant subjects (computer science, logic, etc.) to understand MIRI’s technical publications.
    • A proactive research attitude, and an ability to generate productive new research ideas.

A formal degree in mathematics or computer science is not required, but is recommended.

For more details, please visit here!

Software Engineer

The Machine Intelligence Research Institute is looking for highly capable software engineers to directly contribute to our work on the AI alignment problem, with a focus on projects related to machine learning. We’re seeking engineers with extremely strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work. In this role you will work closely with our research team to: create and run novel coding experiments and projects; build development infrastructure; and rapidly prototype, implement, and test AI alignment ideas related to machine learning.

Some qualities of the ideal candidate:

    • Comfortable programming in different languages and frameworks.
    • Mastery of at least one technically demanding area.
    • Machine learning experience (not required, but a plus).
    • Able to work with mathematicians on turning mathematical concepts into elegant code in a variety of environments and languages.
    • Able to work independently with minimal supervision, and in team/group settings.
    • Highly familiar with basic ideas related to AI alignment.
    • Resident in (or willing to move to) the Bay Area. This job requires working directly with our research team, and won’t work as a remote position.
    • Enthusiastic about the prospect of working at MIRI and helping advance the field of AI alignment research.

Our hiring process tends to involve a lot of sample tasks and probationary hires, so we encourage you to apply sooner rather than later.

Click here to apply.

For questions or comments, email engineering@intelligence.org.

ML Living Library

The Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a “living library” of new results.

ML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our AI alignment research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.

This is a new position for a kind of work that isn’t standard. Although we hope to find someone who can walk in off the street and perform well, we’re also interested in candidates who think they might take three months of training to meet the requirements.

Examples of the kinds of work you’ll do:

    • Read through archives and journals to get a sense of literally every significant development in the field, past and present.
    • Track general trends in the ML space—e.g., “Wow, there sure is a lot of progress being made on Dota 2!”—and let us know about them.
    • Help an engineer figure out why their code isn’t working—e.g., “Oh, you forgot the pooling layer in your convolutional neural network.”
    • Answer/research MIRI staff questions about ML techniques or the history of the field.
    • Share important developments proactively; researchers who haven’t read the same papers as you often won’t know the right questions to ask unprompted!

If interested, click here to apply. For questions or comments, email Matt Graves (matthew.graves@intelligence.org).

Type Theorist

The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project is to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience with functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.
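
For a rough sense of what the first step can look like, here is a minimal, hypothetical Lean 4 sketch (all names here are illustrative assumptions, not MIRI’s actual code): a tiny typed object language is defined inside Lean, with terms indexed by their types so that ill-typed terms cannot even be constructed, together with an evaluator into Lean’s own types.

    -- Hypothetical sketch only: a deeply embedded, strongly typed toy
    -- language inside Lean. All names are illustrative assumptions.
    inductive Ty where
      | bool : Ty
      | nat  : Ty

    -- Terms are indexed by their object-language type, so ill-typed
    -- terms cannot be written down at all.
    inductive Term : Ty → Type where
      | tt  : Term .bool
      | ff  : Term .bool
      | lit : Nat → Term .nat
      | ite : Term .bool → Term t → Term t → Term t

    -- Interpret object-language types as Lean types ...
    def Ty.denote : Ty → Type
      | .bool => Bool
      | .nat  => Nat

    -- ... and evaluate object-language terms into that interpretation.
    def Term.denote : Term t → t.denote
      | .tt        => true
      | .ff        => false
      | .lit n     => n
      | .ite c x y => if c.denote then x.denote else y.denote

    -- Example: evaluates to 1.
    #eval (Term.ite .tt (.lit 1) (.lit 2)).denote

The posting’s goal is more ambitious than this sketch, since the embedded language would ultimately need the deductive power to reason about systems like itself, but the deep-embedding pattern above is a common starting point.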

Click here to apply.

Centre for Effective Altruism

The Centre for Effective Altruism helps to grow and maintain the effective altruism movement. Our mission is to

    • create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and
    • make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.

Current opportunities

EA Grants Evaluator (part-time, contract)

As an EA Grants Evaluator (part-time, contract) at the Centre for Effective Altruism, you will report to Kerry Vaughan and will assist in ensuring the success of the EA Grants program. This will involve reviewing applications and helping to decide which applications we will not evaluate further. It will also involve managing the logistics of the application process to ensure that applicants receive a timely response to their application.

Responsibilities:

    • Performing a first-pass evaluation of all applications and rejecting those that fail to meet the funding objectives of EA Grants.
    • Maintaining the database of all applicants and ensuring that we provide timely updates on the status of all applications.
    • Collecting the information needed to process each grant after the funding decision has been made and ensuring that it is progressing in a timely manner.
    • Creating short write-ups of each grant and the rationale behind it.
    • Collecting information about past grantees to assess the quality of our decision-making process.

Deadline:

Applications for this position must be received no later than Tuesday, April 24th, 2018, 1:00 am BST.

For more details please visit here!

Future of Humanity Institute

The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.

Senior Administrator

FHI is excited to invite applications for a full-time Senior Administrator to work with the Faculty of Philosophy, University of Oxford, with responsibility for overseeing the effective and efficient day-to-day non-academic management and administration of two of the Faculty’s research centres: the Future of Humanity Institute (FHI) and the recently established Global Priorities Institute (GPI).

We are entering a period of rapid expansion and the Senior Administrator will be an integral part of establishing and embedding complex operational procedures. The successful candidate will be responsible for effectively putting into practice our plans, particularly in the areas of finance, personnel and administration. S/he will provide comprehensive support in relation to HR matters in both institutes, will oversee the planning and management of their finances and will provide strategic advice on administrative matters.

This is a challenging role in rapidly expanding areas, requiring a proactive mindset, considerable initiative, intellectual ability and versatility, and the readiness and flexibility to do the work required to best achieve the Institutes’ goals. It also calls for excellent operational and organisational skills and significant experience of financial and personnel management, along with strengths in general management and in written and oral communication. The Senior Administrator will work largely independently, and will need to relate well to both academic and administrative staff at all levels.

Candidates should apply via this link. You will be required to upload a supporting statement and CV as part of your online application.

The closing date for applications is 12.00 noon on 20 April 2018.

AI Safety Research Fellow

FHI is excited to invite applications for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 24 months from the date of appointment.

You will be responsible for conducting technical research in AI Safety. You can find examples of related work from FHI on our website. Your research is likely to involve collaboration with researchers at FHI and with outside researchers in AI or computer science. You will co-publish technical work at major conferences, carry out collaborative research projects, and maintain relationships with relevant research labs and key individuals.

The ideal candidate will have a Bachelor’s and/or Master’s degree, along with experience contributing to publications in AI/machine learning or in closely related fields. Exceptional candidates with a research background in nearby fields (statistics, computer science, mathematics, neuroscience, cognitive science, physics) and a demonstrated interest in AI Safety will also be considered. For those without an AI Safety background, FHI will provide mentoring and support.

Candidates should apply via this link and must submit a CV, supporting statement, references and a short research proposal as part of their application. Applications received through any other channel will not be considered.

The closing date for applications is 12.00 midday on 30th April 2018. Please contact fhiadmin@philosophy.ox.ac.uk with questions about the role or application process.

Research Fellow in Macrostrategy

Applications are invited for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. This is a fixed-term post for 24 months from the date of appointment, located at the FHI offices in the beautiful city of Oxford.

Reporting to the Director of Research at the Future of Humanity Institute, the successful candidate will be responsible for identifying crucial considerations for improving humanity’s long-run potential. The Research Fellow will be evaluating strategies that could reduce existential risk, particularly with respect to the long-term outcomes of technologies such as artificial intelligence.

The postholder’s main responsibilities will include: contributing to the development of the research agenda and conducting individual research; collaborating with partner institutions and research groups; assisting in the completion of research for peer-reviewed publications; disseminating research findings by participating in seminars, lectures, and other public meetings; small-scale project management; and providing guidance to junior colleagues, as well as developing ideas for research income and new research methodologies.

Applicants will be familiar with existing literature in existential risk and related fields or will have other equivalent evidence of outstanding research capability. Interdisciplinary research experience may be an advantage.

FHI’s work in the area of macrostrategy includes Nick Bostrom’s Superintelligence, the book Global Catastrophic Risks, and this paper on the strategic implications of openness in AI development.

Candidates should apply via this link and must submit a CV, supporting statement, references and a short research proposal as part of their application. Applications received through any other channel will not be considered.

The closing date for applications is 12.00 midday on 18th April 2018. In case of any questions about the role and the application process, please contact fhiadmin@philosophy.ox.ac.uk.

Website and Communications Officer

FHI is excited to invite applications for a full-time Website and Communications Officer. The post is fixed-term for 12 months from the date of appointment.

The role holder will be responsible for developing and implementing a communications strategy for all activities of the institute. S/he will develop and maintain FHI’s website and social media presence, design proposals and research reports using tools like InDesign or Photoshop, and disseminate FHI’s research findings to a wider audience. The responsibilities will involve engaging with FHI’s researchers, online audience, donors, partner institutes and collaborators.

The ideal candidate will have outstanding written communication skills and attention to detail. S/he should be able to quickly assimilate varied and complex information related to job tasks, have a flexible and approachable manner, and be able to work calmly and efficiently under pressure of deadlines. A cheerful attitude, the ability to work independently and prioritize own work, and the ability to communicate with a variety of audiences effectively and with tact are very important for success in the role.

Candidates should apply via this link, and must submit a CV, a supporting statement, a sample press release (as described in the job description), and links to websites they created or co-created as part of their application. The closing date for applications is 12.00 midday on 6th April 2018.

AI Policy and Governance Internship

Through research and policy engagement, the Governance of AI Program strives to steer the development of artificial intelligence for the common good. The Governance of AI Program is based at the University of Oxford’s Future of Humanity Institute, in close collaboration with Yale University. We track contemporary applications of AI in justice, the economy, cybersecurity, and the military, and take seriously the immediately pressing issues they pose to transparency, fairness, accountability, and security. Our particular focus, however, is on the challenges arising from transformative AI: advanced AI systems whose impact may be as profound as the industrial revolution.

Our work looks at:

    • trends, causes, and forecasts of AI progress;
    • transformations of the sources of wealth and power;
    • global political dimensions of AI-induced unemployment and inequality;
    • risks and dynamics of international AI races;
    • possibilities for global cooperation;
    • associated emerging technologies such as those involving crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
    • global public opinion, values, and ethics; and
    • long-run possibilities for beneficial global governance of advanced AI.

To apply, please email your application materials to fhijobs@philosophy.ox.ac.uk, with “Application: AI Policy and Governance Internship” as the subject line.

AI Safety and Reinforcement Learning Internship Programme 2018

FHI seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Cooperative Inverse Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Learning the Preferences of Bounded Agents, and Safely Interruptible Agents.

Applicants should have a background in machine learning, computer science, mathematics, or other related fields. Previous research experience in computer science (particularly in machine learning) is desirable but not required.

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

Internships are 2.5 months or longer. Summer internships take place between April and September 2018 (dates are flexible within that range). Applications for Summer internships are due by February 10. If you are interested in interning after September 2018, please indicate that on your application.

Selection Criteria

    • You should be fluent in English.
    • You must be available to come to Oxford for approximately 12 weeks (please indicate the period when you would be available when you apply).

To apply, please send a CV and a short statement of interest (including relevant experience in machine learning and any other programming experience) to fhijobs@philosophy.ox.ac.uk.

For more details, please visit http://www.fhi.ox.ac.uk/vacancies/

Leverhulme Centre for the Future of Intelligence

The Leverhulme Centre for the Future of Intelligence (CFI) is a new, highly interdisciplinary research centre, addressing the challenges and opportunities of future development of artificial intelligence (AI), in both the short and long term. Funded by the Leverhulme Trust for 10 years, it is based in Cambridge, with partners in Oxford, Imperial College, and UC Berkeley. The Centre will have close links with industry partners in the AI field, and with policymakers, as well as with many academic disciplines. It will also work closely with a wide international network of researchers and research institutes.

Research Fellow

The Interdisciplinary Ethics Research Group requires a Research Fellow (three-year fixed-term contract) to support Associate Professor Keith Hyams on his Leverhulme Project Grant entitled Anthropogenic Global Catastrophic Risk: The Challenge of Governance. This project studies human-induced risk that threatens sustained and wide-scale loss of life and damage to civilisation across the globe. Central to the danger posed by future Anthropogenic Global Catastrophic Risks (AGCRs) is the problem that technological progress and uptake have proceeded much more rapidly than commensurate understanding and implementation of effective governance. The project aims to address this deficit by investigating the key challenges of governance for AGCRs, and by identifying practical and ethical guidelines within which new governance solutions for specific AGCRs can be advanced. It will focus in particular on risks arising from biotechnology, nanotechnology, and artificial intelligence.

Applications from female candidates and those from a minority ethnic background are especially welcome as these groups are currently underrepresented within the Department.

Job Description

The Research Fellow will work closely with Professor Keith Hyams (PAIS) who is the Principal Investigator (PI) of the Leverhulme project ‘Anthropogenic Global Catastrophic Risk: The Challenge of Governance’. The postholder will also work with the Co-Is, Professor Sebastien Perrier (Chemistry), Professor Nathan Griffiths (Computer Science), and Professor John McCarthy (Life Sciences). The postholder will be actively involved in both research and administrative tasks.

Duties and Responsibilities: Research tasks include, but are not limited to: conducting literature surveys; writing single-authored and co-authored research and policy papers on the governance of global catastrophic risk; co-editing a collection of papers; taking an active part in the design, implementation, and analysis of research methodologies such as interviews, surveys, focus groups, and document analysis; attending workshops, conferences, and other scientific gatherings relevant to the project; and taking part in the dissemination of results. Administrative tasks include, but are not limited to: coordination among project team members, relevant University of Warwick offices, and other beneficiaries; organizing workshops and meetings; and updating the project website.

Closing Date – 23 May 2018

For more details, visit here!

Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks threatening human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.

An existential risk is one that threatens the existence of our entire species.  The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between philosopher Huw Price, cosmologist Martin Rees, and software entrepreneur Jaan Tallinn — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, and to develop protocols for the investigation and mitigation of technology-driven existential risks.

Postdoctoral Research Associate

The Centre for the Study of Existential Risk (CSER) is advertising a 9-month Postdoctoral Research Associate position on ‘Horizon-Scanning and Foresight for Extreme Technological Risks’ in our Managing Extreme Technological Risks programme (funded by the Templeton World Charity Foundation).

Work will include developing and refining our web-based register of extreme technological risk (http://www.x-risk.net), including leading an associated expert elicitation process. More broadly, the postholder will investigate the utility of a range of horizon-scanning and foresight techniques and methodologies for early detection of extreme technological risks and, where appropriate, how these might be matched with management options.

The closing date for applications is Tuesday 10 April 2018, with interviews on 20 April 2018.

See job advert at http://www.jobs.cam.ac.uk/job/16872/ for details.

The Open Philanthropy Project

The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give.

We are currently hiring for the following roles:

Grants Associate

We are seeking a Grants Associate to help process philanthropic grants totaling more than $100 million per year. The Grants Associate will be responsible for collecting the information needed to process each grant, liaising between Open Philanthropy staff and grantee organizations, and responding pragmatically to unanticipated issues. We are seeking someone who is interested in effective philanthropy and has excellent attention to detail, organizational skills, and communication skills.

Responsibilities include:

    • Following processes to collect information about each grant, track the grant in our internal systems, and ensure that it is progressing in a timely manner
    • Making suggestions to improve those processes and systems
    • Using your knowledge of Open Philanthropy’s principles and goals, and your judgment, to handle unexpected issues that arise
    • Working responsively, switching between tasks and re-ordering priorities based on new information
    • Miscellaneous additional duties depending on individual preferences and abilities

For more details please visit here!

Operations Associate

The Open Philanthropy Project is seeking an Operations Associate who is interested in contributing to an organization dedicated to effective philanthropy. Responsibilities are primarily tasks relating to external communications and internal coordination, grant tracking and processing, and other operational projects that arise.

Responsibilities include:

    • Perform administrative tasks including data entry, answering inbound email, etc.
    • Help with the publishing process for the pages we write about our grants
    • Manage communication with external parties and route internally as needed
    • Help with administrative processes around grant management
    • A variety of additional duties depending on organizational needs and individual preferences and abilities

For more details please visit here!

Research Analyst

The Open Philanthropy Project seeks to hire several Research Analysts in 2018. We are looking for exceptional generalists committed to doing as much good as possible. We intend to invest heavily in training and mentoring these hires, in the hopes that over the long run they will have the potential to become core contributors to the organization.

Core Research Analyst duties are described in the full posting, linked below.

We recommend submitting applications by April 15, 2018.

For more details please visit here!

Analyst Specializing in Potential Risks from Advanced Artificial Intelligence

The Open Philanthropy Project seeks to hire people to specialize in key analyses relevant to potential risks from advanced artificial intelligence. We are seeking to hire people to focus on any of the following areas:

    • AI alignment
    • AI timelines
    • AI governance and strategy

Applicants who have extremely strong existing qualifications for these roles should apply directly for them. However, we also note that the Research Analyst role is a possible route to the roles listed here.

For more details please visit here!

Global Catastrophic Risk Institute

The Global Catastrophic Risk Institute (GCRI) is a nonprofit think tank specializing in global catastrophic risk (GCR). GCRI works with researchers from many academic disciplines and professionals from many sectors, and leads research, education, and professional networking on GCR. GCRI’s research aims to identify and assess the most effective ways of reducing the risk of global catastrophe, as well as the issues raised by GCR. GCRI’s education work aims to raise awareness and understanding of global catastrophic risk among students, professionals, and, most of all, the general public.

No current job postings for GCRI, but you can still get involved!

80,000 Hours

80,000 Hours is an Oxford, UK-based organization that conducts research on careers with positive social impact and provides career advice. It provides this advice online, through one-on-one advice sessions, and through a community of like-minded individuals.

We’re not currently focused on hiring, though we’re always interested in hearing from people dedicated to our mission who’d like to join the team full-time. If you’d like to express interest in joining, fill out this short form.

Unfortunately we don’t offer internships or part-time volunteer positions.