Current Job Postings From FLI and Our Partner Organizations:
BERI’s mission is to improve human civilization’s long-term prospects for survival and flourishing. Its main strategy is to identify technologies that may pose significant civilization-scale risks, and to promote and provide support for research and other activities aimed at reducing those risks.
BERI is currently primarily seeking full-time employees. If you’re interested in working with us, please submit an application form (linked on each job posting below).
Start date: ASAP
Status: Full- or part-time employee, paid hourly
Compensation: $30-$90/hr, depending on experience and output
Location: San Francisco Bay Area
Reports to: Executive Director
BERI is seeking to hire a machine learning engineer to collaborate with the Center for Human Compatible AI (CHAI) under UC Berkeley professor Stuart Russell. Pending final evaluation from CHAI, the successful candidate(s) will be offered a 1-2 year visiting scholar position at UC Berkeley to work with Professor Stuart Russell’s research group (CHAI’s Listing), alongside Research Scientist Andrew Critch, and with opportunities to collaborate with CHAI’s co-Principal Investigators at Berkeley (Pieter Abbeel, Anca Dragan, Tania Lombrozo), Cornell (Bart Selman, Joe Halpern), Michigan (Michael Wellman, Satinder Singh) and Princeton (Tom Griffiths), as well as with groups at Cambridge, Oxford, and Imperial College through the Leverhulme Centre for the Future of Intelligence. As global demand for AI safety research increases, we expect the experience gained from this work will be valued internationally.
To read more about why we are interested in hiring machine learning engineers, see this blog post.
We are especially interested in applicants who can take initiative in finding ways to help out with research at CHAI. This role involves figuring out what would be helpful for the research team and then doing it.
- Solid software engineering skills across multiple languages, ideally including Python and C/C++
- Experience with machine learning software packages (e.g. TensorFlow, PyTorch)
- Practical experience building machine learning or AI systems. This could be demonstrated by professional work experience, previous research papers or open-source contributions
- Strong analytical and problem-solving skills
- Excellent technical communication skills, including the ability to explain complex technical concepts and to collaborate effectively with fellow engineers and researchers
- Familiar with core CS concepts such as common data structures and algorithms
- Comfortable conducting design and code reviews
- Prior research or research engineering experience
- Written work on ML or AI, including technical blog posts or publications in major conferences or journals
- Distributed systems and basic DevOps experience to manage in-house and cloud servers for experiments (e.g. Terraform/Chef, Kubernetes/Mesos, Docker)
- BS/BA, MS, or ideally PhD in computer science, data mining, machine learning, information retrieval, recommendation systems, natural language processing, statistics, math, engineering, operations research, or other quantitative discipline
- Time-off (paid vacation, holidays, sick leave, bereavement leave, & parental leave)
- Generous professional development policy
- Health insurance
- Semi-flexible work schedule including hours, location, and unpaid vacation policies
BERI is proud to be an Equal Employment Opportunity employer. Our mission to improve human civilization’s long-term prospects for survival and flourishing is in service of all of humanity, and is incompatible with unfair discrimination practices that would pit factions of humanity against one another. We do not discriminate against qualified employees or applicants based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, sexual preference, marital status, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, or any other characteristic protected by federal or state law or local ordinance. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.
How would we feed everyone if the sun were blocked or if there were a significant disruption to industry? ALLFED is working on planning, preparedness, and research into practical food solutions so that, in the event of a global catastrophe, we can respond quickly, save lives, and reduce the risk to civilization.
Food storage might seem like the obvious solution, but it is very expensive, so we are researching alternative food sources that can be scaled up quickly and that don’t require the sun. Ideally these catastrophes would never happen, and we support efforts to avoid them. Our research focuses on global catastrophic events rather than smaller-scale disasters, but some of it may have implications for how we deal with current disasters. We also focus on events that people survive, rather than on human extinction events.
No current job postings for ALLFED, but you can still get involved!
MIRI’s mission is to ensure that the creation of smarter-than-human intelligence has a positive impact. Our strategic focus is our technical research agenda on “superintelligence alignment,” composed of numerous subproblems in AI, logic, decision theory, and other fields.
Our technical research program currently employs four full-time research fellows, fosters collaboration with our research associates and others, runs several research workshops each year, and funds independently-organized MIRIx workshops around the world.
We’re seeking multiple research fellows who can work with our other research fellows to solve open problems related to superintelligence alignment, and prepare those results for publication. For those with some graduate study or a Ph.D. in a relevant field, the salary starts at $65,000 to $75,000 per year, depending on experience. For more senior researchers, the salary may be substantially higher, depending on experience. All full-time employees are covered by our company health insurance plan. Visa assistance is available if needed.
Ours is a young field. Our current research agenda includes work on tiling agents, logical uncertainty, decision theory, corrigibility, and value learning, but those subtopics do not exhaust the field. Other research topics will be seriously considered if you can make the case for their tractability and their relevance to the design of self-modifying systems which stably pursue humane values.
This is not a limited-term position. The ideal candidate has a career interest in these research questions and aims to develop into a senior research fellow at MIRI, or aims to continue these avenues of research at another institution after completing substantial work at MIRI.
Some properties you should have
A formal degree in mathematics or computer science is not required, but is recommended.
For more details, please visit here!
The Machine Intelligence Research Institute is looking for highly capable software engineers to directly contribute to our work on the AI alignment problem, with a focus on projects related to machine learning. We’re seeking engineers with extremely strong programming skills who are passionate about MIRI’s mission and looking for challenging and intellectually engaging work. In this role you will work closely with our research team to: create and run novel coding experiments and projects; build development infrastructure; and rapidly prototype, implement, and test AI alignment ideas related to machine learning.
Some qualities of the ideal candidate:
- Comfortable programming in different languages and frameworks.
- Has mastery in at least one technically demanding area.
- Machine learning experience is not a requirement, though it is a plus.
- Able to work with mathematicians on turning mathematical concepts into elegant code in a variety of environments and languages.
- Able to work independently with minimal supervision, and in team/group settings.
- Highly familiar with basic ideas related to AI alignment.
- Residence in (or willingness to move to) the Bay Area. This job requires working directly with our research team, and won’t work as a remote position.
- Enthusiasm about the prospect of working at MIRI and helping advance the field of AI alignment research.
Our hiring process tends to involve a number of sample tasks and probationary hires, so we encourage you to apply sooner rather than later.
For questions or comments, email firstname.lastname@example.org.
The Machine Intelligence Research Institute is looking for a very specialized autodidact to keep us up to date on developments in machine learning—a “living library” of new results.
ML is a fast-moving and diverse field, making it a challenge for any group to stay updated on all the latest and greatest developments. To support our AI alignment research efforts, we want to hire someone to read every interesting-looking paper about AI and machine learning, and keep us abreast of noteworthy developments, including new techniques and insights.
This is a new position for a kind of work that isn’t standard. Although we hope to find someone who can walk in off the street and perform well, we’re also interested in candidates who think they might take three months of training to meet the requirements.
Examples of the kinds of work you’ll do:
- Read through archives and journals to get a sense of literally every significant development in the field, past and present.
- Track general trends in the ML space—e.g., “Wow, there sure is a lot of progress being made on Dota 2!”—and let us know about them.
- Help an engineer figure out why their code isn’t working—e.g., “Oh, you forgot the pooling layer in your convolutional neural network.”
- Answer/research MIRI staff questions about ML techniques or the history of the field.
- Share important developments proactively; researchers who haven’t read the same papers as you often won’t know the right questions to ask unprompted!
The Machine Intelligence Research Institute (MIRI) is accepting applications for a full-time research fellow to develop theorem provers with self-referential capabilities, beginning by implementing a strongly typed language within that very language. The goal of this research project will be to help us understand autonomous systems that can prove theorems about systems with similar deductive capabilities. Applicants should have experience programming in functional programming languages, with a preference for languages with dependent types, such as Agda, Coq, or Lean.
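The project’s starting point — implementing a strongly typed language within a dependently typed host — can be sketched in Lean. The following is a minimal, illustrative example, not part of the posting and with all names our own: an intrinsically typed embedding of a tiny simply typed calculus, where the host’s dependent types make ill-typed object-language terms unrepresentable by construction.

```lean
-- Object-language types for a tiny simply typed calculus.
inductive Ty where
  | bool  : Ty
  | arrow : Ty → Ty → Ty

-- Typed de Bruijn variables: evidence that type τ occurs in context Γ.
inductive Var : List Ty → Ty → Type where
  | zero : {Γ : List Ty} → {τ : Ty} → Var (τ :: Γ) τ
  | succ : {Γ : List Ty} → {σ τ : Ty} → Var Γ τ → Var (σ :: Γ) τ

-- Intrinsically typed terms: the host language's type checker
-- rejects any attempt to build an ill-typed object term.
inductive Term : List Ty → Ty → Type where
  | var : {Γ : List Ty} → {τ : Ty} → Var Γ τ → Term Γ τ
  | tt  : {Γ : List Ty} → Term Γ Ty.bool
  | ff  : {Γ : List Ty} → Term Γ Ty.bool
  | lam : {Γ : List Ty} → {σ τ : Ty} →
          Term (σ :: Γ) τ → Term Γ (Ty.arrow σ τ)
  | app : {Γ : List Ty} → {σ τ : Ty} →
          Term Γ (Ty.arrow σ τ) → Term Γ σ → Term Γ τ

-- Interpret object-language types as host-language types, so that
-- evaluation can reuse the host's own semantics.
def Ty.denote : Ty → Type
  | Ty.bool      => Bool
  | Ty.arrow a b => Ty.denote a → Ty.denote b
```

Scaling this idea up — so that the embedded language is expressive enough to model the host’s own deductive capabilities — is the self-referential step the posting describes.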
Centre for Effective Altruism
The Centre for Effective Altruism helps to grow and maintain the effective altruism movement. Our mission is to
- create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible; and
- make the advancement of the wellbeing of all a worldwide intellectual project, doing for the pursuit of good what the Scientific Revolution did for the pursuit of truth.
As an Assistant Event Producer (contract role) at the Centre for Effective Altruism, you will report to Amy Labenz and will work to ensure the success of the Effective Altruism Global San Francisco 2019 conference, taking place in June or July 2019 (to be confirmed) for approximately 600 people. You will be responsible for assisting with logistical planning for the events, managing vendors and ensuring the smooth production of the event.
- Assisting with project plans for EA Global and ensuring that key components of the conference are completed in a timely manner.
- Assisting with logistical tasks including liaising with the venue, managing vendors (catering, AV etc) and helping to produce the agenda.
- Updating the event website and online agenda.
- Contributing to the accounting process for the event.
- Serving as a logistical point-of-contact during the event and in the days and weeks leading up to the event.
- Contributing to the management of event registration and feedback systems.
- Creating and implementing additional systems and tools to improve coordination and information sharing between events.
- Generally ensuring that the event goes well and that all necessary tasks are completed.
- Long-term commitment to developing skills in logistics and operations.
- Experience running large, logistically complex events.
- Excellent organization skills, including a demonstrated ability to juggle multiple projects with competing deadlines and priorities.
- Robust intrinsic motivation. We’re looking for someone who can act autonomously with little guidance, who can be tenacious in ensuring that things get done.
Applications for this position must be received no later than Sunday, November 4th 2018, 12:00 am GMT
How to apply:
Email your CV/resume and cover letter to Katie Glass at email@example.com.
For more details, please visit here!
Future of Humanity Institute
The Future of Humanity Institute is a leading research centre in the University of Oxford looking at big-picture questions for human civilization. We seek to focus our work where we can make the greatest positive difference. Our researchers regularly collaborate with governments from around the world and key industry groups working on artificial intelligence. To read more about the institute’s research activities, please see http://www.fhi.ox.ac.uk/research/research-areas/.
The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Learning the Preferences of Ignorant, Inconsistent Agents, Safe Reinforcement Learning via Human Intervention, Deep RL from Human Preferences, and the Building Blocks of Interpretability. Past interns have collaborated with FHI researchers on a range of publications.
Applicants should have a background in machine learning or computer science, or in a related field (statistics, mathematics, physics, cognitive science). Previous research experience in machine learning or computer science is desirable but not required.
This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.
Internships are for 2.5 months or longer. We are now accepting applications for internships starting in or after September 2018 on a rolling basis. Interns are usually based in Oxford but remote internships are sometimes possible. (As per University guidelines, candidates must be fluent in English.)
To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship and for permission to share your application materials with partner organizations. Please direct questions about the application process to firstname.lastname@example.org.
For more details, please visit http://www.fhi.ox.ac.uk/vacancies/
Leverhulme Centre for the Future of Intelligence
The Leverhulme Centre for the Future of Intelligence (CFI) is a new, highly interdisciplinary research centre, addressing the challenges and opportunities of future development of artificial intelligence (AI), in both the short and long term. Funded by the Leverhulme Trust for 10 years, it is based in Cambridge, with partners in Oxford, Imperial College, and UC Berkeley. The Centre will have close links with industry partners in the AI field, and with policymakers, as well as with many academic disciplines. It will also work closely with a wide international network of researchers and research institutes.
CFI is seeking a full-time researcher to design reinforcement learning (RL) tasks and test their suitability for use in a competition by running machine learning (ML) algorithms on them. The tasks will be written in MALMO, Microsoft’s AI experimentation platform built on top of Minecraft. The project involves translating experiments from the animal cognition literature, used to test different aspects of intelligence, into AI problems. The translated problems need to be tested with common ML baselines to ensure that they are solvable, and to provide data for comparison. Results will be used to improve the competition design and update problem specifications where necessary.
The researcher will join the Animal-AI Olympics team on the Kinds of Intelligence project at CFI and have the opportunity to become involved with all aspects of the competition, including design and organisation. The position is based at Imperial College London but will include integration with the main CFI team at Cambridge. The applicant will also collaborate on the research papers that will come out of the competition. The competition is set to run during 2019, with the final results announced at the end of the year.
- Master’s degree or PhD in computing or a related field.
- Experience experimenting with standard deep reinforcement learning algorithms.
- Familiarity with Python and TensorFlow (or equivalent).
- Interest in the differences between biological and artificial intelligence.
- Experience with cognitive experiments, especially those testing aspects of intelligence commonly considered challenging for current AI approaches.
- Preference will be given to applicants with a proven research record and publications in the relevant areas.
In addition to completing the online application, candidates should attach:
- A CV (max two pages)
- A one-page research statement indicating why you are interested in the above post and why your expertise is relevant.
For applications queries please contact Jamie Perrins: email@example.com
Centre for the Study of Existential Risk
The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks threatening human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly-developing technological power.
An existential risk is one that threatens the existence of our entire species. The Cambridge Centre for the Study of Existential Risk (CSER) — a joint initiative between philosopher Huw Price, cosmologist Martin Rees, and software entrepreneur Jaan Tallinn — was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, and to develop protocols for the investigation and mitigation of technology-driven existential risks.
The Centre for the Study of Existential Risk (CSER) invites applications for an Academic Programme Manager.
The Academic Programme Manager will play a central role in developing and supporting CSER’s research team, projects and activities. We seek an ambitious candidate with initiative and a broad intellectual range for a senior research associate level role combining academic and management responsibilities. They will work with CSER’s Directors and research team to lead a subset of CSER’s research projects and develop our overall profile, and to build and maintain our collaborative networks. This is a unique opportunity to play a guiding role in a world-class research centre as it enters an exciting period of growth.
Candidates will have a high level of education, with substantial experience at a postdoctoral level, or equivalent experience within a relevant setting (e.g. policy, industry, think tank or NGO).
The post-holder will be expected to:
- Plan and coordinate activities in CSER’s research programmes.
- Contribute to strategic planning and leadership for CSER and manage the use of research resources and budgets, alongside CSER’s project administrator and CRASSH.
- Organise and participate in meetings, workshops and conferences.
- Act as an ambassador for the Centre’s research, both within Cambridge and externally, engaging with academics, the media, policy-makers, or other external audiences.
- Conduct independent and collaborative research within CSER’s broad focus areas, to be published as papers in leading academic journals.
- Actively seek additional funding for the activities of the Centre, including identifying opportunities, and developing proposals.
The post-holder will be encouraged to set their own priorities within the role and to contribute to Centre strategy, and will work with a high degree of independence while supporting the activities and development of CSER’s postdoctoral researchers.
Fixed-term: The funds for this post are available for 3 years from the start date in the first instance.
The closing date for applications is 11 November 2018.
For full advert and links to further information and the application system:
The Open Philanthropy Project
The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.
Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give.
To help us continue to build a great organization, we are looking for a few Associates for our operations team. Responsibilities will encompass a wide range of areas based both on changing organizational needs and personal fit, but could include the following types of activities:
- Analyze, revise and improve our processes and policies (e.g. drafting a remote worker policy, streamlining our onboarding, speeding up our grantmaking, proposing new employee benefits).
- Plan and lead events, such as a multi-day off-site retreat for the Open Phil team
- Collaborate with external programmers to implement Salesforce tools, such as grants management software, CRM, contractor processing, etc.
- Develop a staff handbook and intranet
- Search for new office space and coordinate a move
- Improve the working environment to increase staff productivity and engagement
- Perform administrative tasks including answering inbound email, scheduling, booking travel, submitting reimbursements, data entry, etc.
- This is a full-time position based in San Francisco.
- Starting salary: $70,000 plus an annual 401k grant contribution of $10,500 and a competitive benefits package
Across the organization, our employees are challenged with meaningful work, have the resources for ongoing professional development and learning, and contribute to a collegial and engaging environment where they can thrive. We are committed to fostering a culture of inclusion and encourage individuals with diverse backgrounds and experiences to apply. We especially encourage applications from women, people of color, and individuals with disabilities who are excited about contributing to our mission.
Global Catastrophic Risk Institute
The Global Catastrophic Risk Institute (GCRI) is a nonprofit think tank specializing on the topic of global catastrophic risk (GCR). GCRI works with researchers from many academic disciplines and professionals from many sectors. GCRI leads research, education, and professional networking on GCR. GCRI research aims to identify and assess the most effective ways of reducing the risk of global catastrophe, as well as the issues raised by GCR. GCRI education aims to raise awareness and understanding about global catastrophic risk among students, professionals, and, most of all, the general public.
No current job postings for GCRI, but you can still get involved!
80,000 Hours is an Oxford, UK-based organization that conducts research on careers with positive social impact and provides career advice. It provides this advice online, through one-on-one advice sessions, and through a community of like-minded individuals.
We’re not currently focused on hiring, though we’re always interested in hearing from people dedicated to our mission who’d like to join the team full-time. If you’d like to express interest in joining, fill out this short form.
Unfortunately we don’t offer internships or part-time volunteer positions.